CN114663842B - Image fusion processing method and device, electronic equipment and storage medium - Google Patents

Image fusion processing method and device, electronic equipment and storage medium

Info

Publication number
CN114663842B
CN114663842B (application CN202210571779.7A / CN202210571779A)
Authority
CN
China
Prior art keywords
image
image data
processing
data
fusion
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210571779.7A
Other languages
Chinese (zh)
Other versions
CN114663842A (en)
Inventor
张乐
周承涛
杨作兴
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen MicroBT Electronics Technology Co Ltd
Original Assignee
Shenzhen MicroBT Electronics Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen MicroBT Electronics Technology Co Ltd filed Critical Shenzhen MicroBT Electronics Technology Co Ltd
Priority to CN202210571779.7A priority Critical patent/CN114663842B/en
Publication of CN114663842A publication Critical patent/CN114663842A/en
Application granted granted Critical
Publication of CN114663842B publication Critical patent/CN114663842B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/80 Geometric correction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20084 Artificial neural networks [ANN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20092 Interactive image processing based on input by user
    • G06T2207/20104 Interactive definition of region of interest [ROI]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20212 Image combination
    • G06T2207/20221 Image fusion; Image merging

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Image Processing (AREA)

Abstract

The present disclosure relates to an image fusion processing method, apparatus, device, and storage medium. The image fusion processing method includes: acquiring first original image data; performing image processing on the first original image data to obtain first image data; obtaining, from the first original image data, second original image data containing only a region of interest; performing feature enhancement processing on the second original image data to obtain second image data; and performing image fusion on the first image data and the second image data to obtain fused image data. On top of the conventional image processing flow, the disclosed scheme applies dedicated feature enhancement processing to the image data of the region of interest, so that the features in the region of interest are not affected by the global processing of the image. This resolves the problem that local and global feature expression cannot both be preserved during image processing, and in practice enables end-to-end local feature enhancement of surveillance images.

Description

Image fusion processing method and device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular, to an image fusion processing method and apparatus, an electronic device, and a storage medium.
Background
In fields such as target tracking, detection and alarm, traffic-violation enforcement, parking charging and highway tolling, a specific target in a surveillance image needs to be accurately identified. Because of factors such as shooting angle, illumination, weather and lens contamination, the specific target in the original surveillance image may be poorly imaged in terms of sharpness, brightness and noise, which ultimately degrades the accuracy of its identification. Therefore, after the original surveillance image is obtained, it needs to be processed so that sharpness, brightness and noise are kept within acceptable ranges.
However, when processing the original surveillance image, local and global feature expression cannot both be preserved. Take brightness adjustment as an example. In an original surveillance image with strong light-dark contrast, when the specific target lies in a dark region, the brightness of the whole image must be raised to bring out the target's features; in doing so, the already bright regions are brightened further and lose their original detail. Conversely, when the specific target lies in a bright region, the brightness of the whole image must be lowered to stop the excessive brightness from masking the target's detail features; in doing so, the dark regions become even darker and their original detail is lost. In particular, in some application scenarios several specific targets may lie in regions of different brightness at the same time, in which case no single brightness adjustment of the original surveillance image can satisfy the accurate identification of all of them.
Disclosure of Invention
In view of this, the present disclosure provides an image fusion processing method, an image fusion processing apparatus, an electronic device, and a storage medium, which obtain image data whose region of interest has undergone feature enhancement processing and ensure that, after the image data is processed, the features in the region of interest are not affected by the global processing of the image, thereby solving the problem that local and global feature expression cannot both be preserved during image processing.
According to an aspect of the embodiments of the present disclosure, there is provided an image fusion processing method, including:
acquiring first original image data;
performing image processing on the first original image data to obtain first image data;
obtaining second original image data containing only a region of interest from the first original image data;
performing feature enhancement processing on the second original image data to obtain second image data;
and carrying out image fusion on the first image data and the second image data to obtain fused image data.
Further, the first raw image data is generated by the acquisition of an image by an image acquisition device.
Further, the second original image data containing only the region of interest is obtained from the first original image data by using a target recognition algorithm.
Further, the target recognition algorithm comprises at least one of a pedestrian detection algorithm, a vehicle window detection algorithm and a license plate detection algorithm.
Further, before performing the feature enhancement processing on the second original image data, the image fusion processing method further includes:
and performing correction processing on the second original image data.
Further, the value of the correction parameter used in the correction processing is the value of the correction parameter used in the image processing process.
Further, the correction processing includes at least one of black level processing and white balance processing.
Further, the feature enhancement processing that converts the second original image data into the second image data is implemented with a local area enhancement network.
Further, the local area enhancement network comprises at least one image convolution processing component connected in sequence, at least one image deconvolution processing component connected in sequence after the at least one image convolution processing component, and an output layer connected after the at least one image deconvolution processing component, the number of image convolution processing components and the number of image deconvolution processing components being equal;
and the second original image data input into the local area enhancement network is processed by the at least one image convolution processing component, the at least one image deconvolution processing component and the output layer in sequence to obtain the second image data.
Further, each image convolution processing component comprises a convolution layer, a batch normalization layer and a first activation layer which are connected in sequence;
each image deconvolution processing component comprises a deconvolution layer, a second activation layer and a splicing layer which are connected in sequence;
the output layer includes a third convolutional layer.
Further, the image fusing the first image data and the second image data to obtain fused image data includes:
and replacing the second image data to the region of interest in the first image data to obtain the fused image data.
Further, after replacing the second image data to the region of interest in the first image data, the image fusion processing method further includes:
and based on a preset fusion weight value, carrying out image fusion on the image data in the preset width range inside the edge of the second image data and the first image data replaced by the image data in the preset width range.
Further, the image fusion processing method further includes:
and encoding the fused image data to obtain RGB data.
Further, the first original image data and the second original image data are both original image file data;
the image processing is image signal processing pipeline processing;
the first image data, the second image data and the fusion image data are YUV image data.
According to another aspect of the embodiments of the present disclosure, there is provided an image fusion processing apparatus including:
a data acquisition module configured to perform acquiring first raw image data;
an image processing module configured to perform image processing on the first raw image data to obtain first image data;
a region-of-interest data acquisition module configured to perform obtaining second raw image data containing only a region of interest from the first raw image data;
a feature enhancement module configured to perform feature enhancement processing on the second original image data to obtain second image data;
a fusion module configured to perform image fusion on the first image data and the second image data to obtain fused image data.
According to another aspect of the embodiments of the present disclosure, there is provided an electronic device including:
a processor;
a memory for storing executable instructions of the processor;
wherein the processor is configured to execute the executable instructions to implement the image fusion processing method as any one of the above.
According to another aspect of the embodiments of the present disclosure, there is provided a computer-readable storage medium, wherein at least one instruction of the computer-readable storage medium, when executed by a processor of an electronic device, enables the electronic device to implement the image fusion processing method as described in any one of the above.
With the image fusion processing method, apparatus, electronic device and storage medium of the embodiments of the present disclosure, after the first original image data is acquired, second original image data covering the region of interest is extracted from it, dedicated feature enhancement processing is applied to that second original image data to obtain feature-enhanced second image data, and this second image data is fused with the first image data obtained through the image processing. The result is fused image data in which the image processing has been completed globally while feature enhancement has been completed locally on the region of interest, so that the features in the region of interest are not affected by the processing applied to the image as a whole; the problem that local and global feature expression cannot both be preserved during image processing is thereby solved. In a practical surveillance scenario, the method is combined with the flow in which an original surveillance image in the form of original image file (RAW) data is turned into YUV data by image signal processing pipeline processing: the specific target is extracted from the RAW data and given its own feature enhancement processing, separate from the rest of the surveillance image, so that a RAW-to-YUV feature enhancement dedicated to the specific target is realized; the feature-enhanced YUV data of the specific target is then fused with the YUV data produced by the pipeline, achieving end-to-end local feature enhancement of the surveillance image.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the disclosure and are not to be construed as limiting the disclosure.
FIG. 1 is a flow diagram illustrating a method of image fusion processing in accordance with one illustrative embodiment;
FIG. 2 is a schematic illustration of a local area enhanced network architecture in accordance with an exemplary embodiment;
FIG. 3 is a flowchart illustrating an application scenario of a method of image fusion processing in accordance with an exemplary embodiment;
FIG. 4 is a schematic view of an image with a window area shown in accordance with an exemplary embodiment;
FIG. 5 is a diagram illustrating fused YUV data in the case of image fusion in accordance with an illustrative embodiment;
FIG. 6 is a block diagram of an image fusion processing apparatus according to an exemplary embodiment;
fig. 7 is a schematic structural diagram of an electronic device provided in an embodiment of the present disclosure.
Detailed Description
In order to make the technical solutions of the present disclosure better understood by those of ordinary skill in the art, the technical solutions in the embodiments of the present disclosure will be clearly and completely described below with reference to the accompanying drawings.
It should be noted that the terms "first," "second," and the like in the description and claims of the present disclosure and in the above-described drawings are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the disclosure described herein are capable of operation in sequences other than those illustrated or otherwise described herein. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
In a practical scene, the image processing algorithms of image acquisition devices such as cameras operate globally, that is, the entire captured image is processed, and it is difficult to give proper consideration to the image quality of a local region: if the local region is tuned well, problems may appear in other regions.
In the embodiments of the present disclosure, the existing image processing flow applied to the original image data collected by the image acquisition device is combined with the feature enhancement means for a local region provided herein, so as to solve the problems that the image quality of the local region is poor and that specific target features in the local region are easily lost.
Fig. 1 is a flowchart illustrating an image fusion processing method according to an exemplary embodiment, and as shown in fig. 1, the image fusion processing method according to the embodiment of the present disclosure mainly includes the following steps:
step 1, acquiring first original image data;
step 2, carrying out image processing on the first original image data to obtain first image data;
step 3, obtaining second original image data only containing the region of interest from the first original image data;
step 4, performing feature enhancement processing on the second original image data to obtain second image data;
step 5, fusing the first image data and the second image data to obtain fused image data.
As can be seen from the above steps, in the image fusion processing method of the embodiments of the present disclosure, after the first original image data is acquired, second original image data covering the region of interest is obtained from it; feature enhancement processing is applied specifically to this second original image data to obtain feature-enhanced second image data, which is then fused with the first image data obtained through the image processing. The fused image data has therefore been image-processed globally and feature-enhanced locally within the region of interest, the features in the region of interest are not affected by the processing applied to the image as a whole, and the problem that local and global feature expression cannot both be preserved during image processing is solved. The region of interest may also be referred to as the ROI (Region Of Interest).
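The following Python sketch merely chains the five steps for illustration; the callables isp_pipeline, detect_roi and enhance_roi are hypothetical placeholders for the processing stages described in this disclosure, and the box format (left, top, width, height) is an assumption.

```python
from typing import Callable, Tuple
import numpy as np

def image_fusion_processing(
    first_raw: np.ndarray,                                          # step 1: first original image data (RAW)
    isp_pipeline: Callable[[np.ndarray], np.ndarray],               # step 2: global image processing
    detect_roi: Callable[[np.ndarray], Tuple[int, int, int, int]],  # step 3: target recognition
    enhance_roi: Callable[[np.ndarray], np.ndarray],                # step 4: local feature enhancement
) -> np.ndarray:
    """Chain steps 1-5 of the method; the callables are supplied by the caller."""
    first_image = isp_pipeline(first_raw)                 # step 2 -> first image data
    x, y, w, h = detect_roi(first_raw)                    # step 3: box as (left, top, width, height)
    second_raw = first_raw[y:y + h, x:x + w]              #          second original image data (ROI only)
    second_image = enhance_roi(second_raw)                # step 4 -> second image data
    fused = first_image.copy()
    fused[y:y + h, x:x + w] = second_image                # step 5: fuse (replacement variant)
    return fused
```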
In some embodiments, the first raw image data is generated by an image acquisition device capturing an image. The image acquisition device may be, for example, a camera or a video camera. With existing image acquisition technology, the original image data produced by such a device is original image file data, i.e. RAW data, which is the most original, unprocessed image data and contains all the detail of the captured image. On this basis, in some embodiments the first original image data and the second original image data are both RAW data, the image processing is image signal processing pipeline (ISP-Pipeline) processing, and the first image data, the second image data and the fused image data are all YUV image data.
In the prior art, in CMOS (Complementary Metal Oxide Semiconductor) cameras and CCD (Charge Coupled Device) cameras, the image data output by the CMOS or CCD sensor is almost always RAW data. RAW data cannot be viewed directly and must be converted into YUV or RGB format before it is supported by conventional image processing software; images in YUV or RGB format in turn often need to be further converted to JPEG format for storage. This conversion process is generally called ISP (Image Signal Processing): in the broad sense ISP also covers JPEG and H.264/265 image compression, while in the narrow sense it covers only the conversion of RAW data into YUV or RGB data.
The ISP processes image data like a pipeline and may therefore also be referred to as an ISP-Pipeline. A typical ISP-Pipeline consists of a series of processing modules connected end to end; the image data is passed from one module to the next until all algorithmic processing is completed, and the result finally streams out of the last stage of the pipeline in YUV or RGB form. Each processing module in the ISP-Pipeline has its own parameters, for example the black level parameter of the black level module and the white balance parameter of the white balance module.
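As an illustration of this module-by-module structure, a minimal sketch follows; the stage list named in the comment is an assumption about a typical pipeline, not the module set of any specific ISP.

```python
from typing import Callable, List
import numpy as np

def run_isp_pipeline(raw: np.ndarray,
                     stages: List[Callable[[np.ndarray], np.ndarray]]) -> np.ndarray:
    """Pass the image data through a chain of processing modules, one after the other."""
    data = raw
    for stage in stages:      # the output of each module is the input of the next
        data = stage(data)
    return data

# Illustrative usage: each entry stands in for one ISP module, e.g.
# stages = [black_level, white_balance, demosaic, noise_reduction, raw_to_yuv]
# yuv = run_isp_pipeline(raw_frame, stages)
```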
Based on the above-described embodiments in which the first RAW image data and the second RAW image data are both RAW image file (RAW) data, the image processing is image signal processing Pipeline (ISP-Pipeline) processing, and the first image data, the second image data, and the fused image data are YUV image data, the image signal processing Pipeline processing in some embodiments of the present disclosure refers to an ISP-Pipeline process of obtaining YUV image data from RAW data.
Based on the spirit and principles of the embodiments of the present disclosure, in other embodiments, the image fusion processing method of the embodiments of the present disclosure is also suitable for a scheme for processing other image data formats besides the original image file data, for example, including: a processing scheme of obtaining image data in RGB format by ISP processing from raw image data in YUV format, a processing scheme of obtaining image data in JPEG format by ISP processing from raw image data in RGB format, and the like. In the embodiments of the present disclosure, YUV data obtained from original image file data is described as an example, and it does not mean that the image fusion processing method of the present disclosure is only applicable to an ISP process for obtaining YUV data from original image file data.
In some embodiments, step 3 is implemented using a target recognition algorithm.
In some embodiments, the target recognition algorithm includes at least one of a pedestrian detection algorithm, a vehicle window detection algorithm and a license plate detection algorithm. For example, in a license plate detection scene, a license plate detection algorithm can be used to obtain second original image data containing a license plate from the first original image data; the region of interest in this scene is the region of the first original image data where the license plate is located. In a person identification scene, a pedestrian detection algorithm can be used to obtain second original image data containing a person; the region of interest is the region where the person is located. In an in-vehicle information detection scene (for example, detecting whether an occupant is wearing a seat belt), a vehicle window detection algorithm can be used to obtain second original image data containing a vehicle window; the region of interest is the region where the window is located, including the in-vehicle image visible through the window.
In some embodiments, according to application requirements, one detection algorithm or several detection algorithms may be used simultaneously in step 3 to obtain one or more pieces of second original image data from the first original image data, each piece corresponding to a different region of interest in the first original image data. For example, in a person recognition scene in which several persons in the same frame need to be detected, a pedestrian detection algorithm is used to obtain several pieces of second original image data, each corresponding to a different person appearing in the first original image data. In a scene that combines person recognition and license plate recognition (for example, monitoring roadside parking and pedestrians), a pedestrian detection algorithm and a license plate detection algorithm may be used to obtain second original image data whose region of interest is a person and second original image data whose region of interest is a license plate; where several persons and/or several license plates appear, there may be several pieces of second original image data. In a scene that combines person recognition, license plate recognition and vehicle window recognition (for example, intersection monitoring of pedestrians, vehicles and occupants for traffic compliance, such as whether pedestrians and vehicles obey traffic signals, whether occupants have their seat belts unfastened, or whether the driver is making a phone call while driving), a pedestrian detection algorithm, a license plate detection algorithm and a vehicle window detection algorithm may be used to obtain second original image data whose regions of interest are, respectively, a person, a license plate and a vehicle window; again, several pieces of second original image data may be obtained when several persons, license plates and/or windows appear. A sketch of collecting several such regions of interest is given below.
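The sketch below illustrates collecting several regions of interest from one RAW frame with one or more detectors; the detector functions and the (left, top, width, height) box format are assumptions for illustration only.

```python
from typing import Callable, List, Sequence, Tuple
import numpy as np

Box = Tuple[int, int, int, int]  # assumed box format: (left, top, width, height)

def extract_rois(first_raw: np.ndarray,
                 detectors: Sequence[Callable[[np.ndarray], List[Box]]]):
    """Run one or more detectors (pedestrian, license plate, window, ...) and crop
    one piece of second original image data per detected region of interest."""
    rois = []
    for detect in detectors:
        for (x, y, w, h) in detect(first_raw):
            rois.append(((x, y, w, h), first_raw[y:y + h, x:x + w]))
    return rois
```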
Image signal processing pipeline processing of original image file data involves the correction of various parameters. In the embodiments of the present disclosure, the second original image data is taken directly from the first original image data and has therefore not been through those corrections. In this case, before step 4, the image fusion processing method of the embodiments of the present disclosure further includes:
the second original image data is subjected to correction processing.
In some embodiments, the value of the correction parameter used for performing the correction processing on the second original image data is the value of the correction parameter used in the image processing process. For example, the value of the correction parameter used for performing the correction processing on the second original image data is the value of the correction parameter used for performing the image signal processing pipeline processing on the first original image data.
Since the feature enhancement processing performed on the second original image data in step 4 is not exactly the same as the image processing performed on the first original image data in step 2, in some embodiments only part of the correction processing used in the image signal processing pipeline is applied to the second original image data. In this case, in some embodiments, the correction processing performed on the second original image data includes at least one of black level processing and white balance processing; preferably, it includes both. The value of the black level parameter used for the black level processing of the second original image data is the value of the black level parameter used in the image processing, i.e. in the image signal processing pipeline processing; likewise, the value of the white balance parameter used for the white balance processing of the second original image data is the value of the white balance parameter used in the image signal processing pipeline processing.
To enable end-to-end image processing, in some embodiments step 4 is implemented with a local area enhancement network, which is a neural network. Fig. 2 is a schematic diagram of the local area enhancement network structure according to an exemplary embodiment. As shown in Fig. 2, in some embodiments the local area enhancement network includes at least one image convolution processing component 201, at least one image deconvolution processing component 202 and an output layer 203: the image convolution processing components 201 are connected in sequence, the image deconvolution processing components 202 are connected in sequence after them, and the output layer 203 is connected after the image deconvolution processing components 202, the number of image convolution processing components 201 being equal to the number of image deconvolution processing components 202. The second original image data input into the local area enhancement network is processed by the image convolution processing components 201, the image deconvolution processing components 202 and the output layer 203 in turn to obtain the second image data.
In some embodiments, there are four image convolution processing components 201 and four image deconvolution processing components 202. This choice balances the parameter count of the local area enhancement network against the image quality: with fewer than four of each, the image quality achievable by the network may degrade, while with more than four the parameter count grows, which affects the performance of the device running the network and increases the time consumed by its modules. The protection scope of the present disclosure is, however, not limited to four of each; the number may be greater than four, for example six, eight or ten, and the deeper the local area enhancement network, i.e. the more image convolution processing components 201 and image deconvolution processing components 202 it has, the better the output image quality that can be obtained. The advantage of the architecture with four of each is that it ensures the image output quality of the network while keeping the number of network parameters small.
As shown in Fig. 2, each image convolution processing component 201 includes a convolutional layer, a batch normalization layer (BN layer) and a first activation layer connected in sequence; each image deconvolution processing component 202 includes a deconvolution layer, a second activation layer and a stitching layer (concat layer) connected in sequence; and the output layer 203 includes a third convolutional layer.
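A minimal PyTorch-style sketch of a network with this layout is given below. Only the overall structure (convolution components of conv, batch normalization and activation; deconvolution components of deconvolution, activation and concatenation; a convolutional output layer; equal numbers of both component types) follows the description above; the channel widths, kernel sizes, strides, ReLU activations and the choice of which earlier features feed the concat layers are assumptions.

```python
import torch
import torch.nn as nn

class LocalAreaEnhancementNet(nn.Module):
    """Sketch: `depth` convolution components (conv -> BN -> activation), `depth`
    deconvolution components (deconv -> activation -> concat with an earlier feature),
    and a convolutional output layer."""

    def __init__(self, in_ch: int = 4, out_ch: int = 3, base_ch: int = 16, depth: int = 4):
        super().__init__()
        chs = [base_ch * 2 ** i for i in range(depth)]        # e.g. 16, 32, 64, 128
        self.enc = nn.ModuleList()
        prev = in_ch
        for c in chs:
            self.enc.append(nn.Sequential(                    # image convolution processing component
                nn.Conv2d(prev, c, 3, stride=2, padding=1),
                nn.BatchNorm2d(c),
                nn.ReLU(inplace=True)))
            prev = c
        self.dec = nn.ModuleList()
        for c in reversed([in_ch] + chs[:-1]):                # features concatenated on the way up
            self.dec.append(nn.Sequential(                    # deconvolution layer + activation layer
                nn.ConvTranspose2d(prev, c, 4, stride=2, padding=1),
                nn.ReLU(inplace=True)))
            prev = 2 * c                                       # channels after the concat (stitching) layer
        self.out = nn.Conv2d(prev, out_ch, 3, padding=1)       # output layer (a final convolutional layer)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        skips, feat = [x], x
        for block in self.enc:
            feat = block(feat)
            skips.append(feat)
        skips.pop()                                            # the deepest feature is not used as a skip
        for block in self.dec:
            feat = torch.cat([block(feat), skips.pop()], dim=1)  # stitching (concat) layer
        return self.out(feat)

# Usage sketch: a RAW patch whose height and width are divisible by 2**depth, e.g.
# y = LocalAreaEnhancementNet()(torch.randn(1, 4, 128, 128))   # -> (1, 3, 128, 128)
```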
In the embodiments of the present disclosure, the local area enhancement network is a neural network that has already been trained.
In some embodiments, for second raw image data containing a different object, a trained local area enhancement network that performs feature enhancement for the different object is employed. For example, for second original image data containing a person, a trained local region enhancement network for feature enhancement of the person is adopted, and the training of the local region enhancement network is performed by using a training sample set of the person; aiming at second original image data containing the license plate, adopting a local area enhancement network which is trained and performs characteristic enhancement aiming at the license plate, wherein the training of the local area enhancement network is performed by utilizing a training sample set of the license plate; and for second original image data containing the car window, adopting a trained local area enhancement network for carrying out feature enhancement on the car window, wherein the training of the local area enhancement network is carried out by utilizing a training sample set of the car window.
In some embodiments, step 5 comprises: replacing the second image data into the region of interest in the first image data to obtain the fused image data. In this way, the obtained fused image data contains both the region of interest and all the information content outside it, ensuring the completeness of the information content of the final image as well as the feature enhancement of the region of interest.
In some embodiments, in order to prevent the feature-enhanced region of interest in the fused image data from jumping too sharply, in the display effect determined by the display parameters, relative to the other regions, which would make the edge of the region of interest too conspicuous, the image fusion processing method of the embodiments of the present disclosure may further include, after replacing the second image data into the region of interest in the first image data:
and based on a preset fusion weight value, carrying out image fusion on the image data in the preset width range inside the edge of the second image data and the first image data replaced by the image data in the preset width range.
In some embodiments, this image fusion based on a preset fusion weight value may include: combining, through the fusion weight value, the display parameters of the image data within the preset width range inside the edge of the second image data with the display parameters of the first image data that was replaced within that range, to obtain fused display parameters, and applying the fused display parameters to the image data within the preset width range inside the edge of the second image data.
The following formula for image fusion may be employed in some embodiments:
Y'(x, y) = Y1(x, y) * w + Y2(x, y) * (1 - w)
where (x, y) is the coordinate, in the image of the first original image data, of a pixel within the preset width range inside the edge of the second image data, x being the abscissa and y the ordinate; Y1(x, y) is the display parameter of the pixel at (x, y) in the first image data; Y2(x, y) is the display parameter of the pixel at (x, y) in the second image data; Y'(x, y) is the display parameter of the pixel at (x, y) after image fusion; and w is the fusion weight value. In some embodiments the value of w lies in the range [0, 1]; preferably, w is 0.5.
Generally, the sum of the fusion weight values of the two images is 1, and if the sum is not 1, the brightness and color of the fused image may be different from those of the original image.
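A NumPy sketch of this replace-then-blend fusion of step 5 follows; the border width, the weight w = 0.5 and the channel layout are illustrative assumptions.

```python
import numpy as np

def fuse_roi(first_image: np.ndarray, second_image: np.ndarray,
             left: int, top: int, border: int = 8, w: float = 0.5) -> np.ndarray:
    """Replace the region of interest in the first image data with the enhanced second
    image data, then blend a strip of `border` pixels just inside the ROI edge using
    Y'(x, y) = Y1(x, y) * w + Y2(x, y) * (1 - w)."""
    h, wd = second_image.shape[:2]
    fused = first_image.copy()
    y1 = fused[top:top + h, left:left + wd].astype(np.float32)   # Y1: first image data in the ROI
    y2 = second_image.astype(np.float32)                         # Y2: feature-enhanced second image data

    fused[top:top + h, left:left + wd] = second_image            # non-transition region: straight replacement

    mask = np.zeros((h, wd), dtype=bool)                         # transition region inside the ROI edge
    mask[:border, :] = mask[-border:, :] = True
    mask[:, :border] = mask[:, -border:] = True
    blended = y1 * w + y2 * (1.0 - w)
    fused[top:top + h, left:left + wd][mask] = blended[mask].astype(fused.dtype)
    return fused
```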
In some embodiments, in order to facilitate storage of the image data obtained after the processing in step 5, the image fusion processing method according to the embodiment of the present disclosure further includes:
and encoding the fused image data to obtain RGB data.
With the image fusion processing method of the embodiments of the present disclosure, end-to-end processing of the image data obtained by monitoring, with feature enhancement of the region of interest, can be realized; when the method is applied to various monitoring scenes, the stored monitoring image data is then already feature-enhanced image data.
All the above optional technical solutions may be combined arbitrarily to form the optional embodiments of the present disclosure, and are not described herein again.
Fig. 3 is a flowchart illustrating an application scenario of an image fusion processing method according to an exemplary embodiment, where the application scenario is an embodiment of feature enhancement for a vehicle window region, and as shown in fig. 3, the embodiment includes the following steps.
Step 301, obtaining global original image file data collected by an image collecting device.
In some embodiments, an image acquisition device such as a camera or video camera may be used to monitor the scene. The image acquisition device acquires a plurality of frames of original image file data, captured by the image sensor inside it. The global original image file data includes the vehicle window area and the image content of the area outside the vehicle window area.
Wherein the global original image file data corresponds to the first original image data.
In some embodiments, the RAW data is in a pattern format arranged as a Bayer array. The Bayer array is one of the main technologies enabling CCD or CMOS sensors to capture color images. For the RAW data format and the Bayer array, reference can be made to the related documents in the prior art, and the details are not repeated here.
Step 302, performing image signal processing pipeline processing on the global original image file data to obtain global YUV data.
The image signal processing Pipeline, namely the ISP-Pipeline processing, can be realized by adopting an image processing chip carried by the image acquisition device in some embodiments.
YUV is a color coding method, and is often used in various video processing components, and further description of YUV can be found in related documents in the prior art, and is not repeated here.
Wherein the global YUV data corresponds to the first image data.
Step 303, obtaining vehicle window original image file data containing only the vehicle window area from the global original image file data.
The window area is the area of interest of the above description, and in the application scenario of this embodiment, the window area is detected and intercepted by using a window detection algorithm. Among them, the window original image file data, i.e., the window RAW data, corresponds to the second original image data in the above description.
Fig. 4 is a schematic diagram of an image with a vehicle window area according to an exemplary embodiment. The area indicated by reference numeral 401 is the global RAW data image, which may be the full frame captured by the image acquisition device, and the area indicated by reference numeral 402 is the vehicle window area; in the embodiment shown in Fig. 4, the state of the person in the vehicle, for example whether the seat belt is fastened, can be judged from the content displayed in the vehicle window area 402. As shown in Fig. 4, in step 303 the coordinate information (x, y, w, h) of the window area within the global RAW data image is first obtained with a window detection algorithm, where (x, y) is the coordinate of the centre point of the window area in the global RAW data and (w, h) is the width and height of the window area; the window RAW data of the window area is then cut out of the global RAW data according to this coordinate information.
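A sketch of this interception step is given below, treating (x, y) as the centre of the window area as described above; the even-coordinate alignment is an added assumption intended to keep the Bayer pattern phase unchanged.

```python
import numpy as np

def crop_window_raw(global_raw: np.ndarray, x: int, y: int, w: int, h: int) -> np.ndarray:
    """Cut the vehicle window RAW data out of the global RAW data.
    (x, y) is the centre point of the window area, (w, h) its width and height."""
    x0 = max(x - w // 2, 0)
    y0 = max(y - h // 2, 0)
    x0 -= x0 % 2            # assumed: align to even coordinates to preserve the Bayer phase
    y0 -= y0 % 2
    return global_raw[y0:y0 + h, x0:x0 + w]
```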
Step 304, preprocessing the vehicle window original image file data.
Wherein the preprocessing includes black level processing and white balance processing. The black level parameter and the white balance parameter in the black level processing and the white balance processing may be the black level parameter and the white balance parameter in the ISP-Pipeline processing in step 302, which can ensure that the color of the window RAW data after the black level processing and the white balance processing is consistent with the color of the global RAW data. Based on this, step 304 may further include steps 3041 and 3042 as follows.
Step 3041, acquiring a black level parameter and a white balance parameter from an image processing chip that performs ISP-Pipeline processing on original image file data.
Step 3042, performing black level processing and white balance processing on the vehicle window original image file data using the obtained values of the black level parameter and the white balance parameter.
The formula of black level processing is as follows:
raw'(x, y) = raw(x, y) - blc
where (x, y) is the coordinate of a pixel in the image of the global RAW data, x being the abscissa and y the ordinate; raw(x, y) is the value of the pixel at (x, y) before black level processing; raw'(x, y) is its value after black level processing; and blc is the black level parameter.
The formula of the white balance processing is as follows:
r'(x, y) = r(x, y) * r_gain
b'(x, y) = b(x, y) * b_gain
where r(x, y) is the red value of the pixel at (x, y) before white balance processing, r'(x, y) is its red value after white balance processing, and r_gain is the red gain value among the white balance parameters; b(x, y) is the blue value of the pixel at (x, y) before white balance processing, b'(x, y) is its blue value after white balance processing, and b_gain is the blue gain value among the white balance parameters.
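A compact sketch of this preprocessing on a Bayer RAW patch follows; the RGGB channel layout and the clipping at zero are assumptions, while blc, r_gain and b_gain are the parameter values read back from the ISP-Pipeline as in step 3041.

```python
import numpy as np

def preprocess_window_raw(raw: np.ndarray, blc: float, r_gain: float, b_gain: float) -> np.ndarray:
    """Black level and white balance correction of the window RAW data, reusing the
    ISP-Pipeline parameter values. Assumes an RGGB Bayer layout (R at even rows and
    even columns, B at odd rows and odd columns)."""
    out = raw.astype(np.float32) - blc      # raw'(x, y) = raw(x, y) - blc
    out = np.clip(out, 0.0, None)           # assumed: negative values clipped to zero
    out[0::2, 0::2] *= r_gain               # r'(x, y) = r(x, y) * r_gain
    out[1::2, 1::2] *= b_gain               # b'(x, y) = b(x, y) * b_gain
    return out
```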
Step 305, inputting the vehicle window original image file data, after black level processing and white balance processing, into the local area enhancement network to obtain the vehicle window YUV data.
The structure of the local area enhancement network can be seen in Fig. 2. The convolutional layer, BN (Batch Normalization) layer, activation layer, deconvolution layer and concat layer are implemented with existing neural network technology and are not described again here.
The vehicle window YUV data obtained in this way is feature-enhanced vehicle window YUV data.
Step 306, carrying out image fusion on the global YUV data and the vehicle window YUV data to obtain fused YUV data.
The fused YUV data corresponds to the fused image data.
Fig. 5 is a diagram illustrating fused YUV data in the case of image fusion according to an exemplary embodiment. As shown in fig. 5, in fused YUV data 500, a transition region 502 is provided at the boundary of vehicle window YUV data 501, an inner region surrounded by the transition region 502 is a non-transition region 503, the transition region 502 and the non-transition region 503 constitute the region of the vehicle window YUV data 501, global YUV data 504 is provided outside the transition region 502, and the transition region 502 is provided within a preset width range inside the edge of the vehicle window YUV data 501. As shown in connection with fig. 5, step 306 may include the following steps 3061 through 3062.
Step 3061, replacing the vehicle window YUV data into the region of interest in the global YUV data.
After step 3061, fused YUV data is obtained. In this fused YUV data, because the vehicle window YUV data has been feature-enhanced while the global YUV data has not, the difference in display parameters between the two produces a sudden jump in display effect at their boundary, so that the border of the vehicle window YUV data looks noticeably poor. In this embodiment, step 3062 is used to remove this sudden jump at the boundary of the vehicle window YUV data. In some embodiments, when there is no need to eliminate such a jump in display effect, step 3062 may be omitted.
Step 3062, based on a preset fusion weight value, performing image fusion between the image data within a preset width range inside the edge of the vehicle window YUV data and the global YUV data that was replaced within that range.
As shown in fig. 5, the inside of the edge of the window YUV data 501 refers to a side where the edge of the window YUV data 501 extends toward the center of the window YUV data 501, and therefore the transition region 502 is a part of the covered region of the window YUV data 501.
For the transition region 502, step 3062 includes: within the transition region 502, performing image fusion, based on the preset fusion weight value, between the first region YUV data derived from the global YUV data 504 and the second region YUV data derived from the vehicle window YUV data 501, to obtain the fused transition YUV data within the fused YUV data 500, where the first region YUV data is the global YUV data 504 that was replaced by the vehicle window YUV data 501 in the transition region 502. The names "first region YUV data" and "second region YUV data" serve only to distinguish the sources of the YUV data.
The image fusion formula is as follows:
Y'(x, y) = Y1(x, y) * w + Y2(x, y) * (1 - w)
where (x, y) is the coordinate of a pixel in the image of the global RAW data, x being the abscissa and y the ordinate; Y1(x, y) is the display parameter of the pixel at (x, y) in the global YUV data 504, i.e. in the first region YUV data; Y2(x, y) is the display parameter of the pixel at (x, y) in the vehicle window YUV data 501, i.e. in the second region YUV data; Y'(x, y) is the display parameter of the pixel at (x, y) in the fused transition YUV data; and w is the fusion weight value. In some embodiments the value of w lies in the range [0, 1]; preferably, w is 0.5. In a preferred embodiment, the display parameter is a luminance parameter.
Step 307, encoding the fused YUV data to obtain RGB data and storing the RGB data.
Fig. 6 is a schematic structural diagram illustrating an image fusion processing apparatus according to an exemplary embodiment, and as shown in fig. 6, the image fusion processing apparatus includes a data acquisition module 601, an image processing module 602, a region-of-interest data acquisition module 603, a feature enhancement module 604, and a fusion module 605.
A data acquisition module 601 configured to perform acquiring first raw image data.
An image processing module 602 configured to perform image processing on the first raw image data to obtain first image data.
A region-of-interest data acquisition module 603 configured to perform obtaining second raw image data containing only the region of interest from the first raw image data.
And a feature enhancement module 604 configured to perform feature enhancement processing on the second original image data to obtain second image data.
A fusion module 605 configured to perform image fusion on the first image data and the second image data to obtain fused image data.
With regard to the image fusion processing apparatus in the above-described embodiment, the specific manner in which each unit performs the operation has been described in detail in the embodiment relating to the image fusion processing method, and will not be described in detail here.
It should be noted that: in practical applications, the above function distribution may be completed by different function modules according to needs, that is, the internal structure of the device is divided into different function modules, so as to complete all or part of the above described functions.
With the image fusion processing method and apparatus of the embodiments of the present disclosure, after the first original image data is acquired, second original image data covering the region of interest is extracted from it, dedicated feature enhancement processing is applied to that second original image data to obtain feature-enhanced second image data, and this second image data is fused with the first image data obtained through the image processing. The fused image data has thus been image-processed globally and feature-enhanced locally within the region of interest, the features in the region of interest are not affected by the processing applied to the image as a whole, and the problem that local and global feature expression cannot both be preserved during image processing is solved. In a practical surveillance scenario, the method is combined with the flow in which an original surveillance image in the form of original image file data is turned into YUV data by image signal processing pipeline processing: the specific target is extracted from the original image file data and given its own feature enhancement processing, separate from the rest of the surveillance image, realizing a feature enhancement, dedicated to the specific target, that goes from original image file data to YUV data; the feature-enhanced YUV data of the specific target is then fused with the YUV data produced by the pipeline, achieving end-to-end local feature enhancement of the surveillance image.
In conventional image signal processing pipeline processing, image processing modules such as 3D noise reduction lose image detail of some local-region targets and cannot be tuned for those local regions, so the image quality of regions such as faces, license plates, pedestrians and vehicle windows is poor. The image fusion processing method and apparatus of the embodiments of the present disclosure use the local-region information in the original image file data together with a neural network to optimize the image quality of the local region. The local area enhancement network used here realizes end-to-end image processing from original image file data to YUV data, with noise reduction, brightness enhancement, contrast enhancement and similar capabilities; it can learn from, and be optimized on, large amounts of original image file data and high-definition RGB color images, and its feature enhancement improves as the number of samples grows. Original image file data yields a high-definition RGB color image after image signal processing pipeline processing, and the local area enhancement network learns this mapping from original image file data to high-definition RGB color images. In some embodiments, when training the local area enhancement network, another image acquisition device is first used to collect original image file data in each scene together with the high-definition RGB color images it outputs, and these pairs are used as training data to train the parameters of the local area enhancement network. This other image acquisition device is different from the image acquisition device used in step 2 and step 302, and the high-definition RGB color images it outputs are feature-enhanced compared with the first image data output by the device used in step 2 and step 302. A minimal training-loop sketch is given after this paragraph.
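The sketch assumes paired RAW patches and high-definition RGB targets supplied by a data loader; the L1 reconstruction loss and the Adam optimizer are illustrative choices, not specified by the disclosure.

```python
import torch
import torch.nn as nn

def train_local_area_net(model: nn.Module, loader, epochs: int = 10, lr: float = 1e-4) -> nn.Module:
    """Fit the local area enhancement network on pairs of RAW patches and
    high-definition RGB targets collected with a reference capture device."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.L1Loss()                       # illustrative reconstruction loss
    model.train()
    for _ in range(epochs):
        for raw_patch, rgb_target in loader:    # tensors shaped (N, C_in, H, W) / (N, 3, H, W)
            opt.zero_grad()
            loss = loss_fn(model(raw_patch), rgb_target)
            loss.backward()
            opt.step()
    return model
```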
The embodiments of the present disclosure involve no threshold judgement and no RGB-related statistics; they make full use of the most original RAW data information and therefore have excellent adaptability and robustness.
Fig. 7 is a schematic structural diagram of an electronic device provided in an embodiment of the present disclosure. In some embodiments, the electronic device is a server; in other embodiments, it is a terminal device. The electronic device 700 may vary considerably in configuration or performance and may include one or more processors (CPUs) 701 and one or more memories 702, where the memory 702 stores at least one program code that is loaded and executed by the processor 701 to implement the image fusion processing method provided in the embodiments. The electronic device 700 may of course also have components such as a wired or wireless network interface, a keyboard and an input/output interface for input and output, and may further include other components for implementing device functions, which are not described again here.
In an exemplary embodiment, a computer-readable storage medium including at least one instruction, such as a memory including at least one instruction, is also provided, the at least one instruction being executable by a processor in a computer device to perform the image fusion processing method in the above-described embodiments.
Alternatively, the computer-readable storage medium may be a non-transitory computer-readable storage medium, which may include, for example, a ROM (Read-Only Memory), a RAM (Random-Access Memory), a CD-ROM (Compact Disc Read-Only Memory), a magnetic tape, a floppy disk, an optical data storage device, and the like.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This disclosure is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (13)

1. An image fusion processing method, comprising:
acquiring first original image data;
performing image processing on the first original image data to obtain first image data;
obtaining second original image data containing only a region of interest from the first original image data;
correcting the second original image data, wherein the value of a correction parameter used in the correction processing is the value of the corresponding correction parameter used in the image processing;
performing feature enhancement processing on the second original image data by adopting a local area enhancement network to obtain second image data;
performing image fusion on the first image data and the second image data to obtain fused image data;
wherein the local area enhancement network comprises at least one image convolution processing component connected in sequence, at least one image deconvolution processing component connected in sequence after the at least one image convolution processing component, and an output layer connected after the at least one image deconvolution processing component, the number of image convolution processing components and the number of image deconvolution processing components being equal; and the second original image data input into the local area enhancement network is processed by the at least one image convolution processing component, the at least one image deconvolution processing component and the output layer in sequence to obtain the second image data.
2. The image fusion processing method according to claim 1, characterized in that:
the first original image data is generated by an image acquisition device acquiring an image.
3. The image fusion processing method according to claim 1, characterized in that:
the obtaining of the second original image data containing only the region of interest from the first original image data is implemented by adopting a target recognition algorithm.
4. The image fusion processing method according to claim 3, characterized in that:
the target recognition algorithm comprises at least one of a pedestrian detection algorithm, a vehicle window detection algorithm and a license plate detection algorithm.
5. The image fusion processing method according to claim 1, characterized in that:
the correction processing includes at least one of black level processing and white balance processing.
6. The image fusion processing method according to claim 1, characterized in that:
each image convolution processing assembly comprises a convolution layer, a batch normalization layer and a first activation layer which are connected in sequence;
each image deconvolution processing component comprises a deconvolution layer, a second activation layer and a splicing layer which are connected in sequence;
the output layer includes a third convolutional layer.
7. The image fusion processing method according to claim 1, characterized in that: the image fusion of the first image data and the second image data to obtain fused image data includes:
replacing the region of interest in the first image data with the second image data to obtain the fused image data.
8. The image fusion processing method according to claim 7, wherein after the replacing of the region of interest in the first image data with the second image data, the image fusion processing method further comprises:
performing, based on a preset fusion weight value, image fusion between the image data within a preset width range inside the edge of the second image data and the portion of the first image data that was replaced by the image data within that preset width range.
9. The image fusion processing method according to claim 1, further comprising:
encoding the fused image data to obtain RGB data.
10. The image fusion processing method according to claim 1, characterized in that:
the first original image data and the second original image data are both original image file data;
the image processing is image signal processing pipeline processing;
the first image data, the second image data and the fused image data are YUV image data.
11. An image fusion processing apparatus, characterized by comprising:
a data acquisition module configured to perform acquiring first original image data;
an image processing module configured to perform image processing on the first original image data to obtain first image data;
a region-of-interest data acquisition module configured to perform obtaining second original image data containing only a region of interest from the first original image data;
a correction processing module configured to perform correction processing on the second original image data, wherein the value of a correction parameter used in the correction processing is the value of the corresponding correction parameter used in the image processing;
a feature enhancement module configured to perform feature enhancement processing on the second original image data by adopting a local area enhancement network to obtain second image data;
a fusion module configured to perform image fusion on the first image data and the second image data to obtain fused image data;
wherein the local area enhancement network comprises at least one image convolution processing component connected in sequence, at least one image deconvolution processing component connected in sequence after the at least one image convolution processing component, and an output layer connected after the at least one image deconvolution processing component, the number of image convolution processing components and the number of image deconvolution processing components being equal; and the second original image data input into the local area enhancement network is processed by the at least one image convolution processing component, the at least one image deconvolution processing component and the output layer in sequence to obtain the second image data.
12. An electronic device, comprising:
a processor;
a memory for storing executable instructions of the processor;
wherein the processor is configured to execute the executable instructions to implement the image fusion processing method of any one of claims 1 to 10.
13. A computer-readable storage medium, wherein at least one instruction of the computer-readable storage medium, when executed by a processor of an electronic device, enables the electronic device to implement the image fusion processing method of any one of claims 1 to 10.
CN202210571779.7A 2022-05-25 2022-05-25 Image fusion processing method and device, electronic equipment and storage medium Active CN114663842B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210571779.7A CN114663842B (en) 2022-05-25 2022-05-25 Image fusion processing method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN114663842A (en) 2022-06-24
CN114663842B (en) 2022-09-09

Family

ID=82038302

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210571779.7A Active CN114663842B (en) 2022-05-25 2022-05-25 Image fusion processing method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114663842B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021196401A1 (en) * 2020-03-31 2021-10-07 北京市商汤科技开发有限公司 Image reconstruction method and apparatus, electronic device and storage medium
CN114299088A (en) * 2021-12-27 2022-04-08 北京达佳互联信息技术有限公司 Image processing method and device

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101201620B1 (en) * 2010-01-26 2012-11-14 삼성전자주식회사 Device and method for enhancing image in wireless terminal
CN112241735A (en) * 2019-07-18 2021-01-19 杭州海康威视数字技术股份有限公司 Image processing method, device and system
CN112241935B (en) * 2019-07-18 2023-05-26 杭州海康威视数字技术股份有限公司 Image processing method, device and equipment and storage medium
CN111754440B (en) * 2020-06-29 2023-05-05 苏州科达科技股份有限公司 License plate image enhancement method, system, equipment and storage medium

Also Published As

Publication number Publication date
CN114663842A (en) 2022-06-24

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant