CN106651822A - Picture recovery method and apparatus - Google Patents

Picture recovery method and apparatus

Info

Publication number
CN106651822A
CN106651822A (application CN201611180932.4A)
Authority
CN
China
Prior art keywords
image
pixel
impact point
photographic head
camera
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201611180932.4A
Other languages
Chinese (zh)
Inventor
闫明
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Yulong Computer Telecommunication Scientific Shenzhen Co Ltd
Original Assignee
Yulong Computer Telecommunication Scientific Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Yulong Computer Telecommunication Scientific Shenzhen Co Ltd filed Critical Yulong Computer Telecommunication Scientific Shenzhen Co Ltd
Priority to CN201611180932.4A priority Critical patent/CN106651822A/en
Publication of CN106651822A publication Critical patent/CN106651822A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06T5/73 Deblurring; Sharpening
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10004 Still image; Photographic image
    • G06T2207/10024 Color image
    • G06T2207/20 Special algorithmic details
    • G06T2207/20212 Image combination
    • G06T2207/20221 Image fusion; Image merging

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Studio Devices (AREA)

Abstract

The invention discloses a picture recovery method. The method comprises the steps of obtaining a first image shot by a first camera, and obtaining a second image shot by a second camera; based on the first image and the second image, calculating a vertical distance from each target point of a target object to a straight line on which the first camera and the second camera are located; calculating transmissivity corresponding to each target point according to the vertical distance corresponding to each target point; calculating atmospheric light intensity according to the first image and the second image; and based on the first image and the second image, determining a recovered picture according to the transmissivity corresponding to each target point and the atmospheric light intensity. The invention furthermore provides a picture recovery apparatus. According to the method and the apparatus, a foggy picture is processed according to a foggy picture imaging model, so that the quality of the recovered picture is improved.

Description

Picture restoration method and device
Technical field
The present invention relates to the technical field of image processing, and more particularly to a picture restoration method and device.
Background technology
A picture shot in foggy weather is unclear and appears whitish. The prior art typically restores such a picture with the contrast stretching or histogram equalization methods found in image-processing software. However, the restored picture still differs greatly from a fog-free picture, which degrades the quality of the restoration.
The content of the invention
In view of the foregoing, it is necessary to provide a picture restoration method and device that process a foggy picture according to a fog imaging model, thereby improving the quality of the restored picture.
A picture restoration method, applied to an electronic device, the method comprising:
obtaining a first image shot by a first camera, and obtaining a second image shot by a second camera;
based on the first image and the second image, calculating, for each target point of a target object, the perpendicular distance from the target point to the straight line on which the first camera and the second camera are located;
calculating the transmittance corresponding to each target point according to the perpendicular distance corresponding to that target point;
calculating the atmospheric light intensity according to the first image and the second image;
based on the first image and the second image, determining the restored picture according to the transmittance corresponding to each target point and the atmospheric light intensity.
According to a preferred embodiment of the present invention, each target point corresponds to one first pixel in the first image, and calculating the perpendicular distance from each target point of the target object to the straight line on which the first camera and the second camera are located comprises:
obtaining the focal length of the first camera, the focal length of the second camera, and the distance between the first camera and the second camera;
determining, for each target point, the corresponding first pixel in the first image and the corresponding second pixel in the second image;
for each target point, calculating a first distance between the first pixel and a first intersection point, and a second distance between the second pixel and a second intersection point, wherein the first intersection point is the intersection of the optical axis of the first camera with the first image plane, and the second intersection point is the intersection of the optical axis of the second camera with the second image plane;
for each target point, calculating the corresponding perpendicular distance using the focal length of the first camera, the focal length of the second camera, the first distance, the second distance, and the distance between the first camera and the second camera.
According to a preferred embodiment of the present invention, calculating the transmittance corresponding to each target point comprises:
calculating the transmittance corresponding to each target point according to the perpendicular distance corresponding to that target point and an atmospheric scattering coefficient, the transmittance being:
t(x, y) = e^(-βd(x, y)),
where β denotes the atmospheric scattering coefficient, (x, y) denotes the coordinates of the pixel corresponding to a target point, and d(x, y) denotes the perpendicular distance corresponding to that pixel.
According to a preferred embodiment of the present invention, calculating the atmospheric light intensity comprises:
obtaining the gray value of each pixel in the first image and the gray value of each pixel in the second image;
sorting the gray values of all pixels in the first image in descending order and selecting the first preset number of gray values, and sorting the gray values of all pixels in the second image in descending order and selecting the first preset number of gray values;
calculating the atmospheric light intensity according to the gray values selected from the first image and the gray values selected from the second image.
According to a preferred embodiment of the present invention, the expression for the restored picture is:
J(x, y) = (I(x, y) - A) / max(t(x, y), t0) + A,
where t0 is a preset parameter, I(x, y) denotes the pixel value, at pixel coordinates (x, y), in the first image, the second image, or a third image, t(x, y) denotes the transmittance corresponding to the pixel, A denotes the atmospheric light intensity, and the third image is obtained by fusing the first image and the second image.
A picture restoration device, the device comprising:
an acquisition module, configured to obtain a first image shot by a first camera and a second image shot by a second camera;
a computing module, configured to calculate, based on the first image and the second image, the perpendicular distance from each target point of a target object to the straight line on which the first camera and the second camera are located;
the computing module being further configured to calculate the transmittance corresponding to each target point according to the perpendicular distance corresponding to that target point;
the computing module being further configured to calculate the atmospheric light intensity according to the first image and the second image;
and a determining module, configured to determine, based on the first image and the second image, the restored picture according to the transmittance corresponding to each target point and the atmospheric light intensity.
According to a preferred embodiment of the present invention, each target point corresponds to one first pixel in the first image, and the computing module's calculation of the perpendicular distance from each target point of the target object to the straight line on which the first camera and the second camera are located comprises:
obtaining the focal length of the first camera, the focal length of the second camera, and the distance between the first camera and the second camera;
determining, for each target point, the corresponding first pixel in the first image and the corresponding second pixel in the second image;
for each target point, calculating a first distance between the first pixel and a first intersection point, and a second distance between the second pixel and a second intersection point, wherein the first intersection point is the intersection of the optical axis of the first camera with the first image plane, and the second intersection point is the intersection of the optical axis of the second camera with the second image plane;
for each target point, calculating the corresponding perpendicular distance using the focal length of the first camera, the focal length of the second camera, the first distance, the second distance, and the distance between the first camera and the second camera.
According to a preferred embodiment of the present invention, the computing module's calculation of the transmittance corresponding to each target point comprises:
calculating the transmittance corresponding to each target point according to the perpendicular distance corresponding to that target point and an atmospheric scattering coefficient, the transmittance being:
t(x, y) = e^(-βd(x, y)),
where β denotes the atmospheric scattering coefficient, (x, y) denotes the coordinates of the pixel corresponding to a target point, and d(x, y) denotes the perpendicular distance corresponding to that pixel.
According to a preferred embodiment of the present invention, the computing module's calculation of the atmospheric light intensity comprises:
obtaining the gray value of each pixel in the first image and the gray value of each pixel in the second image;
sorting the gray values of all pixels in the first image in descending order and selecting the first preset number of gray values, and sorting the gray values of all pixels in the second image in descending order and selecting the first preset number of gray values;
calculating the atmospheric light intensity according to the gray values selected from the first image and the gray values selected from the second image.
According to a preferred embodiment of the present invention, the expression for the restored picture is:
J(x, y) = (I(x, y) - A) / max(t(x, y), t0) + A,
where t0 is a preset parameter, I(x, y) denotes the pixel value, at pixel coordinates (x, y), in the first image, the second image, or a third image, t(x, y) denotes the transmittance corresponding to the pixel, A denotes the atmospheric light intensity, and the third image is obtained by fusing the first image and the second image.
As can be seen from the above technical solutions, the present invention obtains a first image shot by a first camera and a second image shot by a second camera; based on the first image and the second image, calculates the perpendicular distance from each target point of a target object to the straight line on which the first camera and the second camera are located; calculates the transmittance corresponding to each target point according to the corresponding perpendicular distance; calculates the atmospheric light intensity according to the first image and the second image; and, based on the first image and the second image, determines the restored picture according to the transmittance corresponding to each target point and the atmospheric light intensity. The present invention can process a foggy picture according to a fog imaging model, improving the quality of the restored picture.
Brief description of the drawings
To describe the embodiments of the present invention or the technical solutions in the prior art more clearly, the accompanying drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are only embodiments of the present invention; those of ordinary skill in the art can derive other drawings from them without creative effort.
Fig. 1 is a flowchart of a preferred embodiment of the picture restoration method of the present invention.
Fig. 2 is a schematic diagram of the model for computing the perpendicular distance from a target point to the straight line on which the first camera and the second camera are located.
Fig. 3 is a schematic diagram of the fog imaging model.
Fig. 4 is a functional block diagram of a preferred embodiment of the picture restoration device of the present invention.
Fig. 5 is a structural diagram of the electronic device in a preferred embodiment of the present invention implementing the picture restoration method.
Description of the main element symbols
Electronic equipment 1
Memory 12
Processor 13
Display 14
First camera 15
Second camera 16
Picture restoration device 11
Acquisition module 100
Computing module 101
Setup module 102
Determining module 103
Output module 104
Detailed description of the embodiments
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings in the embodiments. Obviously, the described embodiments are only some, rather than all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention.
To make the above objects, features, and advantages of the present invention clearer and more comprehensible, the present invention is described in further detail below with reference to the accompanying drawings and specific embodiments.
As shown in Fig. 1, which is a flowchart of a preferred embodiment of the picture restoration method of the present invention, the order of the steps in the flowchart may be changed and some steps may be omitted according to different requirements.
Preferably, the picture restoration method of the present invention can be applied to a variety of electronic devices. An electronic device is a device that can perform numerical computation and/or information processing automatically according to preset or stored instructions; its hardware includes, but is not limited to, a microprocessor, an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), a digital signal processor (DSP), an embedded device, and so on.
The electronic device may also be any electronic product capable of human-computer interaction with a user, for example, a personal computer, a tablet computer, a smartphone, a personal digital assistant (PDA), a game console, an Internet Protocol television (IPTV), a smart wearable device, and so on.
The network in which the electronic device resides includes, but is not limited to, the Internet, a wide area network, a metropolitan area network, a local area network, a virtual private network (VPN), and so on.
S10: the electronic device obtains a first image shot by a first camera and obtains a second image shot by a second camera.
In at least one embodiment, the electronic device includes the first camera and the second camera. Each camera typically consists of a lens, a sensor, and peripheral circuitry. The first camera may be a monochrome camera equipped with a monochrome sensor, or a color camera equipped with a color sensor; likewise for the second camera.
Further, before obtaining the first image and the second image, the electronic device also:
receives a start instruction for the camera application.
Specifically, the user's action of tapping the application icon of the camera triggers the start instruction, and the electronic device receives it.
S11: based on the first image and the second image, the electronic device calculates, for each target point of the target object, the perpendicular distance from the target point to the straight line on which the first camera and the second camera are located.
In at least one embodiment, the target object includes multiple target points, and each target point corresponds to one first pixel in the first image and one second pixel in the second image. Therefore, the perpendicular distance corresponding to a target point is also the perpendicular distance corresponding to its first pixel and to its second pixel.
Fig. 2 is a schematic diagram of the model for computing the perpendicular distance from a target point to the straight line on which the first camera and the second camera are located. Using the computation model shown in Fig. 2, the electronic device calculates the perpendicular distance corresponding to each target point as follows:
(1) Obtain the focal length f1 of the first camera, the focal length f2 of the second camera, and the distance T between the first camera and the second camera.
(2) Determine, for each target point, the corresponding first pixel in the first image and the corresponding second pixel in the second image.
In at least one embodiment, the electronic device determines the first pixel and the second pixel using a feature extraction method and an image matching method. The feature extraction method may be gray-level-based, color-based, shape-based, and so on; the image matching method may be template matching, feature matching, and so on. Feature extraction and image matching are prior art, on which the present invention places no restriction.
(3) For each target point, calculate the first distance X1 between the first pixel and a first intersection point, and the second distance X2 between the second pixel and a second intersection point, where the first intersection point is the intersection of the optical axis of the first camera with the first image plane, and the second intersection point is the intersection of the optical axis of the second camera with the second image plane.
(4) For each target point, based on the computation model, calculate the corresponding perpendicular distance Z = T*f1*f2 / (X1*f2 + X2*f1) from the focal length f1 of the first camera, the focal length f2 of the second camera, the first distance X1, the second distance X2, and the distance T between the first camera and the second camera.
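The depth computation of step (4) can be sketched as follows (a minimal illustration in Python; the focal lengths, baseline, and pixel offsets are made-up values, not from the patent):

```python
def perpendicular_distance(f1, f2, T, x1, x2):
    """Perpendicular distance Z of a target point to the camera baseline,
    computed as Z = T*f1*f2 / (X1*f2 + X2*f1).

    f1, f2 -- focal lengths of the first and second cameras (pixels)
    T      -- distance between the two cameras (same unit as Z)
    x1, x2 -- offsets of the matched pixels from the optical-axis
              intersections on their respective image planes (pixels)
    """
    return T * f1 * f2 / (x1 * f2 + x2 * f1)

# With equal focal lengths this reduces to the familiar stereo relation
# Z = f*T / (x1 + x2), where x1 + x2 plays the role of the disparity.
print(perpendicular_distance(1000.0, 1000.0, 0.1, 5.0, 5.0))  # 10.0
```

Note that with f1 = f2 = f the formula collapses to the standard pinhole stereo depth equation, which is a useful sanity check on the model.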
S12: the electronic device calculates the transmittance corresponding to each target point according to the perpendicular distance corresponding to that target point.
In at least one embodiment, the electronic device calculates the transmittance corresponding to each target point from the corresponding perpendicular distance and the atmospheric scattering coefficient, using the following formula:
t(x, y) = e^(-βd(x, y)),
where β denotes the atmospheric scattering coefficient, (x, y) denotes the coordinates of the pixel corresponding to a target point, and d(x, y) denotes the perpendicular distance corresponding to that target point.
In at least one embodiment, the electronic device calculates the fog concentration from the first image and the second image using a fog concentration detection model, and adjusts the atmospheric scattering coefficient according to the fog concentration. Further, the electronic device presets multiple values of the atmospheric scattering coefficient, each corresponding to an interval of fog concentration; for example, when the fog concentration lies in the interval [5, 10], the atmospheric scattering coefficient is 0.2, and so on. Of course, the electronic device may also adjust the atmospheric scattering coefficient dynamically in other ways, and the coefficient may be set by the user. The greater the fog concentration, the larger the atmospheric scattering coefficient.
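The transmittance formula of step S12 can be sketched as follows (assuming NumPy; the β value and depths are illustrative, β = 0.2 mirroring the example interval above):

```python
import numpy as np

def transmittance(depth, beta=0.2):
    """Per-target-point transmittance t(x, y) = exp(-beta * d(x, y)),
    where beta is the atmospheric scattering coefficient and d(x, y)
    the perpendicular distance of the point."""
    return np.exp(-beta * np.asarray(depth, dtype=float))

# Farther points transmit less scene light: t decays exponentially
# with depth, from 1 at distance 0 toward 0 at large distances.
print(transmittance([0.0, 5.0, 10.0]))
```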
S13: the electronic device calculates the atmospheric light intensity according to the first image and the second image.
In at least one embodiment, the electronic device calculates the atmospheric light intensity from the gray value of each pixel in the first image and the gray value of each pixel in the second image, as follows:
(1) Obtain the gray value of each pixel in the first image and the gray value of each pixel in the second image.
(2) Sort the gray values of all pixels in the first image in descending order and select the first preset number of gray values (for example, those of the first 10 pixels), and sort the gray values of all pixels in the second image in descending order and select the first preset number of gray values.
(3) Calculate the atmospheric light intensity according to the gray values selected from the first image and the gray values selected from the second image.
In at least one embodiment, the electronic device adds up the gray values selected from the first image and those selected from the second image, calculates their average gray value, and takes that average as the atmospheric light intensity.
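The three sub-steps of S13 can be sketched as follows (a sketch assuming NumPy; the default k = 10 mirrors the "first 10 pixels" example, and the tiny 2x2 inputs are illustrative):

```python
import numpy as np

def atmospheric_light(gray1, gray2, k=10):
    """Estimate the atmospheric light A: take the k largest gray
    values from each image, pool them, and return their average."""
    top1 = np.sort(np.asarray(gray1, dtype=float).ravel())[-k:]
    top2 = np.sort(np.asarray(gray2, dtype=float).ravel())[-k:]
    return float(np.concatenate([top1, top2]).mean())

# Two tiny 2x2 "gray images", k=2: the brightest values are
# {240, 250} and {245, 255}, whose average is 247.5.
print(atmospheric_light([[250, 240], [10, 20]],
                        [[255, 245], [0, 5]], k=2))  # 247.5
```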
S14: based on the first image and the second image, the electronic device determines the restored picture according to the transmittance corresponding to each target point and the atmospheric light intensity.
In at least one embodiment, Fig. 3 is a schematic diagram of the fog imaging model, whose expression is as follows:
I(x, y) = J(x, y)t(x, y) + A(1 - t(x, y)),
where I(x, y) denotes the pixel value, at pixel coordinates (x, y), in the foggy image (such as the first image or the second image), t(x, y) denotes the transmittance corresponding to the pixel, and A denotes the atmospheric light intensity.
From this expression, the expression for the restored picture is as follows:
J(x, y) = (I(x, y) - A) / max(t(x, y), t0) + A,
where t0 is a preset parameter.
In at least one embodiment, the electronic device may fuse the first image and the second image using an image fusion algorithm to obtain a third image. There are many kinds of image fusion algorithms, for example feature-matching-based fusion, wavelet-transform-based fusion, and so on; the present invention places no restriction on the image fusion algorithm.
I(x, y) then denotes the pixel value, at pixel coordinates (x, y), in the first image, the second image, or the third image.
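Inverting the fog imaging model to obtain the restored picture, step S14 can be sketched as follows (assuming NumPy; the values of A, t, and t0 are illustrative):

```python
import numpy as np

def restore(I, t, A, t0=0.1):
    """Invert the fog model I = J*t + A*(1 - t):
    J = (I - A) / max(t, t0) + A, clamping t from below by the
    preset parameter t0 to avoid dividing by near-zero."""
    t = np.maximum(np.asarray(t, dtype=float), t0)
    return (np.asarray(I, dtype=float) - A) / t + A

# Round trip: a scene value J=100 seen through t=0.5 under A=200
# is observed as I = 100*0.5 + 200*0.5 = 150; restoring I
# recovers the original 100.
print(restore(150.0, 0.5, 200.0))  # 100.0
```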
In at least one embodiment, the electronic device outputs the restored picture on a display, showing it to the user.
The present invention obtains a first image shot by a first camera and a second image shot by a second camera; based on the first image and the second image, calculates the perpendicular distance from each target point of a target object to the straight line on which the first camera and the second camera are located; calculates the transmittance corresponding to each target point according to the corresponding perpendicular distance; calculates the atmospheric light intensity according to the first image and the second image; and, based on the first image and the second image, determines the restored picture according to the transmittance corresponding to each target point and the atmospheric light intensity. The present invention can process a foggy picture according to a fog imaging model, improving the quality of the restored picture.
As shown in Fig. 4, which is a functional block diagram of an embodiment of the picture restoration device of the present invention, the picture restoration device 11 includes an acquisition module 100, a computing module 101, a setup module 102, a determining module 103, and an output module 104. A module in the sense of the present invention is a series of computer program segments that can be executed by the processor 13 to perform a fixed function, and that is stored in the memory 12. In the present embodiment, the function of each module is described in detail in the subsequent third and fourth embodiments.
The acquisition module 100 obtains a first image shot by a first camera and obtains a second image shot by a second camera.
In at least one embodiment, the electronic device includes the first camera and the second camera. Each camera typically consists of a lens, a sensor, and peripheral circuitry. The first camera may be a monochrome camera equipped with a monochrome sensor, or a color camera equipped with a color sensor; likewise for the second camera.
Further, before the acquisition module 100 obtains the first image and the second image, it also:
receives a start instruction for the camera application.
Specifically, the user's action of tapping the application icon of the camera triggers the start instruction, and the acquisition module 100 receives it.
The computing module 101 is configured to calculate, based on the first image and the second image, the perpendicular distance from each target point of the target object to the straight line on which the first camera and the second camera are located.
In at least one embodiment, the target object includes multiple target points, and each target point corresponds to one first pixel in the first image and one second pixel in the second image. Therefore, the perpendicular distance corresponding to a target point is also the perpendicular distance corresponding to its first pixel and to its second pixel.
Fig. 2 is a schematic diagram of the model for computing the perpendicular distance from a target point to the straight line on which the first camera and the second camera are located. Using the computation model shown in Fig. 2, the computing module 101 calculates the perpendicular distance corresponding to each target point as follows:
(1) Obtain the focal length f1 of the first camera, the focal length f2 of the second camera, and the distance T between the first camera and the second camera.
(2) Determine, for each target point, the corresponding first pixel in the first image and the corresponding second pixel in the second image.
In at least one embodiment, the computing module 101 determines the first pixel and the second pixel using a feature extraction method and an image matching method. The feature extraction method may be gray-level-based, color-based, shape-based, and so on; the image matching method may be template matching, feature matching, and so on. Feature extraction and image matching are prior art, on which the present invention places no restriction.
(3) For each target point, calculate the first distance X1 between the first pixel and a first intersection point, and the second distance X2 between the second pixel and a second intersection point, where the first intersection point is the intersection of the optical axis of the first camera with the first image plane, and the second intersection point is the intersection of the optical axis of the second camera with the second image plane.
(4) For each target point, based on the computation model, calculate the corresponding perpendicular distance Z = T*f1*f2 / (X1*f2 + X2*f1) from the focal length f1, the focal length f2, the first distance X1, the second distance X2, and the distance T.
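The pixel correspondence of step (2) can be sketched as brute-force template matching, one of the matching methods the text mentions; this sum-of-squared-differences version is only an illustrative choice, not the patent's prescribed algorithm:

```python
import numpy as np

def match_template(image, patch):
    """Slide patch over image and return the top-left (row, col)
    of the window with the smallest sum of squared differences."""
    image = np.asarray(image, dtype=float)
    patch = np.asarray(patch, dtype=float)
    H, W = image.shape
    h, w = patch.shape
    best_pos, best = (0, 0), float("inf")
    for r in range(H - h + 1):
        for c in range(W - w + 1):
            d = float(((image[r:r + h, c:c + w] - patch) ** 2).sum())
            if d < best:
                best, best_pos = d, (r, c)
    return best_pos

# A 2x2 patch cut from position (1, 2) of a small image is found there.
img = np.arange(30).reshape(5, 6)
print(match_template(img, img[1:3, 2:4]))  # (1, 2)
```

In practice a feature-based matcher would be preferred for robustness, but SSD matching illustrates how the two pixels of one target point are paired across the two images.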
The computing module 101 is additionally operable to according to the corresponding vertical dimension of each impact point, calculates each impact point Corresponding absorbance.
In at least one embodiment, the computing module 101 is according to the corresponding vertical dimension of each impact point and greatly Gas scattering coefficient calculates the corresponding absorbance of each impact point, and the formula for calculating the corresponding absorbance of each impact point is as follows:
t(x, y) = e^(-βd(x, y)),
where β denotes the atmospheric scattering coefficient, (x, y) denotes the coordinates of the pixel corresponding to a target point, and d(x, y) denotes the vertical distance corresponding to that target point.
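Applied to a whole depth map, the transmittance step can be sketched as below; β and the depth values are illustrative:

```python
# Minimal sketch of the transmittance formula t(x, y) = exp(-beta * d(x, y)),
# vectorized over a depth map d.
import numpy as np

def transmittance(depth: np.ndarray, beta: float) -> np.ndarray:
    """Per-pixel transmittance from vertical distance d and scattering beta."""
    return np.exp(-beta * depth)

d = np.array([[0.0, 5.0],
              [10.0, 20.0]])       # illustrative vertical distances
t = transmittance(d, beta=0.2)
# t equals 1.0 at zero depth and decays toward 0 as depth grows
print(t[0, 0])   # 1.0
```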
In at least one embodiment, the computing module 101 calculates the fog concentration from the first image and the second image using a fog concentration estimation model, and adjusts the atmospheric scattering coefficient according to the fog concentration. Further, the setup module 102 is configured to preset multiple values of the atmospheric scattering coefficient, each preset value corresponding to an interval of fog concentration; for example, when the fog concentration falls in the interval [5, 10], the atmospheric scattering coefficient is 0.2, and so on. Of course, the computing module 101 may also adjust the atmospheric scattering coefficient dynamically in other ways, and the coefficient may also be set by the user. The greater the fog concentration, the larger the atmospheric scattering coefficient.
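The preset-value scheme described above can be sketched as a simple interval lookup. Only the pairing ([5, 10] → 0.2) comes from the text; the other intervals and coefficients are invented for illustration:

```python
# Illustrative lookup table pairing fog-concentration intervals with
# atmospheric scattering coefficients. Higher concentration -> larger beta,
# as stated in the text.

PRESETS = [
    (0.0, 5.0, 0.1),            # invented for illustration
    (5.0, 10.0, 0.2),           # the example given in the text
    (10.0, float("inf"), 0.4),  # invented for illustration
]

def scattering_coefficient(fog_concentration: float) -> float:
    """Return the preset beta whose interval contains the concentration."""
    for low, high, beta in PRESETS:
        if low <= fog_concentration <= high:
            return beta
    raise ValueError("no preset covers this concentration")

print(scattering_coefficient(7.0))  # 0.2
```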
The computing module 101 calculates the atmospheric light intensity from the first image and the second image.
In at least one embodiment, the computing module 101 calculates the atmospheric light intensity from the gray value of each pixel in the first image and the gray value of each pixel in the second image. The detailed process includes:
(1) The gray value of each pixel in the first image and the gray value of each pixel in the second image are obtained.
(2) The gray values of all pixels in the first image are sorted in descending order and the first preset number of gray values (e.g., those of the top 10 pixels) is selected; likewise, the gray values of all pixels in the second image are sorted in descending order and the first preset number of gray values is selected.
(3) The atmospheric light intensity is calculated from the gray values selected in the first image and the gray values selected in the second image.
In at least one embodiment, the computing module 101 sums the gray values selected in the first image and the gray values selected in the second image, calculates their average, and takes this average gray value as the atmospheric light intensity.
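Steps (1) to (3) can be sketched as follows; the gray-value arrays and the choice n = 2 are illustrative:

```python
# Sketch of the air-light estimate: the mean of the largest n gray values
# taken from each of the two images (the "preset number", e.g. top 10).
import numpy as np

def air_light(img1_gray: np.ndarray, img2_gray: np.ndarray, n: int = 10) -> float:
    """Average of the top-n gray values from each image."""
    top1 = np.sort(img1_gray.ravel())[::-1][:n]   # n brightest pixels, image 1
    top2 = np.sort(img2_gray.ravel())[::-1][:n]   # n brightest pixels, image 2
    return float(np.concatenate([top1, top2]).mean())

g1 = np.array([[200, 10], [30, 250]], dtype=float)
g2 = np.array([[240, 5], [20, 210]], dtype=float)
print(air_light(g1, g2, n=2))   # mean of {250, 200, 240, 210} = 225.0
```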
The determining module 103 is configured to determine the restored picture, based on the first image and the second image, according to the transmittance corresponding to each target point and the atmospheric light intensity.
In at least one embodiment, as shown in Fig. 3, which is a schematic diagram of the fog imaging model, the expression of the fog imaging model is as follows:
I(x, y) = J(x, y)·t(x, y) + A·(1 - t(x, y)),
where I(x, y) denotes the pixel value of the pixel with coordinate (x, y) in a foggy image (the first image or the second image), J(x, y) denotes the corresponding fog-free pixel value, t(x, y) denotes the transmittance corresponding to the pixel, and A denotes the atmospheric light intensity.
From this expression, the expression for the restored picture follows:
J(x, y) = (I(x, y) - A)/max(t(x, y), t0) + A,
where t0 is a preset parameter.
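The restoration expression can be sketched with a round trip that foggifies a known scene and then recovers it; all values are illustrative:

```python
# Sketch of the restoration J = (I - A)/max(t, t0) + A. The floor t0
# keeps the division stable where transmittance is near zero.
import numpy as np

def restore(hazy: np.ndarray, t: np.ndarray, a: float, t0: float = 0.1) -> np.ndarray:
    """Invert the fog imaging model I = J*t + A*(1 - t)."""
    return (hazy - a) / np.maximum(t, t0) + a

# Round trip: synthesize a foggy image from a known scene, then recover it.
j_true = np.array([[100.0, 50.0]])
t = np.array([[0.5, 0.8]])
a = 220.0
hazy = j_true * t + a * (1 - t)
print(restore(hazy, t, a))   # ~[[100., 50.]]
```

Where t(x, y) exceeds t0 the inversion is exact; below the floor the recovered values are clamped approximations, which is the usual trade-off for this formula.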
In at least one embodiment, the determining module 103 may fuse the first image and the second image using an image fusion algorithm to obtain a third image. There are many image fusion algorithms, for example feature-matching-based fusion, wavelet-transform-based fusion, and so on; the present invention places no restriction on the image fusion algorithm.
In that case, I(x, y) denotes the pixel value of the pixel with coordinate (x, y) in the first image, the second image, or the third image.
In at least one embodiment, the output module 104 is configured to output the restored picture on a display, so that the restored picture is shown to the user.
The present invention obtains a first image captured by a first camera and a second image captured by a second camera; based on the first image and the second image, calculates, for each target point of a target object, the vertical distance from that target point to the line through the first camera and the second camera; calculates, from each vertical distance, the transmittance corresponding to each target point; calculates the atmospheric light intensity from the first image and the second image; and, based on the first image and the second image, determines the restored picture according to the transmittance of each target point and the atmospheric light intensity. The present invention can thus process a foggy picture according to the fog imaging model and improve the quality of the restored picture.
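The flow summarized above can be sketched end-to-end on synthetic data. Everything here is illustrative: the matched-pixel offsets stand in for the feature-matching step, and a real implementation would operate on full images:

```python
# End-to-end sketch: depth from the dual-camera formula, transmittance
# from depth, air light from the brightest pixels, then restoration.
import numpy as np

f1 = f2 = 50.0        # focal lengths of the two cameras (illustrative)
baseline = 10.0       # distance T between the cameras
beta = 0.2            # atmospheric scattering coefficient
t0 = 0.1              # preset transmittance floor

# Matched-pixel offsets X1, X2 stand in for the feature-matching step.
x1 = np.array([[2.0, 4.0]])
x2 = np.array([[3.0, 6.0]])

depth = baseline * f1 * f2 / (x1 * f2 + x2 * f1)   # vertical distance Z
trans = np.exp(-beta * depth)                       # transmittance t(x, y)

hazy = np.array([[210.0, 190.0]])                   # foggy "first image"
air = float(np.sort(hazy.ravel())[-1])              # air light from brightest pixel
restored = (hazy - air) / np.maximum(trans, t0) + air
print(depth)   # depths of 100 and 50 for the two sample points
```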
The integrated unit described above, when implemented as a software function module, may be stored in a computer-readable storage medium. The software function module stored in the medium includes instructions that cause a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to perform part of the steps of the method described in each embodiment of the present invention.
As shown in Fig. 5, the figure is a structural schematic diagram of the electronic equipment of a preferred embodiment in which the present invention implements the picture restoration method. The electronic equipment 1 includes components such as a memory 12, a processor 13, a display 14, a first camera 15, and a second camera 16, which are communicatively connected by a bus system.
The electronic equipment 1 is a device capable of performing numerical computation and/or information processing automatically according to preset or stored instructions. Its hardware includes, but is not limited to, a microprocessor, an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), a digital signal processor (DSP), an embedded device, and the like. The electronic equipment 1 includes, but is not limited to, any electronic product that can interact with a user through a keyboard, a mouse, a remote control, a touch pad, a voice-control device, or the like.
Examples include a personal computer, a tablet computer, a smart phone, a personal digital assistant (PDA), a game console, an Internet Protocol television (IPTV), an intelligent wearable device, and the like.
The network in which the electronic equipment 1 resides includes, but is not limited to, the Internet, a wide area network, a metropolitan area network, a local area network, a virtual private network (VPN), and the like.
The memory 12 stores the program and various data of the picture restoration method, and provides high-speed, automatic access to programs or data while the electronic equipment 1 is running. The memory 12 may be external storage and/or internal storage of the electronic equipment 1. Further, the memory 12 may be a circuit with a storage function that has no physical form in an integrated circuit, such as RAM (random-access memory) or FIFO (first-in-first-out) memory; alternatively, it may be storage with a physical form, such as a memory stick or a TF card (trans-flash card).
The processor 13, also known as the central processing unit (CPU), is a very-large-scale integrated circuit and is the computing core and control unit of the electronic equipment 1. The processor 13 can execute the operating system of the electronic equipment 1 and various installed application programs and program code, for example the picture restoring apparatus 11.
The display 14 may be any device with a display function, such as a display screen.
The first camera 15 and the second camera 16 are devices with an image capture function. Each of them may be a monochrome camera fitted with a monochrome sensor, or a color camera fitted with a color sensor.
With reference to Fig. 1, the memory 12 in the electronic equipment 1 stores multiple instructions to implement a picture restoration method, and the processor 13 can execute the multiple instructions so as to: obtain a first image captured by a first camera and a second image captured by a second camera; based on the first image and the second image, calculate the vertical distance from each target point of a target object to the line through the first camera and the second camera; calculate, from the vertical distance corresponding to each target point, the transmittance corresponding to that target point; calculate the atmospheric light intensity from the first image and the second image; and, based on the first image and the second image, determine the restored picture according to the transmittance of each target point and the atmospheric light intensity.
According to a preferred embodiment of the present invention, each target point corresponds to one first pixel in the first image, and the multiple instructions executed by the processor 13 further include:
obtaining the focal length of the first camera, the focal length of the second camera, and the distance between the first camera and the second camera;
determining, for each target point, the corresponding first pixel in the first image and the corresponding second pixel in the second image;
for each target point, calculating a first distance between the first pixel and a first intersection point, and a second distance between the second pixel and a second intersection point, where the first intersection point is the intersection of the optical axis of the first camera with the first image plane, and the second intersection point is the intersection of the optical axis of the second camera with the second image plane;
for each target point, calculating the corresponding vertical distance using the focal length of the first camera, the focal length of the second camera, the first distance, the second distance, and the distance between the first camera and the second camera.
According to a preferred embodiment of the present invention, the multiple instructions executed by the processor 13 further include:
calculating the transmittance corresponding to each target point from the vertical distance corresponding to that target point and an atmospheric scattering coefficient, the transmittance corresponding to each target point being:
t(x, y) = e^(-βd(x, y)),
where β denotes the atmospheric scattering coefficient, (x, y) denotes the coordinates of the pixel corresponding to a target point, and d(x, y) denotes the vertical distance corresponding to the pixel.
According to a preferred embodiment of the present invention, the multiple instructions executed by the processor 13 further include:
obtaining the gray value of each pixel in the first image and the gray value of each pixel in the second image;
sorting the gray values of all pixels in the first image in descending order and selecting the first preset number of gray values, and sorting the gray values of all pixels in the second image in descending order and selecting the first preset number of gray values;
calculating the atmospheric light intensity from the gray values selected in the first image and the gray values selected in the second image.
According to a preferred embodiment of the present invention, the expression for the restored picture is:
J(x, y) = (I(x, y) - A)/max(t(x, y), t0) + A,
where t0 is a preset parameter, I(x, y) denotes the pixel value of the pixel with coordinate (x, y) in the first image or the second image, t(x, y) denotes the transmittance corresponding to the pixel, and A denotes the atmospheric light intensity.
Specifically, for the concrete implementation of the above instructions, the processor 13 may refer to the description of the relevant steps in the embodiments corresponding to Fig. 2 and Fig. 3, which will not be repeated here.
In the several embodiments provided by the present invention, it should be understood that the disclosed system, apparatus, and method may be implemented in other ways. For example, the apparatus embodiments described above are merely schematic; the division into modules is only a division of logical functions, and other divisions are possible in actual implementation.
The modules described as separate components may or may not be physically separate, and components displayed as modules may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of a given embodiment.
In addition, the functional modules in each embodiment of the present invention may be integrated into one processing unit, may exist physically as separate units, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware, or in the form of hardware plus software function modules.
It is obvious to those skilled in the art that the invention is not restricted to the details of the above exemplary embodiments, and that the present invention can be realized in other specific forms without departing from its spirit or essential attributes. Therefore, the embodiments should in all respects be regarded as exemplary and non-restrictive; the scope of the present invention is defined by the appended claims rather than by the above description, and all changes falling within the meaning and scope of equivalency of the claims are intended to be included in the present invention. Any reference sign in a claim should not be construed as limiting that claim. Furthermore, the word "including" does not exclude other units or steps, and the singular does not exclude the plural. Multiple units or apparatuses stated in a system claim may also be realized by one unit or apparatus through software or hardware. Words such as "first" and "second" are used to denote names and do not indicate any specific order.
Finally, it should be noted that the above embodiments merely illustrate, and do not restrict, the technical solution of the present invention. Although the present invention has been described in detail with reference to preferred embodiments, those skilled in the art will understand that the technical solution of the present invention may be modified or equivalently substituted without departing from its spirit and scope.

Claims (10)

1. A picture restoration method applied to electronic equipment, characterized in that the method comprises:
obtaining a first image captured by a first camera, and obtaining a second image captured by a second camera;
based on the first image and the second image, calculating the vertical distance from each target point of a target object to the line through the first camera and the second camera;
calculating, according to the vertical distance corresponding to each target point, the transmittance corresponding to that target point;
calculating the atmospheric light intensity according to the first image and the second image;
based on the first image and the second image, determining the restored picture according to the transmittance corresponding to each target point and the atmospheric light intensity.
2. The picture restoration method as claimed in claim 1, characterized in that each target point corresponds to one first pixel in the first image, and the calculating of the vertical distance from each target point of the target object to the line through the first camera and the second camera comprises:
obtaining the focal length of the first camera, the focal length of the second camera, and the distance between the first camera and the second camera;
determining, for each target point, the corresponding first pixel in the first image and the corresponding second pixel in the second image;
for each target point, calculating a first distance between the first pixel and a first intersection point, and a second distance between the second pixel and a second intersection point, wherein the first intersection point is the intersection of the optical axis of the first camera with the first image plane, and the second intersection point is the intersection of the optical axis of the second camera with the second image plane;
for each target point, calculating the corresponding vertical distance using the focal length of the first camera, the focal length of the second camera, the first distance, the second distance, and the distance between the first camera and the second camera.
3. The picture restoration method as claimed in claim 1, characterized in that the calculating of the transmittance corresponding to each target point comprises:
calculating the transmittance corresponding to each target point according to the vertical distance corresponding to that target point and an atmospheric scattering coefficient, the transmittance corresponding to each target point being:
t(x, y) = e^(-βd(x, y)),
wherein β denotes the atmospheric scattering coefficient, (x, y) denotes the coordinates of the pixel corresponding to a target point, and d(x, y) denotes the vertical distance corresponding to the pixel.
4. The picture restoration method as claimed in claim 1, characterized in that the calculating of the atmospheric light intensity comprises:
obtaining the gray value of each pixel in the first image and the gray value of each pixel in the second image;
sorting the gray values of all pixels in the first image in descending order and selecting the first preset number of gray values, and sorting the gray values of all pixels in the second image in descending order and selecting the first preset number of gray values;
calculating the atmospheric light intensity according to the gray values selected in the first image and the gray values selected in the second image.
5. The picture restoration method as claimed in any one of claims 1 to 4, characterized in that the expression for the restored picture is:
J(x, y) = (I(x, y) - A)/max(t(x, y), t0) + A,
wherein t0 is a preset parameter, I(x, y) denotes the pixel value of the pixel with coordinate (x, y) in the first image, the second image, or a third image, t(x, y) denotes the transmittance corresponding to the pixel, A denotes the atmospheric light intensity, and the third image is obtained by fusing the first image and the second image.
6. A picture restoring apparatus, characterized in that the apparatus comprises:
an acquisition module, configured to obtain a first image captured by a first camera and a second image captured by a second camera;
a computing module, configured to calculate, based on the first image and the second image, the vertical distance from each target point of a target object to the line through the first camera and the second camera;
the computing module being further configured to calculate, according to the vertical distance corresponding to each target point, the transmittance corresponding to that target point;
the computing module being further configured to calculate the atmospheric light intensity according to the first image and the second image;
a determining module, configured to determine, based on the first image and the second image, the restored picture according to the transmittance corresponding to each target point and the atmospheric light intensity.
7. The picture restoring apparatus as claimed in claim 6, characterized in that each target point corresponds to one first pixel in the first image, and the computing module being configured to calculate the vertical distance from each target point of the target object to the line through the first camera and the second camera comprises:
obtaining the focal length of the first camera, the focal length of the second camera, and the distance between the first camera and the second camera;
determining, for each target point, the corresponding first pixel in the first image and the corresponding second pixel in the second image;
for each target point, calculating a first distance between the first pixel and a first intersection point, and a second distance between the second pixel and a second intersection point, wherein the first intersection point is the intersection of the optical axis of the first camera with the first image plane, and the second intersection point is the intersection of the optical axis of the second camera with the second image plane;
for each target point, calculating the corresponding vertical distance using the focal length of the first camera, the focal length of the second camera, the first distance, the second distance, and the distance between the first camera and the second camera.
8. The picture restoring apparatus as claimed in claim 6, characterized in that the computing module being further configured to calculate the transmittance corresponding to each target point comprises:
calculating the transmittance corresponding to each target point according to the vertical distance corresponding to that target point and an atmospheric scattering coefficient, the transmittance corresponding to each target point being:
t(x, y) = e^(-βd(x, y)),
wherein β denotes the atmospheric scattering coefficient, (x, y) denotes the coordinates of the pixel corresponding to a target point, and d(x, y) denotes the vertical distance corresponding to the pixel.
9. The picture restoring apparatus as claimed in claim 6, characterized in that the computing module being further configured to calculate the atmospheric light intensity comprises:
obtaining the gray value of each pixel in the first image and the gray value of each pixel in the second image;
sorting the gray values of all pixels in the first image in descending order and selecting the first preset number of gray values, and sorting the gray values of all pixels in the second image in descending order and selecting the first preset number of gray values;
calculating the atmospheric light intensity according to the gray values selected in the first image and the gray values selected in the second image.
10. The picture restoring apparatus as claimed in any one of claims 6 to 9, characterized in that the expression for the restored picture is:
J(x, y) = (I(x, y) - A)/max(t(x, y), t0) + A,
wherein t0 is a preset parameter, I(x, y) denotes the pixel value of the pixel with coordinate (x, y) in the first image, the second image, or a third image, t(x, y) denotes the transmittance corresponding to the pixel, A denotes the atmospheric light intensity, and the third image is obtained by fusing the first image and the second image.
CN201611180932.4A 2016-12-20 2016-12-20 Picture recovery method and apparatus Pending CN106651822A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201611180932.4A CN106651822A (en) 2016-12-20 2016-12-20 Picture recovery method and apparatus

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201611180932.4A CN106651822A (en) 2016-12-20 2016-12-20 Picture recovery method and apparatus

Publications (1)

Publication Number Publication Date
CN106651822A true CN106651822A (en) 2017-05-10

Family

ID=58834730

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201611180932.4A Pending CN106651822A (en) 2016-12-20 2016-12-20 Picture recovery method and apparatus

Country Status (1)

Country Link
CN (1) CN106651822A (en)

Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103150708A (en) * 2013-01-18 2013-06-12 上海交通大学 Image quick defogging optimized method based on black channel
CN103347171A (en) * 2013-06-27 2013-10-09 河海大学常州校区 Foggy day video processing system and method based on DSPs
CN104333700A (en) * 2014-11-28 2015-02-04 广东欧珀移动通信有限公司 Image blurring method and image blurring device
CN104599266A (en) * 2014-12-31 2015-05-06 小米科技有限责任公司 Detection method for fog area in image, device and terminal
CN104899844A (en) * 2015-06-30 2015-09-09 北京奇艺世纪科技有限公司 Image defogging method and device
CN105118027A (en) * 2015-07-28 2015-12-02 北京航空航天大学 Image defogging method
CN105282421A (en) * 2014-07-16 2016-01-27 宇龙计算机通信科技(深圳)有限公司 Defogged image obtaining method, device and terminal
CN105516547A (en) * 2015-12-10 2016-04-20 中国科学技术大学 Video dehazing optimization method based on DSP (Digital Signal Processor)
CN105513025A (en) * 2015-12-10 2016-04-20 中国科学技术大学 Improved rapid demisting method
CN105657394A (en) * 2014-11-14 2016-06-08 东莞宇龙通信科技有限公司 Photographing method based on double cameras, photographing device and mobile terminal
CN105787904A (en) * 2016-03-25 2016-07-20 桂林航天工业学院 Adaptive global dark channel prior image dehazing method for bright area
CN105844595A (en) * 2016-03-14 2016-08-10 天津工业大学 Method of constructing model for restoring headlight in nighttime traffic video based on atmosphere reflection-scattering principle
CN105913390A (en) * 2016-04-07 2016-08-31 潍坊学院 Image defogging method and system
CN106157360A (en) * 2015-04-28 2016-11-23 宇龙计算机通信科技(深圳)有限公司 A kind of three-dimensional modeling method based on dual camera and device


Similar Documents

Publication Publication Date Title
CN112052839A (en) Image data processing method, apparatus, device and medium
CN112242004B (en) AR scene virtual engraving method and system based on illumination rendering
US11830103B2 (en) Method, apparatus, and computer program product for training a signature encoding module and a query processing module using augmented data
CN111695609B (en) Target damage degree judging method and device, electronic equipment and storage medium
US9437034B1 (en) Multiview texturing for three-dimensional models
CN106650615B (en) A kind of image processing method and terminal
CN112651881B (en) Image synthesizing method, apparatus, device, storage medium, and program product
CN104751405A (en) Method and device for blurring image
CN112435193B (en) Method and device for denoising point cloud data, storage medium and electronic equipment
CN108230243B (en) Background blurring method based on salient region detection model
CN113643414B (en) Three-dimensional image generation method and device, electronic equipment and storage medium
CN115526924B (en) Monte Carlo simulated hydrologic environment modeling method and system
CN112734914A (en) Image stereo reconstruction method and device for augmented reality vision
CN111192312B (en) Depth image acquisition method, device, equipment and medium based on deep learning
CN112150595A (en) Point cloud data processing method, device, equipment and medium
CN117197388A (en) Live-action three-dimensional virtual reality scene construction method and system based on generation of antagonistic neural network and oblique photography
CN110473281B (en) Method and device for processing edges of three-dimensional model, processor and terminal
CN113570725A (en) Three-dimensional surface reconstruction method and device based on clustering, server and storage medium
CN112102169A (en) Infrared image splicing method and device and storage medium
CN115063485B (en) Three-dimensional reconstruction method, device and computer-readable storage medium
CN110717980A (en) Regional three-dimensional reconstruction method and device and computer readable storage medium
CN106651822A (en) Picture recovery method and apparatus
CN116012732A (en) Video generation method, device, equipment and storage medium
CN112348069B (en) Data enhancement method, device, computer readable storage medium and terminal equipment
CN113792671A (en) Method and device for detecting face synthetic image, electronic equipment and medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
RJ01 Rejection of invention patent application after publication

Application publication date: 20170510