CN102637293A - Moving image processing device and moving image processing method - Google Patents

Moving image processing device and moving image processing method Download PDF

Info

Publication number
CN102637293A
Authority
CN
China
Prior art keywords
subframe
frame
distance parameter
foreground
area
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN2011100377184A
Other languages
Chinese (zh)
Other versions
CN102637293B (en)
Inventor
三好雅则
伊藤诚也
李媛
沙浩
王瑾绢
吕越峰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hitachi Ltd
Original Assignee
Hitachi Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hitachi Ltd filed Critical Hitachi Ltd
Priority to CN201110037718.4A priority Critical patent/CN102637293B/en
Priority to JP2012012013A priority patent/JP2012168936A/en
Publication of CN102637293A publication Critical patent/CN102637293A/en
Application granted granted Critical
Publication of CN102637293B publication Critical patent/CN102637293B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention relates to a moving image processing device and a moving image processing method. Based on an atmospheric model, the device and method clarify video captured in foggy weather, improve image visibility, and also meet the real-time processing requirements of video well. In the video defogging process of the moving image processing device, the video is divided into core frames serving as main frames and ordinary frames serving as subframes. For a core frame, the distance parameter t(X) and the sky point parameter A are recomputed. For an ordinary frame, A is not recomputed and the A of the core frame is reused; for the background part of the ordinary frame, the t(X) of the corresponding region of the core frame is reused, while for the foreground part of the ordinary frame, t(X) is recomputed. The device and method accelerate the application of single-frame, atmospheric-model-based defogging algorithms to the defogging and clarification of moving images such as video, achieve a good defogging effect, and ensure real-time performance on moving images.

Description

Moving image processing device and moving image processing method
Technical Field
The present invention relates to a moving image processing device and a moving image processing method capable of clarifying moving images captured under weather conditions such as fog and sand-dust (such video is hereinafter referred to simply as "foggy-weather video").
Background Art
Outdoor video surveillance is usually affected by severe weather such as dense fog and sandstorms: visibility drops sharply, and the details of the video and the information of remote monitored scenes are lost. Outdoors, surveillance application scenarios are numerous, climate changes are complex, and fog and sand-dust weather occur frequently, particularly on urban roads and highways. Improving video clarity under fog and sand-dust weather has therefore become an urgent need in the field of video surveillance. Although existing camera products have a function for clarifying video, namely a defogging function, the prior art generally adopts simple image enhancement techniques, such as image histogram stretching, so the effect is poor.
In 2002, in the paper "Vision and the Atmosphere", Narasimhan et al. first proposed a defogging and clarification method based on the atmospheric model. However, the effect of the method proposed by Narasimhan et al. is unsatisfactory, and it requires two images of the same scene taken under different weather conditions as input in order to obtain the relevant information of the current scene before the defogging and clarification processing can be completed, so the preconditions for its application are strong.
In 2008-2009, Fattal, Kaiming He, and others made new breakthroughs in defogging and clarification methods based on the atmospheric model, proposing new methods that do not require multiple images as input: the defogging and clarification processing can be completed using only the information of the current image, and the defogging effect is better than that of existing methods that adopt simple image enhancement techniques. These new defogging and clarification methods are all based on the atmospheric model. The so-called "atmospheric model" describes the optical principle by which a camera photographs, or a human eye observes, an object when suspended particles are present in the atmosphere.
The atmospheric model can be expressed by the following formula (1):
I(X) = J(X)t(X) + A(1 - t(X))    (1)
Formula (1) acts on the three RGB color channels of the image.
Here, I(X) denotes the foggy image captured by the imaging device or observed by the human eye, i.e., the input image. X = (x, y) denotes the pixel coordinates of the image.
J(X) is the reflected object light, i.e., the fog-free image, which is the result image of the defogging process.
A denotes the sky point parameter, which is the vector of RGB values of an arbitrary point in the sky of the image (hereinafter referred to as the "sky point"). When there is no sky in the current input image, the point with the densest fog in the image is regarded as the sky point; all pixels of one image share the same sky point parameter. Like A, I(X) and J(X) are vectors of image RGB values.
t(X) is defined as the transmission function of the atmospheric medium. It describes the fraction of the reflected object light that remains after scattering by particles in the air and reaches the imaging device; that is, it expresses how much of the reflected object light can reach the imaging device or the human eye after atmospheric attenuation. It is a scalar greater than 0 and less than 1, and each pixel of the image has its own t(X).
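As an illustration of how formula (1) applies per pixel, the following sketch synthesizes a foggy image from a clear image, a transmission map, and a sky point parameter. All array shapes and values are illustrative, not taken from the patent:

```python
import numpy as np

def apply_atmospheric_model(J, t, A):
    """Formula (1): I(X) = J(X)*t(X) + A*(1 - t(X)), applied to all
    three RGB channels. J: (H, W, 3) in [0, 1]; t: (H, W); A: (3,)."""
    t3 = t[..., np.newaxis]  # broadcast the scalar t(X) over the RGB channels
    return J * t3 + A * (1.0 - t3)

# A 1x2 toy image: the left pixel is unattenuated, the right is half fog.
J = np.array([[[0.2, 0.4, 0.6], [0.8, 0.8, 0.8]]])
t = np.array([[1.0, 0.5]])
A = np.array([0.9, 0.9, 0.9])  # near-white sky point
I = apply_atmospheric_model(J, t, A)
```

Where t(X) = 1 the observed pixel equals J(X); where t(X) = 0.5 it is pulled halfway toward the sky point A.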
The above formula (1) is explained in detail below with reference to Fig. 6.
Fig. 6 is a schematic diagram of the atmospheric model formula. The image on the left side of Fig. 6 is the image I(X) observed by the human eye or the imaging device. This image I(X) consists of two parts: the first part, J(X)t(X), is the portion of the reflected object light that remains after scattering by particles in the air, and the second part, A(1 - t(X)), is the atmospheric ambient light produced by the particles scattering sunlight.
The transmission function t(X) of the atmospheric medium defined in formula (1) is a function of the distance between the subject (object) and the imaging device (human eye), and is specifically expressed as the following formula (2):
t(X) = e^(-βd(X))    (2)
Here, d(X) is the distance between the point X of an object in the image and the imaging device, so t(X) is also called the "distance parameter". β is the atmospheric scattering coefficient, which is a constant.
It can be seen from formulas (1) and (2) that the intensity J(X)t(X) with which the reflected object light reaches the imaging device decreases as the distance d(X) between the object and the imaging device increases: the farther the distance, the more severely the light is attenuated. Conversely, the intensity A(1 - t(X)) with which the atmospheric ambient light reaches the imaging device increases with the distance d(X): the farther the distance, the stronger this light, so regions at infinity appear white.
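The behavior described here follows directly from formula (2); a small sketch (the β value is illustrative, since β is a constant whose value the patent does not give):

```python
import math

def transmission(d, beta=0.01):
    """Formula (2): t(X) = exp(-beta * d(X)).
    d is the object-to-camera distance; beta is the (constant)
    atmospheric scattering coefficient, here an illustrative value."""
    return math.exp(-beta * d)

# The reflected-light term J*t decays and the airlight term A*(1 - t)
# grows as d increases, which is why distant regions look white.
t_near, t_far = transmission(50.0), transmission(500.0)
```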
Next, a defogging and clarification method based on the atmospheric model is explained with reference to Fig. 7.
As shown in Fig. 7, Fig. 7(A) is the input image I(X), Fig. 7(B) is the output image J(X) after the defogging and clarification processing, Fig. 7(C) is the calculated sky point parameter A, and Fig. 7(D) is the calculated t(X). According to Fig. 7, the defogging algorithm based on the atmospheric model can be simply summarized as follows: given a single foggy image as the input image I(X), obtain t(X) and the sky point parameter A, and then obtain the defogged result image J(X) through formula (1).
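Solving formula (1) for J(X) gives the recovery step. A minimal sketch, assuming t(X) and A are already known; clamping t(X) below at a small floor is a common practice assumed here (the patent does not specify it) to avoid amplifying noise where the fog is densest:

```python
import numpy as np

def dehaze(I, t, A, t_min=0.1):
    """Invert formula (1): J(X) = (I(X) - A) / t(X) + A,
    with t(X) clamped below at t_min (an assumption of this sketch)."""
    t3 = np.maximum(t, t_min)[..., np.newaxis]
    return (I - A) / t3 + A

# Round trip: fog a clear pixel with formula (1), then recover it.
A = np.array([0.9, 0.9, 0.9])
J_true = np.array([[[0.2, 0.4, 0.6]]])
t = np.array([[0.5]])
I = J_true * t[..., np.newaxis] + A * (1.0 - t[..., np.newaxis])
J_rec = dehaze(I, t, A)
```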
Existing single-frame defogging and clarification methods provide different algorithms for obtaining t(X) and A. These algorithms can currently achieve a good defogging effect, far better than defogging algorithms based on simple image enhancement. Table 1 lists three examples of methods for obtaining t(X) and A.
Table 1: New defogging algorithms based on the atmospheric model
[Table 1 is reproduced as an image (Figure BSA00000434131300031) in the original document.]
Compared with the traditional approach of defogging and clarification through simple image enhancement, these methods have a better defogging effect. However, one major shortcoming of these methods is that the algorithms are too slow and their real-time performance is very poor. In the field of video surveillance, defogging belongs to video preprocessing; compared with video compression or video content analysis, the defogging and clarification processing needs to be completed in as short a time as possible with minimal system consumption. Table 2 lists some processing times of defogging and clarification based on existing algorithms.
Table 2: Processing times of defogging and clarification based on the atmospheric model
Actual video surveillance and video compression often involve multiple frames, whereas the prior art mainly studies how to improve the defogging effect in the single-frame case; the application of single-frame defogging algorithms to multi-frame video has not been further studied. As can be seen from Table 2, a single 600x400 frame takes at least 10 seconds, so if the prior-art single-frame defogging and clarification method were applied directly to moving images such as video, real-time performance would be severely affected.
In addition, Patent Document 1 (Chinese patent CN101290680A) discloses a foggy-weather video clarification method based on overcorrected histogram equalization. Specifically, for application to video, it proposes an acceleration method for histogram-equalization defogging in which the histogram-equalization mapping table is reused between preceding and succeeding frames of the video to improve processing speed. However, the single-frame defogging algorithm in Patent Document 1 is a simple histogram-equalization image enhancement method; this method is not designed for the defogging environment, and its effect is unsatisfactory.
The differences between the invention of Patent Document 1 and the present invention are as follows. (1) The single-frame defogging algorithm of Patent Document 1 is a simple histogram-equalization image enhancement method, which is not designed for the defogging environment, so its effect is not particularly good; the single-frame defogging algorithm of the present invention is an up-to-date, dedicated defogging and clarification algorithm based on the atmospheric model, which is designed for the defogging environment and is better in effect than traditional image enhancement algorithms. (2) The invention of Patent Document 1 reuses the histogram-equalization mapping table between frames, whereas the present invention, according to the physical characteristic that the scene depth and the atmospheric environment do not change globally, reuses the unchanged part of t(X) between frames and thus has a better defogging effect.
Summary of the Invention
The object of the present invention is to provide a moving image processing device and a moving image processing method based on the atmospheric model that clarify foggy-weather video, improve image visibility, and meet the real-time processing requirements of video well.
The present invention aims to apply the existing various single-frame defogging algorithms quickly and effectively to multi-frame video surveillance and general video processing. Starting from the physical meaning of t(X), the inventors of the present invention analyzed that t(X) is related only to the depth of field and the atmospheric environment of the current scene: in most regions of the image, t(X) does not change greatly between preceding and succeeding frames, and only the foreground part of the image, where objects move, changes, because the motion of an object may change its distance to the imaging device. The sky point parameter A represents the RGB value of the sky point, or of the densest-fog point, in the image, and this parameter likewise does not change between preceding and succeeding frames. That is to say, the key parameters t(X) and A in formula (1) are related to the current scene depth and the atmospheric environment, and for the whole image these parameters are updated only slowly and locally. The present invention therefore exploits this characteristic of t(X) and A and, in a video defogging and clarification method based on the atmospheric model, reuses part of t(X) and A between preceding and succeeding frames. For the background of the image, the part that does not change between frames, t(X) is reused; for the foreground of the image, the part that changes between frames, t(X) is recomputed. Both the background and the foreground parts reuse the sky point parameter A.
To achieve the above object, the moving image processing device of the present invention, which generates an output moving image from an input moving image, is characterized by comprising: an input unit that inputs a moving image of a subject captured by an external or built-in imaging device; a processing unit that performs image processing on the input moving image; and an output unit that outputs the moving image after image processing. The processing unit analyzes a plurality of frames in the input moving image and judges whether each frame contains a scene change; a frame with a scene change is taken as a main frame, and a frame without a scene change is taken as a subframe. By comparing the subframe with the main frame preceding it, the subframe is divided into a changed foreground part and an unchanged background part. The main frame is image-processed according to a distance parameter related to the distance between the imaging device and the subject in that main frame; the background part of the subframe is image-processed according to the distance parameter of the main frame preceding the subframe; and the foreground part of the subframe is image-processed according to a distance parameter computed from the change in that subframe.
According to the moving image processing device and the moving image processing method of the present invention, which are designed for the defogging environment, the effect is better than that of traditional image enhancement algorithms. Moreover, according to the physical characteristic that the scene depth and the atmospheric environment do not change globally, the unchanged part of t(X) is reused between frames, giving a better defogging effect. The present invention therefore overcomes the slowness of existing single-frame, atmospheric-model-based defogging and clarification methods when applied to video and, while improving image visibility, can meet the real-time processing requirements of video. It is thus especially suitable for the field of video surveillance, and is of course also applicable to general video processing.
Description of the Drawings
Fig. 1 is a block diagram showing the structure of a system having the moving image processing device of the present invention.
Fig. 2 shows Embodiment 1 of the present invention, in which Fig. 2(a) is a flowchart and Fig. 2(b) is a schematic diagram of dividing the frames of a video into core frames and normal frames.
Fig. 3 is a comparison of the effects of image processing performed with the prior art and with Embodiment 1, respectively.
Fig. 4 shows Embodiment 2 of the present invention, in which Fig. 4(a) is the depth map of a core frame, Fig. 4(b) is the depth map of a normal frame, and Fig. 4(c) is the flowchart of Embodiment 2.
Fig. 5 shows Embodiment 2 of the present invention, in which Fig. 5(a) and Fig. 5(b) show the foreground motion regions of the foreground part of a normal frame, and Fig. 5(c) is the flowchart of Embodiment 3 of the present invention.
Fig. 6 is a schematic diagram of the atmospheric model formula.
Fig. 7 is a schematic diagram of the effect of defogging and clarification based on the atmospheric model.
Embodiments
The moving image processing device of the present invention is described below with reference to Fig. 1.
Fig. 1 is a block diagram showing the structure of the moving image processing device of the present invention.
As shown in Fig. 1, the moving image processing device of the present invention comprises an input unit 100, a processing unit 200, an output unit 300, and a shared memory 90. The input unit 100 is used to input a moving image such as a video formed by capturing a subject with an external or built-in imaging device (not shown). The processing unit 200 performs image processing on the video input from the input unit 100. The output unit 300 is, for example, a display, and is used to display the video processed by the processing unit 200. The shared memory 90 is used to store various data.
The processing unit 200 comprises a frame separation unit 10, a core frame parameter calculation unit 20, a core frame parameter storage unit 30, a normal frame parameter reuse unit 40, an image defogging unit 50, and a global control unit 60.
The frame separation unit 10 analyzes each frame of the video input from the input unit 100 and judges whether each frame contains a scene change; a frame with a scene change is taken as a core frame, which serves as the main frame, and a frame without a scene change is taken as a normal frame, which serves as a subframe.
The core frame parameter calculation unit 20 calculates the various parameters of an input core frame; these parameters include the transmission function t(X), serving as the distance parameter, and the sky point parameter A, and it is sufficient to calculate them with relevant prior-art single-frame defogging algorithms based on the atmospheric model.
The core frame parameter storage unit 30 stores the various parameters of the core frame calculated by the core frame parameter calculation unit 20.
The normal frame parameter reuse unit 40 divides a normal frame into a changed foreground part and an unchanged background part, and determines the parameters of the foreground part and the background part of the normal frame respectively. For example, for the changed foreground part of the normal frame, the normal frame parameter reuse unit 40 recomputes the transmission function t(X); for the unchanged background part of the normal frame, it does not recompute the transmission function t(X), but takes the transmission function t(X) of the core frame stored in the core frame parameter storage unit 30 (usually the last core frame before this normal frame) as the transmission function t(X) of this part. The sky point parameter A of that core frame is taken as the sky point parameter A of the whole normal frame.
The image defogging unit 50 performs image processing such as defogging and clarification on a core frame using the parameters calculated by the core frame parameter calculation unit 20, and performs image processing such as defogging and clarification on a normal frame using the parameters obtained by the normal frame parameter reuse unit 40. It is sufficient to perform these defogging and clarification processes with the prior art.
The global control unit 60 performs global control over the constituent units or modules of the processing unit 200.
An example of the moving image processing device of the present invention has been described above, but the invention is of course not limited to it, and various modifications can be made within the spirit of the present invention. For example, the processing unit 200 of Fig. 1 is composed of a plurality of units, but the plurality of units can of course also be integrated into a single module.
Embodiment 1 of the present invention is described below with reference to Fig. 2, in which Fig. 2(a) is a flowchart and Fig. 2(b) is a schematic diagram of dividing the frames of a video into core frames and normal frames.
The flow of atmospheric-model-based video defogging and clarification in the moving image processing device of the present invention is shown in Fig. 2(a). First, a video of a subject captured by an external or built-in imaging device is input through the input unit 100; this video is hereinafter called the input image I(X) (step S0).
Then, the frame separation unit 10 judges whether a scene change occurs at the current frame of the input video (step S1). This judgment of whether the scene has changed can be realized with existing algorithms.
In step S1, if the frame separation unit 10 judges that a scene change has occurred, the current frame is judged to be a core frame, and step S2 is then executed; if it judges that no scene change has occurred, the current frame is judged to be a normal frame, and step S3 is then executed.
Step S1 of Fig. 2(a) is further explained below with reference to Fig. 2(b).
As shown in Fig. 2(b), the video comprises frames 1, 2, ..., N+2, where frame 1 and frame N, drawn with thick solid lines, are core frames and the remaining frames are normal frames. Core frames and normal frames adopt different defogging algorithms. The division into core frames and normal frames can be done in two ways. The first way is to choose core frames at fixed intervals in the video, for example choosing one frame out of every 300 frames as a core frame, the rest being normal frames. The second way is to choose the current frame as a core frame when a scene switch occurs in the video, frames without a scene switch being normal frames. A "scene switch" means that the current scene changes, i.e., the background of the environment changes; many mature techniques and methods already exist for detecting scene switches.
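The two selection modes just described can be sketched in a single predicate. The interval value and the scene-switch flag are illustrative; the flag is assumed to come from one of the existing scene-switch detection methods the text mentions:

```python
def is_core_frame(frame_index, interval=300, scene_switched=False):
    """A frame is a core frame either at fixed intervals (first way)
    or whenever a scene switch is detected (second way)."""
    return scene_switched or frame_index % interval == 0
```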
Then, returning to Fig. 2(a): in step S2, the core frame parameter calculation unit 20 computes the transmission function t(X) and the sky point parameter A of the core frame using an existing single-frame defogging algorithm based on the atmospheric model. Then, according to formula (1), defogging and clarification processing is performed on the core frame to obtain the image J(X) (step S7). Finally, the processed image J(X) is output through the output unit 300 (step S8).
In step S3, the normal frame parameter reuse unit 40 divides the normal frame into two parts: the unchanged background part (the static region) and the changed foreground part (the moving region). The segmentation into background and foreground parts can be realized with existing motion detection techniques; a fairly simple way is to subtract the preceding frame from the current frame, which reveals the motion-changed part of the current frame.
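The simple frame-differencing segmentation mentioned here can be sketched as follows; the grey-level threshold is an illustrative value, not one given in the patent:

```python
import numpy as np

def foreground_mask(prev_gray, cur_gray, thresh=15):
    """Pixels whose grey level changed by more than `thresh` between
    the previous and current frame are taken as foreground (moving
    region); the rest is background (static region)."""
    diff = np.abs(cur_gray.astype(np.int32) - prev_gray.astype(np.int32))
    return diff > thresh

# One pixel jumps from 10 to 200: only that pixel becomes foreground.
prev_f = np.array([[10, 10], [10, 10]], dtype=np.uint8)
cur_f = np.array([[10, 10], [10, 200]], dtype=np.uint8)
mask = foreground_mask(prev_f, cur_f)
```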
For the background part of the normal frame, step S4 is executed: the normal frame parameter reuse unit 40 processes the background part by reusing the t(X) of the corresponding region of the last core frame as the transmission function of this background part, denoted here t1(X). For the foreground part of the normal frame, step S5 is executed: the transmission function t(X) of this part is recomputed, denoted here t2(X), the computation likewise adopting an existing single-frame defogging algorithm based on the atmospheric model.
After steps S4 and S5 are completed, step S6 is executed: the transmission function t(X) of the current normal frame is obtained from the transmission function t1(X) obtained in step S4 and the transmission function t2(X) obtained in step S5 (for example, by combining t1(X) and t2(X)), and the sky point parameter A of the core frame is taken as the sky point parameter of this normal frame. Then, according to the atmospheric model formula (1), the defogging and clarification processing of this normal frame is completed to obtain the image J(X) (step S7). Finally, the processed image J(X) is output through the output unit 300 (step S8).
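One plausible reading of "combining t1(X) and t2(X)" in step S6 is a per-pixel composition over the foreground mask from step S3; this is an interpretation for illustration, not a construction the patent spells out:

```python
import numpy as np

def combine_t(t1, t2, fg_mask):
    """Step S6 sketch: background pixels take t1(X) (reused from the
    core frame), foreground pixels take t2(X) (recomputed)."""
    return np.where(fg_mask, t2, t1)

t1 = np.full((2, 2), 0.8)                     # reused core-frame transmission
t2 = np.full((2, 2), 0.3)                     # recomputed foreground transmission
fg = np.array([[False, False], [False, True]])
t = combine_t(t1, t2, fg)
```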
Next, the processing speed of the moving image processing device of the present invention shown in Fig. 1 is described according to Table 3.
Table 3: Comparison of the processing speed of the prior art and of the present invention
[Table 3 is reproduced as an image (Figure BSA00000434131300081) in the original document.]
The data recorded in Table 3 are estimates. In the "prior art" row, the method of the paper "Single Image Haze Removal Using Dark Channel Prior" is assumed, whose processing time for a single 600x400 image is 10 seconds.
In the "present invention" row, since scene switches do not occur frequently in video surveillance scenes, it is assumed here that one scene switch occurs every 10 seconds and that the video acquisition rate is 30 frames per second, so that there is one core frame and 299 normal frames in every 300 frames (in actual surveillance the scene switch interval is longer than 10 seconds, so core frames occur even less often). If the single-frame defogging algorithm based on the atmospheric model were applied to every frame, each frame would take 10 seconds. That is to say, with the present invention the core frame processing time is 10 seconds; for a normal frame, considering that part of the region in surveillance video is stationary and that often there is no moving object, it is assumed that moving objects occupy on average 5% of the image area in each normal frame, so the processing time of a normal frame is 10 x 5% = 0.5 seconds, and the total average processing time over 300 frames is (1 x 10 + 299 x 0.5)/300 = 0.53 seconds. In actual video surveillance, the average scene switch interval is longer and the average proportion of image area occupied by moving objects is smaller, so the speed advantage of the present invention can improve further.
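The estimate above reduces to a small amortized-cost formula, reproduced here exactly with the numbers the text assumes:

```python
def average_time_per_frame(core_s=10.0, frames=300, moving_fraction=0.05):
    """One core frame per `frames` frames at `core_s` seconds each;
    each normal frame costs the moving-object fraction of a full
    single-frame computation (the text's 10 * 5% = 0.5 s)."""
    normal_s = core_s * moving_fraction        # 0.5 s per normal frame
    total = core_s + (frames - 1) * normal_s   # 1*10 + 299*0.5 seconds
    return total / frames

avg = average_time_per_frame()  # about 0.53 s per frame
```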
It can be seen from Table 3 that the moving image processing device of the present invention improves the processing speed of applying the single-frame, atmospheric-model-based defogging and clarification method to video.
Next, the image quality of the processing of the moving image processing device of the present invention shown in Fig. 1 is described according to Fig. 3.
Fig. 3 shows the images in the processing of the moving image processing device of the present invention shown in Fig. 1.
Fig. 3(a) is a core frame, an original foggy image serving as the input image I(X). Fig. 3(b) is a normal frame, assumed to be the 8th frame after the core frame; it is assumed that in this normal frame the part in the black box region at the bottom is the changed foreground part of this normal frame relative to the core frame shown in Fig. 3(a), and the other regions outside this box are the unchanged background part. Fig. 3(c) is the result image obtained for the normal frame of Fig. 3(b) with an existing single-frame defogging algorithm. Fig. 3(d) is the result image obtained with the moving image processing device of the present invention shown in Fig. 1 through the flow shown in Fig. 2.
Comparing Fig. 3(c), processed with the prior art, and Fig. 3(d), processed with the present invention, the visual effect is similar. Moreover, computing the PSNR between Fig. 3(c) and Fig. 3(d) gives PSNR = 31.365. The PSNR parameter is used in image processing and video compression to express the difference between an original image and a processed image; in video compression, when the PSNR value is between 30 and 50, the video quality is usually considered good. That is to say, in terms of defogging quality, the moving image processing device of the present invention shows no obvious degradation relative to the single-frame defogging algorithm.
As described above, the moving image processing device of Embodiment 1 of the present invention not only improves image visibility but also meets the real-time processing requirements of video well.
Embodiment 2 of the present invention is described below with reference to Fig. 4, in which Fig. 4(a) is the depth map of a core frame, Fig. 4(b) is the depth map of a normal frame, and Fig. 4(c) is the flowchart of Embodiment 2.
Embodiment 2 improves on Embodiment 1; the difference lies in the processing of the foreground part of a normal frame. In Embodiment 1, t(X) is recomputed for the foreground part of the normal frame, whereas in Embodiment 2, t(X) is recomputed for the foreground part of the normal frame only when a certain condition (specified below) is satisfied; otherwise the foreground part is treated the same as the background part, reusing the t(X) of the corresponding region of the core frame.
The aforesaid certain condition is specifically: judging whether the motion of an object causes the depth of field of the object in the image (i.e., the distance from the object to the imaging device) to change. If an object moves from a region far from the imaging device to a region near it, the motion of the object has caused the depth of field of the object in the image to change, and t(X) is recomputed for this region; if the motion of the object does not cause its depth of field in the image to change, the corresponding t(X) of the core frame is reused for this region. The principle is based on the above formula (2).
In the above formula (2), the transmission function t(X) is a function of the depth of field d(X) and the atmospheric scattering coefficient β; pixels in the image have different d(X), while the atmospheric scattering coefficient β is a constant related to the atmospheric environment. From the t(X) obtained in the defogging process, the relative depth information -βd(X) of the current scene can be obtained: although the concrete distance from an object in the image to the imaging device or the human eye cannot be learned, the relative depth information -βd(X) can be obtained. Besides enhancing image clarity, the defogging and clarification processing can thus also yield the relative depth of field of the image. Therefore, from the relative depth of field of the current scene obtained by defogging the core frame, it can be determined whether the motion of a moving object in the current normal frame produces a depth change, and corresponding processing can be made.
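The relative depth reading of formula (2) is a one-liner: taking the logarithm of the transmission gives -βd(X) directly, without ever knowing β or d(X) individually:

```python
import math

def relative_depth(t):
    """From formula (2), ln t(X) = -beta * d(X): the log of the
    transmission is the relative depth. The absolute distance d(X)
    stays unknown because beta is unknown."""
    return math.log(t)

# A nearer point (larger t) has a larger (less negative) relative depth.
near, far = relative_depth(0.8), relative_depth(0.2)
```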
In Fig. 4(a), the first picture on the left is the original foggy image of the core frame, and the second is the transmission function t(X) obtained in the core-frame defogging process; the third is the relative depth map -βd(x) obtained from t(X) via formula (2). The fourth picture is the relative depth segmentation of the scene (called the "depth map" for short), obtained by dividing the pixel values of the third picture into five regions 130, 131, 132, 133, 134; the brightness of these five regions goes from light to dark, indicating distance from near to far. When Embodiment 2 of the present invention performs image processing on an ordinary frame, this relative depth map of the scene obtained from the core frame is used.
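The five-region depth map of the fourth picture can be sketched as a quantization of the relative depth values into five bins. The equal-width binning rule and the 0-4 labels below are assumptions for illustration; the patent does not specify how the split into regions 130-134 is performed.

```python
import numpy as np

def make_depth_map(relative_depth, n_regions=5):
    """Quantize a relative depth map -beta*d(x) into n_regions labels.

    Label 0 = nearest (largest ln t), label n_regions-1 = farthest.
    Equal-width bins over the observed value range are an assumption.
    """
    lo, hi = relative_depth.min(), relative_depth.max()
    edges = np.linspace(lo, hi, n_regions + 1)[1:-1]  # interior bin edges
    # np.digitize gives higher bin indices to larger values (nearer pixels),
    # so flip the index: large relative depth (near) -> label 0.
    return (n_regions - 1) - np.digitize(relative_depth, edges)

# Toy transmission values, near (0.95) to far (0.05).
rel_depth = np.log(np.array([[0.95, 0.7, 0.5],
                             [0.3, 0.15, 0.05]]))
depth_map = make_depth_map(rel_depth)
print(depth_map)
```

The brightest pixels (highest transmission) land in the near label and the darkest in the far label, matching the light-to-dark ordering of regions 130-134 described for Fig. 4(a).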
In Fig. 4(b), the first picture on the left is an ordinary frame; box 140 marks a moving object in the foreground part of this ordinary frame. The second picture of Fig. 4(b) marks the position corresponding to box 140 (the white region). When box 140 moves within the same depth region 133, i.e., in the direction of the black arrow, no depth change occurs; in that case the transmission function t(X) of the foreground part marked by box 140 is not recomputed, and that part is treated as background. If box 140 moves in the direction of the white arrow, i.e., from region 133 of the depth map into region 134, a depth change occurs; the part marked by box 140 is then treated as foreground and its transmission function t(X) is recomputed.
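The region-crossing test of Fig. 4(b) can be sketched as follows. The depth map, the box coordinates, and the majority-label rule for deciding which region a box occupies are assumptions for illustration.

```python
import numpy as np

def region_label(depth_map, box):
    """Dominant depth-map label under a bounding box (x0, y0, x1, y1)."""
    x0, y0, x1, y1 = box
    patch = depth_map[y0:y1, x0:x1]
    values, counts = np.unique(patch, return_counts=True)
    return values[np.argmax(counts)]

def needs_recompute(depth_map, box_prev, box_curr):
    """True if the object crossed into a different depth region, i.e.
    its transmission t(X) must be recomputed (Embodiment 2)."""
    return region_label(depth_map, box_prev) != region_label(depth_map, box_curr)

# Toy depth map: left half is region 133, right half region 134.
depth_map = np.full((10, 10), 133)
depth_map[:, 5:] = 134

print(needs_recompute(depth_map, (0, 0, 3, 3), (1, 0, 4, 3)))  # stays in 133
print(needs_recompute(depth_map, (0, 0, 3, 3), (6, 0, 9, 3)))  # 133 -> 134
```

Motion along the black arrow corresponds to the first call (no recomputation, reuse the core frame's t(X)); motion along the white arrow corresponds to the second (recompute t(X)).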
Fig. 4(c) is the flowchart of Embodiment 2 of the present invention. Compared with the flowchart of Fig. 2(a), there are two differences: first, step S2 of Fig. 2(a) is replaced by step S2-1; second, step S5 of Fig. 2(a) is replaced by steps S5-1 to S5-3. The identical steps are therefore omitted, and only steps S2-1 and S5-1 to S5-3 are described.
As shown in Fig. 4(c), in step S2-1, in addition to calculating t(X) and the atmospheric light parameter A of the whole image as in step S2 of Fig. 2(b), the depth map of the current scene is also calculated.
In step S5-1, the ordinary-frame parameter reuse judging unit 40 judges whether the motion of the foreground part of the ordinary frame produces a depth change. If it is judged to change (YES in step S5-1), step S5-2 is executed next: the t(X) of the foreground part of the ordinary frame where the depth change occurred is recomputed to obtain t2(X). If it is judged not to change (NO in step S5-1), step S5-3 is executed next: the foreground part of the ordinary frame where no depth change occurred reuses the transmission function t(X) of the core frame to obtain t2(X).
Compared with Embodiment 1 of the present invention, Embodiment 2 can further improve processing speed, but extra memory must be added to the moving image processing device shown in Fig. 1 to store the depth map of the current scene. Moreover, since the depth of field must be segmented, and whether the foreground motion crosses regions of the depth map is used as the criterion for recomputing t(X) of an ordinary frame, the t(X) finally obtained for ordinary frames is slightly less accurate.
Below, Embodiment 3 of the present invention is described with reference to Fig. 5. Fig. 5(a) and Fig. 5(b) show the foreground motion regions of the foreground part of an ordinary frame, and Fig. 5(c) is the flowchart of Embodiment 3 of the present invention.
In practical applications, video defogging is a preprocessing step; defogging is often followed by more critical system tasks such as video compression and network transmission. In such a multitasking system, the computation time each component may consume is limited, so the region for which t(X) is recomputed in each ordinary frame is also limited. Let the algorithm time allotted in the multitasking system to the defogging and sharpening of an ordinary frame be the parameter T(budget); from this parameter, the maximum pixel area for which t(X) can be updated (i.e., the maximum update area) can be obtained, namely the parameter Max_Update_Size (0 <= Max_Update_Size <= Image Size). When the total area of the foreground motion regions in an ordinary frame exceeds Max_Update_Size, only the larger motion regions are chosen, selected in descending order of area so as to satisfy the Max_Update_Size limit; t(X) is recomputed and updated only for the chosen regions, while for the smaller motion regions t(X) is not recomputed and the corresponding core-frame part is reused.
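The selection rule above can be sketched as a greedy pick of motion regions, largest first, while the cumulative area stays within the budget. The region names and areas are illustrative; the patent does not fix the exact tie-breaking or stopping rule.

```python
def select_regions(regions, max_update_size):
    """Greedily pick motion regions, largest first, whose total area
    stays within the update budget; the rest reuse the core frame's t(X).

    regions: list of (region_id, area_in_pixels).
    Returns (recompute_ids, reuse_ids).
    """
    recompute, reuse, used = [], [], 0
    for rid, area in sorted(regions, key=lambda r: r[1], reverse=True):
        if used + area <= max_update_size:
            recompute.append(rid)
            used += area
        else:
            reuse.append(rid)
    return recompute, reuse

# Two motion regions as in Fig. 5(a): the budget only covers the larger one.
regions = [("left", 1200), ("right", 500)]
recompute, reuse = select_regions(regions, max_update_size=1500)
print(recompute, reuse)  # ['left'] ['right']
```

Setting max_update_size=0 makes every region fall into the reuse list, which reproduces the extreme case described below where all ordinary frames reuse the core frame's t(X) and A.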
As shown in Fig. 5(a), the foreground part of this ordinary frame has two larger motion regions, marked by white boxes. To satisfy the Max_Update_Size requirement, as shown in Fig. 5(b), the transmission function t(X) can be recomputed only for the left region, the larger of the two motion regions in area; the smaller motion region on the right is treated as background, reusing the transmission function t(X) of the corresponding core-frame part.
In addition, a common extreme case is Max_Update_Size = 0: the defogging and sharpening process then computes t(X) and the atmospheric light parameter A once, in the core frame, and all remaining ordinary frames reuse the t(X) and A of the core frame.
As shown in Fig. 5(c), the flowchart of Embodiment 3 differs from the flowchart of Embodiment 1 (Fig. 2(a)) in only one place: steps S5-4, S5-5 and S5-6 replace the original step S5. The identical steps are therefore omitted, and only steps S5-4 to S5-6 are described.
In step S5-4, different regions are selected one by one according to the area of the motion regions in the foreground part, in descending order of area, until the total area of the selected regions reaches the parameter Max_Update_Size.
Then, step S5-5 is executed: the transmission function t(X) of the larger-area regions selected in step S5-4 is recomputed, obtaining t2(X) for those regions.
Then, step S5-6 is executed: for the smaller regions not selected in step S5-4, the transmission function t(X) of the core frame is reused, obtaining t2(X) for those regions.
According to Embodiment 3, the running time spent on each part of the foreground of an ordinary frame can be controlled, which has strong practical value.
The moving image processing device and moving image processing method of the present invention are especially suitable for the field of video surveillance, and can also be used in any device related to images and video, such as imaging devices, decoders, and cameras.

Claims (8)

1. A moving image processing device that performs image processing on an input moving image and outputs the result, characterized in that
it comprises: an input unit that inputs a moving image formed by photographing a subject with an external or built-in imaging device; a processing unit that performs image processing on the input moving image; and an output unit that outputs the moving image after image processing;
wherein said processing unit
analyzes a plurality of frames in said input moving image, judges whether there is a scene change in each frame, takes a frame with a scene change as a main frame, and takes a frame without a scene change as a subframe,
compares said subframe with the main frame preceding said subframe, and divides said subframe into a changed foreground part and an unchanged background part,
performs, on said main frame, image processing according to a distance parameter related to the distance between said imaging device and the subject in that main frame, and
performs, on the background part of said subframe, image processing according to said distance parameter of the main frame preceding said subframe, and performs, on the foreground part of said subframe, image processing according to a distance parameter calculated based on the change in that subframe.
2. The moving image processing device according to claim 1, characterized in that
for the changed part in the foreground part of said subframe, said processing unit, according to a depth map having a plurality of regions, recomputes the distance parameter of the foreground part of said subframe when the movement of the subject in the foreground part of said subframe is from a certain region of said depth map into a region other than that region, and uses the same distance parameter as the distance parameter of the main frame preceding said subframe when the movement of the subject in the foreground part of said subframe stays within the same region,
wherein said depth map is formed by segmenting the foreground part of said subframe according to the depth of the distance from said imaging device to said subject.
3. The moving image processing device according to claim 2, characterized in that
when there are a plurality of changed parts in the foreground part of said subframe, said processing unit selects the part with the largest area among the plurality of changed parts, and preferentially calculates said distance parameter for that largest-area part.
4. The moving image processing device according to claim 3, characterized in that
in a case where the maximum area for which said distance parameter can be calculated in the foreground part of said subframe has been preset as a maximum update area, when the area of the changed parts in said foreground part exceeds said maximum update area, the changed parts in said foreground part that exceed said maximum update area use the same distance parameter as the distance parameter of the main frame preceding said subframe.
5. A moving image processing method that performs image processing on an input moving image and outputs the result, characterized by comprising the steps of:
analyzing a plurality of frames in the input moving image, judging whether there is a scene change in each frame, taking a frame with a scene change as a main frame, and taking a frame without a scene change as a subframe;
comparing said subframe with the main frame preceding said subframe, and dividing said subframe into a changed foreground part and an unchanged background part;
performing, on said main frame, image processing according to a distance parameter related to the distance between said imaging device and the subject in that main frame; and
performing, on the background part of said subframe, image processing according to said distance parameter of the main frame preceding said subframe, and performing, on the foreground part of said subframe, image processing according to a distance parameter calculated based on the change in that subframe.
6. The moving image processing method according to claim 5, characterized in that
for the changed part in the foreground part of said subframe, according to a depth map having a plurality of regions, the distance parameter of the foreground part of said subframe is recomputed when the movement of the subject in the foreground part of said subframe is from a certain region of said depth map into a region other than that region, and the same distance parameter as the distance parameter of the main frame preceding said subframe is used when the movement of the subject in the foreground part of said subframe stays within the same region,
wherein said depth map is formed by segmenting the foreground part of said subframe according to the depth of the distance from said imaging device to said subject.
7. The moving image processing method according to claim 6, characterized in that
when there are a plurality of changed parts in the foreground part of said subframe, the part with the largest area among the plurality of changed parts is selected, and said distance parameter is preferentially calculated for that largest-area part.
8. The moving image processing method according to claim 7, characterized in that
in a case where the maximum area for which said distance parameter can be calculated in the foreground part of said subframe has been preset as a maximum update area, when the area of the changed parts in said foreground part exceeds said maximum update area, the changed parts in said foreground part that exceed said maximum update area use the same distance parameter as the distance parameter of the main frame preceding said subframe.
CN201110037718.4A 2011-02-12 2011-02-12 Moving image processing device and moving image processing method Expired - Fee Related CN102637293B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201110037718.4A CN102637293B (en) 2011-02-12 2011-02-12 Moving image processing device and moving image processing method
JP2012012013A JP2012168936A (en) 2011-02-12 2012-01-24 Animation processing device and animation processing method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201110037718.4A CN102637293B (en) 2011-02-12 2011-02-12 Moving image processing device and moving image processing method

Publications (2)

Publication Number Publication Date
CN102637293A true CN102637293A (en) 2012-08-15
CN102637293B CN102637293B (en) 2015-02-25

Family

ID=46621679

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201110037718.4A Expired - Fee Related CN102637293B (en) 2011-02-12 2011-02-12 Moving image processing device and moving image processing method

Country Status (2)

Country Link
JP (1) JP2012168936A (en)
CN (1) CN102637293B (en)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103077500A (en) * 2012-12-30 2013-05-01 信帧电子技术(北京)有限公司 Image data defogging method and device
CN104112251A (en) * 2013-04-18 2014-10-22 信帧电子技术(北京)有限公司 Method and device for defogging video image data
CN105550999A (en) * 2015-12-09 2016-05-04 西安邮电大学 Video image enhancement processing method based on background reuse
CN106462947A (en) * 2014-06-12 2017-02-22 Eizo株式会社 Haze removal device and image generation method
CN107451969A (en) * 2017-07-27 2017-12-08 广东欧珀移动通信有限公司 Image processing method, device, mobile terminal and computer-readable recording medium
CN107845078A (en) * 2017-11-07 2018-03-27 北京航空航天大学 A kind of unmanned plane image multithreading clarification method of metadata auxiliary
CN110866486A (en) * 2019-11-12 2020-03-06 Oppo广东移动通信有限公司 Subject detection method and apparatus, electronic device, and computer-readable storage medium
CN116249018A (en) * 2023-05-11 2023-06-09 深圳比特微电子科技有限公司 Dynamic range compression method and device for image, electronic equipment and storage medium

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103226809B (en) * 2012-01-31 2015-11-25 株式会社日立制作所 Image demister and image haze removal method
KR101394361B1 (en) 2012-11-21 2014-05-14 중앙대학교 산학협력단 Apparatus and method for single image defogging using alpha matte estimation and image fusion
JP6324192B2 (en) * 2014-04-25 2018-05-16 キヤノン株式会社 Image processing apparatus, imaging apparatus, image processing method, and program
CN107945546A (en) * 2017-11-17 2018-04-20 嘉兴四维智城信息科技有限公司 Expressway visibility early warning system and method for traffic video automatic identification
CN107808368A (en) * 2017-11-30 2018-03-16 中国电子科技集团公司第三研究所 A kind of color image defogging method under sky and ocean background
CN109166081B (en) * 2018-08-21 2020-09-04 安徽超远信息技术有限公司 Method for adjusting target brightness in video visibility detection process
JP7421273B2 (en) * 2019-04-25 2024-01-24 キヤノン株式会社 Image processing device and its control method and program

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2007042040A (en) * 2005-07-29 2007-02-15 Hexagon:Kk Three-dimensional stereoscopic vision generator
US7403202B1 (en) * 2005-07-12 2008-07-22 Electronic Arts, Inc. Computer animation of simulated characters using combinations of motion-capture data and external force modelling or other physics models
CN101290680A (en) * 2008-05-20 2008-10-22 西安理工大学 Foggy day video frequency image clarification method based on histogram equalization overcorrection restoration
US20090251468A1 (en) * 2008-04-03 2009-10-08 Peled Nachshon Animating of an input-image to create personal worlds
CN101699509A (en) * 2009-11-11 2010-04-28 耿则勋 Method for recovering atmosphere fuzzy remote image with meteorological data


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
DONG Huiying et al.: "Image restoration method under deteriorated weather based on physical models and its application", Journal of Northeastern University (Natural Science), vol. 26, no. 3, 31 March 2005 (2005-03-31), pages 217-219 *

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103077500A (en) * 2012-12-30 2013-05-01 信帧电子技术(北京)有限公司 Image data defogging method and device
CN104112251A (en) * 2013-04-18 2014-10-22 信帧电子技术(北京)有限公司 Method and device for defogging video image data
CN106462947B (en) * 2014-06-12 2019-10-18 Eizo株式会社 Demister and image generating method
CN106462947A (en) * 2014-06-12 2017-02-22 Eizo株式会社 Haze removal device and image generation method
CN105550999A (en) * 2015-12-09 2016-05-04 西安邮电大学 Video image enhancement processing method based on background reuse
CN107451969A (en) * 2017-07-27 2017-12-08 广东欧珀移动通信有限公司 Image processing method, device, mobile terminal and computer-readable recording medium
CN107451969B (en) * 2017-07-27 2020-01-10 Oppo广东移动通信有限公司 Image processing method, image processing device, mobile terminal and computer readable storage medium
CN107845078A (en) * 2017-11-07 2018-03-27 北京航空航天大学 A kind of unmanned plane image multithreading clarification method of metadata auxiliary
CN107845078B (en) * 2017-11-07 2020-04-14 北京航空航天大学 Unmanned aerial vehicle image multithreading sharpening method assisted by metadata
CN110866486A (en) * 2019-11-12 2020-03-06 Oppo广东移动通信有限公司 Subject detection method and apparatus, electronic device, and computer-readable storage medium
CN110866486B (en) * 2019-11-12 2022-06-10 Oppo广东移动通信有限公司 Subject detection method and apparatus, electronic device, and computer-readable storage medium
US12039767B2 (en) 2019-11-12 2024-07-16 Guangdong Oppo Mobile Telecommunications Corp., Ltd Subject detection method and apparatus, electronic device, and computer-readable storage medium
CN116249018A (en) * 2023-05-11 2023-06-09 深圳比特微电子科技有限公司 Dynamic range compression method and device for image, electronic equipment and storage medium
CN116249018B (en) * 2023-05-11 2023-09-08 深圳比特微电子科技有限公司 Dynamic range compression method and device for image, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN102637293B (en) 2015-02-25
JP2012168936A (en) 2012-09-06

Similar Documents

Publication Publication Date Title
CN102637293B (en) Moving image processing device and moving image processing method
CN109740465B (en) Lane line detection algorithm based on example segmentation neural network framework
CN102750674B (en) Video image defogging method based on self-adapting allowance
CN106157267B (en) Image defogging transmissivity optimization method based on dark channel prior
CN105631831B (en) Video image enhancing method under the conditions of a kind of haze
CN103218778B (en) The disposal route of a kind of image and video and device
CN103747213A (en) Traffic monitoring video real-time defogging method based on moving targets
CN103186887B (en) Image demister and image haze removal method
CN107451966B (en) Real-time video defogging method implemented by guiding filtering through gray level image
CN104867121B (en) Image Quick demisting method based on dark primary priori and Retinex theories
CN103077504B (en) A kind of image defogging method capable based on self-adaptation illumination calculation
CN102663694A (en) Digital fog effect filter method based on dark primary color channel prior principle
CN105913390B (en) A kind of image defogging method and system
CN104299192A (en) Single image defogging method based on atmosphere light scattering physical model
CN104134194A (en) Image defogging method and image defogging system
CN102447925A (en) Method and device for synthesizing virtual viewpoint image
CN107977942A (en) A kind of restored method of the single image based on multi-focus image fusion
CN103226809B (en) Image demister and image haze removal method
CN105701783A (en) Single image defogging method based on ambient light model and apparatus thereof
CN110443759A (en) A kind of image defogging method based on deep learning
CN107730472A (en) A kind of image defogging optimized algorithm based on dark primary priori
CN115423812B (en) Panoramic monitoring planarization display method
CN109118450A (en) A kind of low-quality images Enhancement Method under the conditions of dust and sand weather
CN104159098B (en) The translucent edge extracting method of time domain consistence of a kind of video
CN102034230A (en) Method for enhancing visibility of image

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20150225

Termination date: 20190212

CF01 Termination of patent right due to non-payment of annual fee