CN106327493B - A multi-view image object detection method based on visual saliency - Google Patents

A multi-view image object detection method based on visual saliency

Info

Publication number
CN106327493B
CN106327493B (application CN201610712411.2A)
Authority
CN
China
Prior art keywords
saliency maps
projection
image
view
fusion
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201610712411.2A
Other languages
Chinese (zh)
Other versions
CN106327493A (en)
Inventor
徐进 (Xu Jin)
傅志中 (Fu Zhizhong)
郭文波 (Guo Wenbo)
周宁 (Zhou Ning)
李晓峰 (Li Xiaofeng)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Electronic Science and Technology of China
Original Assignee
University of Electronic Science and Technology of China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Electronic Science and Technology of China filed Critical University of Electronic Science and Technology of China
Priority to CN201610712411.2A
Publication of CN106327493A
Application granted
Publication of CN106327493B
Legal status: Expired - Fee Related

Links

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/0002 - Inspection of images, e.g. flaw detection

Abstract

The invention discloses a multi-view image object detection method based on visual saliency, for scenes in which the foreground targets are not occluded. The method computes the saliency maps of multiple view images and, using the spatial relationship between the views, projects the saliency maps of the two side views into the intermediate target view; the projected saliency maps are then fused with the saliency map of the intermediate view to obtain a fused saliency map. Regions occluded by foreground objects cannot be truly mapped into the target view during projection, so projection holes appear around the foreground targets in the projected saliency maps; these projection-hole regions are treated as background in the fused saliency map. The projection holes are further used to partition the image: the regions between a projection hole and the image edge, and the regions between the projection holes of different foreground objects, are likewise treated as background. In the fused saliency map, the saliency values of all background regions derived above are set to zero, and after binarization a target object with clear edges and free of background interference is obtained.

Description

A multi-view image object detection method based on visual saliency
Technical field
The present invention belongs to the technical field of image object detection, and more particularly relates to an object detection method for multi-view images.
Background technique
Visual saliency refers to the ability of the human visual system, when observing the outside world, to autonomously explore a scene and perceive the salient information at each position. The principle derives from bionics research on the human visual system: a computational model is built according to biological neural principles, structured to mimic the way the nervous system captures and processes external information, so as to perceive the salient targets in a scene.
A survey of the prior art shows that for simple scene images, target detection based on visual saliency readily achieves good results, whereas for complex scene images, detection methods based on the saliency information of a single image often cannot accurately locate and judge the target region. Using multiple images can supplement the target information and improve detection accuracy. Existing saliency computation over multiple images focuses mainly on joint saliency detection over several similar images; its main problem is that sources of similar images are limited, making it unsuitable for practical application. With the development of 3D technology, multi-view images provide another route to saliency-based target detection from multiple images: multi-view saliency can fuse more information, suppress a complex background and highlight the salient target, yielding a salient target with little background interference and sharp edges.
Summary of the invention
The object of the invention is, for scenes in which the foreground target is not occluded, to provide an object detection method for multi-view images, so as to detect and extract unoccluded targets in complex background scenes.
For scenes in which the foreground target is not occluded, the invention computes the saliency maps of multiple view images, projects the saliency maps of the two side views into the intermediate target view, and eliminates the unclosed slits produced in the projected maps by the discrete mapping of pixels, obtaining the projection saliency maps of the two side views. The projection saliency maps of the side views are then fused with the saliency map of the intermediate view to obtain a fused saliency map. The hole regions produced by the projection are treated as background and eliminated in the fused saliency map. The projection holes are further used to partition the fused saliency map: the regions between the projection holes and the image edges, and the regions between the projection holes of different objects, are treated as background and eliminated in the fused saliency map. Finally, the fused saliency map is binarized to obtain the target detection result.
A multi-view image object detection method based on visual saliency according to the invention comprises the following steps:
Step 1: input the left, center and right view images of the same scene, in which the foreground target is not occluded, and compute the saliency map of each view image, obtaining the left, center and right saliency maps;
Step 2: project the left and right saliency maps pixel-wise into the center view image to obtain the left and right projected maps, recording the projection-hole regions during projection; eliminate the unclosed slits in the left and right projected maps to obtain the left and right projection saliency maps;
Step 3: fuse the left and right projection saliency maps with the center saliency map to obtain a fused saliency map, and eliminate in the fused saliency map the saliency of the recorded projection-hole regions;
Step 4: based on the recorded projection-hole regions, partition the fused saliency map, taking the regions between the projection holes and the image edges and the regions between the projection holes of different image objects as background regions, and eliminate the saliency of these background regions in the fused saliency map;
Step 5: binarize the fused saliency map processed in step 4 and output the target detection result.
Compared with the prior art, for scenes in which the foreground is not occluded, the method better suppresses the background and yields a salient target with little background interference and sharp edges.
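Before the drawings and the detailed embodiment, the five steps can be tied together in a single sketch. This is a hedged outline only: the helper functions (compute_saliency, project_to_center, split_holes, repair_slits, background_mask_from_holes) are the illustrative sketches given alongside the detailed embodiment below, the per-pixel disparity maps are an assumed way of driving the view projection, and none of these names or signatures come from the patent itself.

```python
import numpy as np

def detect_objects(img_l, img_c, img_r, disp_l, disp_r):
    """Hypothetical end-to-end skeleton of steps 1-5 (hedged sketch)."""
    # Step 1: per-view saliency maps (compute_saliency is a placeholder
    # for any saliency detector, e.g. a background-prior method)
    s_l, s_c, s_r = (compute_saliency(im) for im in (img_l, img_c, img_r))
    # Step 2: project side views into the center view, split the unmapped
    # pixels into thin slits and occlusion holes, and repair the slits
    s_lp, unmapped_l = project_to_center(s_l, disp_l)
    s_rp, unmapped_r = project_to_center(s_r, disp_r)
    slits_l, occl_l = split_holes(unmapped_l)
    slits_r, occl_r = split_holes(unmapped_r)
    s_lp = repair_slits(s_lp, slits_l)
    s_rp = repair_slits(s_rp, slits_r)
    # Step 3: weighted fusion (w = 0.25, 0.5, 0.25), zero the occlusion holes
    fused = 0.25 * s_lp + 0.5 * s_c + 0.25 * s_rp
    occl = occl_l | occl_r
    fused[occl] = 0.0
    # Step 4: zero the background regions delimited by the hole lines
    fused[background_mask_from_holes(occl)] = 0.0
    # Step 5: binarize at half the mean saliency
    return (fused >= fused.mean() / 2.0).astype(np.uint8)
```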
Brief description of the drawings
Fig. 1 is the flow chart of the specific embodiment of the invention.
Fig. 2 shows the projection from the left and right views to the center view.
Fig. 3 is a schematic diagram of slit repair.
Fig. 4 shows the multi-view fused saliency map.
Fig. 5 is a schematic diagram of eliminating the background around the target.
Fig. 6 is a schematic diagram of partitioning the image using projection holes.
Fig. 7 is a schematic diagram of the multi-view target detection results.
Specific embodiment
To make the objects, technical solutions and advantages of the present invention clearer, the invention is described in further detail below with reference to the embodiments and the accompanying drawings.
Referring to Fig. 1, a multi-view image object detection method based on visual saliency according to the invention includes the following steps:
Step 1: input the multi-view images of the same scene in which the foreground target is not occluded (the left view, the center view (i.e. the intermediate target view) and the right view, shown as a, b and c in Fig. 2), compute the saliency of each separately to obtain the saliency map of each view, and perform the view projection.
For example, the visual saliency maps of the left, center and right view images (Fig. 2 a-c) are computed with the background-prior (BP) method, giving the left saliency map (Fig. 2-d), the center saliency map (Fig. 2-e) and the right saliency map (Fig. 2-f). The saliency maps of the left and right views are then projected pixel-wise into the center view image, giving the two projected maps of the left and right views, as shown in Fig. 2 g-h.
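The text does not give the projection equation. The sketch below assumes rectified views with a known per-pixel horizontal disparity map, so that a side-view pixel (y, x) lands at (y, x - d) in the center view; this is one common formulation and an assumption, not the authors' stated mapping.

```python
import numpy as np

def project_to_center(saliency, disparity):
    """Warp a side-view saliency map into the center view (hedged sketch).

    Assumes rectified views: pixel (y, x) in the side view lands at
    (y, x - disparity[y, x]) in the center view. Center-view pixels that
    receive no mapping are returned as a hole mask (slits plus occlusion
    holes). When several source pixels hit the same target pixel, the
    last write wins (a sketch-level simplification).
    """
    h, w = saliency.shape
    projected = np.zeros_like(saliency)
    mapped = np.zeros((h, w), dtype=bool)
    ys, xs = np.mgrid[0:h, 0:w]
    xt = np.round(xs - disparity).astype(int)   # discrete target column
    valid = (xt >= 0) & (xt < w)
    projected[ys[valid], xt[valid]] = saliency[valid]
    mapped[ys[valid], xt[valid]] = True
    return projected, ~mapped                   # hole mask = never written
```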
During projection, the unclosed regions of the intermediate target view that receive no mapping are recorded. These unclosed regions fall into two kinds: first, unclosed slit regions produced by the discreteness of the pixels during the view mapping; second, projection-hole regions produced by occlusion between the views.
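The patent names the two kinds of unmapped region but not a test for telling them apart. One simple, assumed criterion is connected-component size: slits caused by discrete mapping are only a pixel or two wide, while occlusion holes form large components. A sketch:

```python
import cv2
import numpy as np

def split_holes(hole_mask, min_hole_area=50):
    """Split unmapped pixels into thin slits and occlusion holes (sketch).

    Components smaller than min_hole_area (an assumed tuning parameter)
    are treated as unclosed slits; larger ones as projection holes
    caused by occlusion.
    """
    n, labels, stats, _ = cv2.connectedComponentsWithStats(
        hole_mask.astype(np.uint8), connectivity=8)
    slits = np.zeros_like(hole_mask)
    occl = np.zeros_like(hole_mask)
    for i in range(1, n):                  # label 0 is the mapped region
        region = labels == i
        if stats[i, cv2.CC_STAT_AREA] < min_hole_area:
            slits |= region
        else:
            occl |= region
    return slits, occl
```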
Step 2: eliminate the unclosed slits in the projected maps of the left and right views, obtaining the left and right projection saliency maps.
201: extract the unclosed slits in the left projected map S_Lp and the right projected map S_Rp, obtaining the mask images I_L-mask and I_R-mask, where I_L-mask corresponds to S_Lp and I_R-mask corresponds to S_Rp; and initialize the left and right repaired maps I_Ls = S_Lp, I_Rs = S_Rp.
202: apply a block-wise discrete cosine transform (DCT) to the maps I_Ls and I_Rs, set the high-frequency DCT coefficients to zero (the coefficients whose row or column index exceeds half the block height or block width), and apply the inverse DCT to obtain the images I_Lp and I_Rp, where I_Lp corresponds to I_Ls and I_Rp corresponds to I_Rs.
203: use the images I_Lp and I_Rp to fill the unclosed regions of the projected maps S_Lp and S_Rp, obtaining the new repaired maps I_Ls = I_Ls + I_Lp ∩ I_L-mask, I_Rs = I_Rs + I_Rp ∩ I_R-mask.
204: if unclosed slits still remain in the current repaired maps I_Ls and I_Rs, return to step 202; otherwise take the current repaired maps I_Ls and I_Rs as the left and right projection saliency maps, as shown in Fig. 3, where Fig. 3-a corresponds to the left projection saliency map and Fig. 3-b to the right projection saliency map.
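A minimal sketch of the repair loop of steps 201 to 204, assuming 8x8 DCT blocks (the text does not fix a block size) and an assumed convergence test that treats still-empty slit pixels as unfilled:

```python
import cv2
import numpy as np

def repair_slits(proj, slit_mask, block=8, max_iters=20):
    """Steps 201-204: fill unclosed slits by iterated block-DCT low-pass
    synthesis (hedged sketch; block size and iteration cap are assumed).
    """
    repaired = proj.astype(np.float32)            # 201: I_s = S_p
    h, w = repaired.shape
    for _ in range(max_iters):
        if not slit_mask.any():                   # 204: no slits remain
            break
        lowpass = np.zeros_like(repaired)
        for y in range(0, h - h % block, block):  # 202: block DCT, then
            for x in range(0, w - w % block, block):  # zero high frequencies
                c = cv2.dct(repaired[y:y+block, x:x+block])
                c[block // 2:, :] = 0             # rows >= half block height
                c[:, block // 2:] = 0             # cols >= half block width
                lowpass[y:y+block, x:x+block] = cv2.idct(c)
        repaired[slit_mask] = lowpass[slit_mask]  # 203: fill slit pixels only
        # assumed test: pixels still at zero are treated as unfilled slits
        slit_mask = slit_mask & (repaired == 0)
    return repaired
```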
Step 3: fuse the multi-view saliency maps and eliminate the hole regions around the target.
In general, when the multi-view saliency maps are summed with weights, if the view correlation coefficient between each side view and the intermediate view is 0.5 and the correlation coefficient of the intermediate view with itself is 1, that is, the view correlation coefficients r1, r2, r3 of the left projection saliency map, the center saliency map and the right projection saliency map are 0.5, 1 and 0.5 respectively, then the formula w1 = r1/(r1+r2+r3) gives the weighting coefficient w1 = 0.25 of the left projection saliency map, the formula w2 = r2/(r1+r2+r3) gives the weighting coefficient w2 = 0.5 of the center saliency map, and the formula w3 = r3/(r1+r2+r3) gives the weighting coefficient w3 = 0.25 of the right projection saliency map. Based on these weighting coefficients, the weighted sum of the left projection saliency map, the center saliency map and the right projection saliency map yields the fused saliency map, as shown in Fig. 4.
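In code, the fusion reduces to normalizing the view correlation coefficients into weights and taking a weighted sum; a sketch, with the default r = (0.5, 1, 0.5) reproducing the weights 0.25, 0.5, 0.25 above:

```python
import numpy as np

def fuse_saliency(s_lp, s_c, s_rp, r=(0.5, 1.0, 0.5)):
    """Step 3 fusion (sketch): normalize the view correlation coefficients
    r1, r2, r3 into weights w_i = r_i / (r1 + r2 + r3) and take the
    weighted sum of the three saliency maps."""
    w = np.asarray(r, dtype=np.float32) / sum(r)
    return w[0] * s_lp + w[1] * s_c + w[2] * s_rp
```

The hole elimination of the next paragraph is then a single masked assignment, fused[hole_mask] = 0.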
In the fused saliency map, the saliency values of the projection-hole regions produced by occlusion between the views are set to zero; the saliency map after eliminating the projection-hole regions is shown in Fig. 5.
Step 4: partition the image using the projection holes and eliminate the background.
Based on the projection-hole regions recorded in step 1, the fused saliency map is partitioned.
Referring to Fig. 6-a, region 1 is the image target, region 2 is a projection-hole region produced by the projection, region 3 is the background region outside the projection holes, and region 4 is left unprocessed (because it cannot be determined whether it lies outside a projection hole).
According to this partition, the middle of each pair of holes, taken from left to right, is treated as one target object, and morphological processing is applied to the projection holes to thin them into lines, i.e. hole lines.
Then the left hole line of the leftmost target object is connected with the left edge corners of the image to form a closed region; the right hole line of the rightmost target object is connected with the right edge corners of the image to form a closed region; and for two adjacent target objects, the two ends of the right hole line of the left object and of the left hole line of the right object are connected to form a closed region. The three kinds of closed regions above are merged to obtain the background region; see the three closed regions shown in Fig. 6-b.
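A sketch of this background construction. The row-wise pairing of hole-line crossings below is an assumed simplification of the closed-region construction in the text (connecting hole lines to the image edge corners and to each other), and skeletonize stands in for the morphological thinning:

```python
import numpy as np
from skimage.morphology import skeletonize

def background_mask_from_holes(occl_holes):
    """Step 4 (hedged sketch): thin the occlusion holes to lines, then mark
    as background everything outside each (left line, right line) pair.

    Assumes the holes flank the targets in left-to-right pairs, as in
    Fig. 6; rows crossing fewer than two hole lines are left untouched.
    """
    h, w = occl_holes.shape
    lines = skeletonize(occl_holes)          # morphological thinning
    bg = np.zeros((h, w), dtype=bool)
    for y in range(h):
        xs = np.flatnonzero(lines[y])        # crossings, left to right
        if len(xs) < 2:
            continue
        bounds = [0] + list(xs) + [w]
        # spans 0, 2, 4, ... lie outside the (left, right) line pairs
        for i in range(0, len(bounds) - 1, 2):
            bg[y, bounds[i]:bounds[i + 1]] = True
    return bg
```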
Finally, the saliency values of the background regions in the fused saliency map are set to zero, as shown in Fig. 7-a.
Step 5: binarize the fused saliency map after the background interference has been eliminated. The threshold th is half of the average gray level of the image, i.e. th = (1/(2MN)) * Σ_x Σ_y S(x, y), where M and N are the width and height of the image and S(x, y) denotes the gray level. In the fused saliency map, pixels whose saliency value is greater than or equal to th are set to 1 and pixels whose saliency value is less than th are set to 0, giving the final target detection result, as shown in Fig. 7-b.
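The thresholding of step 5, as a sketch:

```python
import numpy as np

def binarize(fused):
    """Step 5 (sketch): threshold at half the image's mean gray level,
    th = (1 / (2 * M * N)) * sum of S(x, y) over the image."""
    th = fused.mean() / 2.0
    return (fused >= th).astype(np.uint8)
```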
The above description is merely a specific embodiment of the invention. Any feature disclosed in this specification may, unless specifically stated otherwise, be replaced by an alternative feature that is equivalent or serves a similar purpose; and all of the features disclosed, or all of the steps of any method or process disclosed, may be combined in any way, except for mutually exclusive features and/or steps.

Claims (5)

1. A multi-view image object detection method based on visual saliency, characterized in that it comprises the following steps:
Step 1: input the left, center and right view images of the same scene, in which the foreground target is not occluded, and compute the saliency map of each view image, obtaining the left, center and right saliency maps;
Step 2: project the left and right saliency maps pixel-wise into the center view image to obtain the left and right projected maps, recording the projection-hole regions during projection; eliminate the unclosed slits in the left and right projected maps to obtain the left and right projection saliency maps;
Step 3: fuse the left and right projection saliency maps with the center saliency map to obtain a fused saliency map, and eliminate in the fused saliency map the saliency of the recorded projection-hole regions;
Step 4: based on the recorded projection-hole regions, partition the fused saliency map, taking the regions between the projection holes and the image edges and the regions between the projection holes of different image objects as background regions, and eliminate the saliency of these background regions in the fused saliency map;
wherein partitioning the fused saliency map specifically comprises: treating the middle of each pair of projection holes, taken from left to right, as one target object, and applying morphological processing to the projection holes to thin them into lines, i.e. hole lines; connecting the left hole line of the leftmost target object with the left edge corners of the image to form a closed region, and connecting the right hole line of the rightmost target object with the right edge corners of the image to form a closed region; for two adjacent target objects, connecting the two ends of the right hole line of the left target object and of the left hole line of the right target object to form a closed region; and merging the constructed closed regions to obtain the background region;
Step 5: binarize the fused saliency map processed in step 4 and output the target detection result.
2. The method according to claim 1, characterized in that in step 2, eliminating the unclosed slits in the left and right initial projected maps comprises the following steps:
201: extract the unclosed slits in the left projected map S_Lp and the right projected map S_Rp, obtaining the mask images I_L-mask and I_R-mask, where I_L-mask corresponds to S_Lp and I_R-mask corresponds to S_Rp; and initialize the left and right repaired maps I_Ls = S_Lp, I_Rs = S_Rp;
202: apply a block-wise discrete cosine transform (DCT) to the maps I_Ls and I_Rs, set the high-frequency DCT coefficients in the frequency domain to zero, and apply the inverse DCT to obtain the images I_Lp and I_Rp, where I_Lp corresponds to I_Ls and I_Rp corresponds to I_Rs;
203: use the images I_Lp and I_Rp to fill the unclosed regions of the projected maps S_Lp and S_Rp, obtaining the new repaired maps I_Ls = I_Ls + I_Lp ∩ I_L-mask, I_Rs = I_Rs + I_Rp ∩ I_R-mask;
204: if unclosed slits still remain in the current repaired maps I_Ls and I_Rs, execute step 202; otherwise take the current repaired maps I_Ls and I_Rs as the left and right projection saliency maps.
3. The method according to claim 1 or 2, characterized in that in step 3, the fused saliency map is obtained by the weighted sum of the left and right projection saliency maps and the center saliency map, where the weighting coefficient of the left projection saliency map is w1 = r1/(r1+r2+r3), the weighting coefficient of the center saliency map is w2 = r2/(r1+r2+r3), and the weighting coefficient of the right projection saliency map is w3 = r3/(r1+r2+r3); the parameters r1, r2, r3 are the view correlation coefficients of the left projection saliency map, the center saliency map and the right projection saliency map respectively, with values ranging from 0 to 1.
4. The method according to claim 1 or 2, characterized in that in step 3, eliminating in the fused saliency map the saliency of the recorded projection-hole regions specifically comprises: setting the saliency values of the recorded projection-hole regions in the fused saliency map to zero.
5. The method according to claim 1 or 2, characterized in that in step 4, eliminating the saliency of the background regions in the fused saliency map specifically comprises: setting the saliency values of the background regions to zero.
CN201610712411.2A 2016-08-23 2016-08-23 A multi-view image object detection method based on visual saliency Expired - Fee Related CN106327493B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610712411.2A CN106327493B (en) 2016-08-23 2016-08-23 A multi-view image object detection method based on visual saliency

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610712411.2A CN106327493B (en) 2016-08-23 2016-08-23 A multi-view image object detection method based on visual saliency

Publications (2)

Publication Number Publication Date
CN106327493A CN106327493A (en) 2017-01-11
CN106327493B true CN106327493B (en) 2018-12-18

Family

ID=57742460

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610712411.2A Expired - Fee Related CN106327493B (en) 2016-08-23 2016-08-23 A multi-view image object detection method based on visual saliency

Country Status (1)

Country Link
CN (1) CN106327493B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110175956B * 2018-06-19 2021-05-14 Liaocheng Yulin Industrial Design Co., Ltd. (聊城市誉林工业设计有限公司) Swing-type solar electric heater


Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7680323B1 (en) * 2000-04-29 2010-03-16 Cognex Corporation Method and apparatus for three-dimensional object segmentation
WO2008039858A1 (en) * 2006-09-28 2008-04-03 Microsoft Corporation Salience preserving image fusion
CN103177247A (en) * 2013-04-09 2013-06-26 天津大学 Target detection method fused with multi-angle information
CN103984944A (en) * 2014-03-06 2014-08-13 北京播点文化传媒有限公司 Method and device for extracting and continuously playing target object images in a set of images
CN103985254A (en) * 2014-05-29 2014-08-13 四川川大智胜软件股份有限公司 Multi-view video fusion and traffic parameter collecting method for large-scale scene traffic monitoring
CN104200483A (en) * 2014-06-16 2014-12-10 南京邮电大学 Human body central line based target detection method under multi-camera environment
CN104331901A (en) * 2014-11-26 2015-02-04 北京邮电大学 TLD-based multi-view target tracking device and method
CN104966286A (en) * 2015-06-04 2015-10-07 电子科技大学 3D video saliency detection method

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Multi camera visual saliency using image stitching; Christopher Wing Hong Ngau et al.; 2011 International Conference on Telecommunication Technology and Applications; 2011-01-01; pp. 93-98 *
Multi-view texture video coding algorithm based on visual saliency (基于视觉显著性的多视点纹理视频编码算法); Luo Xiaolin et al.; Computer Science (计算机科学); 2016-06-15; vol. 43, no. 6A; pp. 171-174, 183 *
Research on super-resolution techniques for multi-view mixed-resolution images (多视角混合分辨率图像的超分辨技术研究); Ding Lan; China Masters' Theses Full-text Database, Information Science and Technology (中国优秀硕士学位论文全文数据库 信息科技辑); 2013-12-15; Section 3.3 (pp. 23-24), Section 5.3 (pp. 52-53) *

Also Published As

Publication number Publication date
CN106327493A (en) 2017-01-11

Similar Documents

Publication Publication Date Title
Gehrig et al. Asynchronous, photometric feature tracking using events and frames
Yang et al. Color-guided depth recovery from RGB-D data using an adaptive autoregressive model
Barranco et al. Contour motion estimation for asynchronous event-driven cameras
CN108399610A A depth image enhancement method fusing RGB image information
RU2382406C1 (en) Method of improving disparity map and device for realising said method
CN103514441B (en) Facial feature point locating tracking method based on mobile platform
CN104867135B A high-precision stereo matching method guided by a guide image
Correal et al. Automatic expert system for 3D terrain reconstruction based on stereo vision and histogram matching
CN103914699A (en) Automatic lip gloss image enhancement method based on color space
CN112801074B (en) Depth map estimation method based on traffic camera
Xiao et al. Multi-focus image fusion based on depth extraction with inhomogeneous diffusion equation
Jaimez et al. Motion cooperation: Smooth piece-wise rigid scene flow from rgb-d images
CN104424640A (en) Method and device for carrying out blurring processing on images
KR20110014067A (en) Method and system for transformation of stereo content
CN109752855A A light spot emitter and a method for detecting geometric light spots
RU2419880C2 (en) Method and apparatus for calculating and filtering disparity map based on stereo images
KR20110133416A (en) Video processing method for 3d display based on multi-thread scheme
CN108230402A A stereo calibration method based on a triangular cone model
CN105335959B Quick focusing method for an imaging device and apparatus thereof
CN106327493B A multi-view image object detection method based on visual saliency
Concha et al. An evaluation of robust cost functions for RGB direct mapping
Camplani et al. Accurate depth-color scene modeling for 3D contents generation with low cost depth cameras
Xiang et al. Scene flow estimation based on 3D local rigidity assumption and depth map driven anisotropic smoothness
WO2020115866A1 (en) Depth processing system, depth processing program, and depth processing method
Jiao Optimization of Color Enhancement Processing for Plane Images Based on Computer Vision

Legal Events

Date Code Title Description
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20181218
Termination date: 20210823