CN106469306A - Real-time multi-person image extraction and synthesis method based on infrared structured light - Google Patents

Real-time multi-person image extraction and synthesis method based on infrared structured light

Info

Publication number
CN106469306A
CN106469306A (application CN201610856895.8A)
Authority
CN
China
Prior art keywords
image
target
structured light
infrared
infrared structured light
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201610856895.8A
Other languages
Chinese (zh)
Other versions
CN106469306B (en)
Inventor
罗文峰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Youxiang Computing Technology Co Ltd
Original Assignee
Shenzhen Youxiang Computing Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Youxiang Computing Technology Co Ltd filed Critical Shenzhen Youxiang Computing Technology Co Ltd
Priority to CN201610856895.8A priority Critical patent/CN106469306B/en
Publication of CN106469306A publication Critical patent/CN106469306A/en
Application granted granted Critical
Publication of CN106469306B publication Critical patent/CN106469306B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/40Scenes; Scene-specific elements in video content
    • G06V20/46Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/70Denoising; Smoothing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/44Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10048Infrared image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20212Image combination
    • G06T2207/20221Image fusion; Image merging
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30196Human being; Person
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00Indexing scheme relating to image or video recognition or understanding
    • G06V2201/07Target detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a real-time multi-person image extraction and synthesis method based on infrared structured light. Depth information is added to the target image by infrared structured light, the boundary of the target is detected using this depth information, every video-conference participant can be extracted from his or her own video image in real time, and the extracted images are then composited into a single picture, increasing the information available to participants and improving the user experience. The apparatus used by the invention is simple and inexpensive, the algorithmic complexity is low, and video images can be processed in real time, so the method has good practical value.

Description

Real-time multi-person image extraction and synthesis method based on infrared structured light
Technical field
The invention belongs to the technical fields of image processing, computer vision and optical engineering, and in particular relates to a real-time multi-person image extraction and synthesis method based on infrared structured light.
Background technology
Video conferencing uses computer technology and communication equipment to establish visual multimedia communication between two or more sites over transmission channels, realising a form of meeting in which images, voice and data are exchanged. When a video conference is held, delegates in different locations can receive the sound and the scene of the other venues, as if all delegates were attending the meeting in the same room, which can significantly improve work efficiency.
At present, research on and application of video conferencing has preliminarily enabled people in different regions to conduct face-to-face discussion and collaborative work, but the following shortcomings remain:
(1) Information acquisition is incomplete: during a video conference, a participant cannot obtain the audiovisual information of all participants at the same moment;
(2) The sense of space and realism of the meeting is weak: participants at different venues have no spatial perception of the whole meeting, so a realistic conversational environment cannot be constructed.
Summary of the invention
In view of the shortcomings of the prior art, the present invention proposes a real-time multi-person image extraction and synthesis method based on infrared structured light. The invention extracts every video-conference participant from his or her own video image and then composites the extracted images into a single picture, thereby increasing the information available to participants and improving the user experience. The apparatus used is simple and inexpensive, the algorithmic complexity is low, and video images can be processed in real time, so the method has good practical value.
In order to achieve the above technical purpose, the technical solution adopted by the present invention is:
A real-time multi-person image extraction and synthesis method based on infrared structured light, comprising the following steps:
S1. Acquire an infrared structured-light image and an RGB image of the target (one of the participants)
Using an infrared laser emitter and an image acquisition device fitted with an infrared filter, acquire the infrared structured-light image and the RGB image of the target as follows:
when the infrared laser emitter is on, it projects a number of vertical straight stripes onto the surface of the target, and the image acquisition device captures the infrared structured-light image of the target; when the infrared laser emitter is off, the image acquisition device captures the RGB image of the target;
during image acquisition, the infrared laser emitter is switched on and off at a fixed frequency, so that the image acquisition device alternately obtains the infrared structured-light image and the RGB image of the target; denote the infrared structured-light image obtained in one adjacent on/off cycle of the emitter as F(x, y) and the RGB image as P(x, y, z), both of size M × N;
S2. Pre-process the infrared structured-light image F(x, y) to obtain the grey-level image F2(x, y);
S3. Binarise the image F2(x, y); denote the binary image as F3(x, y);
S4. Perform edge detection on F3(x, y), then fuse the four edge-detection result images to obtain the edge-information image, denoted FD(x, y);
S5. Process FD(x, y) to remove isolated points; denote the resulting image as FD4(x, y);
S6. The nonzero points of the binary image FD4(x, y) form the outer boundary of the target, but some parts of the boundary are broken; at each break, take the two points before and the two points after the break (4 points in total), average them, and interpolate the result at the break, thereby obtaining a closed outer-boundary image FD5(x, y);
S7. Segment the target image P(x, y, z) according to the outer-boundary image FD5(x, y) and extract the image of the target part;
S8. Using the method of S1 to S7, perform image extraction for the participants at the different locations, then place the extracted images onto a pre-set large canvas, thereby obtaining a composite image containing all participants.
In S2 of the present invention, the infrared structured-light image F(x, y) is pre-processed as follows:
First, traverse all pixels of F(x, y). For each pixel, take a 3 × 3 window, sort all pixels in the window by pixel value, find the maximum and the minimum, treat them as noise and discard them, then average the remaining 7 points and assign the mean to the current pixel, i.e. the centre pixel of the window. Denote the processed infrared structured-light image as F1(x, y);
Then, filter F1(x, y) with the Laplace operator to obtain a new image, denoted F2(x, y), according to the formula
F2(x, y) = F1(x, y) ⊗ [0 1 0; 1 −4 1; 0 1 0]
where ⊗ denotes the convolution operation.
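The S2 pre-processing (trimmed-mean denoising followed by Laplacian filtering) can be sketched as follows. This is an illustrative NumPy sketch, not the patent's implementation: the function names are ours, the image borders are left unfiltered for simplicity, and plain loops are used for clarity rather than speed.

```python
import numpy as np

def trimmed_mean_filter(img):
    """3 x 3 trimmed-mean denoising: in each window, sort the 9 values,
    discard the maximum and minimum (treated as noise), and assign the
    mean of the remaining 7 to the centre pixel."""
    h, w = img.shape
    out = img.astype(np.float64).copy()
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            window = np.sort(img[y-1:y+2, x-1:x+2].ravel())
            out[y, x] = window[1:-1].mean()  # drop min and max
    return out

def laplacian_sharpen(img):
    """Convolution with the 3 x 3 Laplacian kernel [[0,1,0],[1,-4,1],[0,1,0]]
    (the kernel is symmetric, so correlation and convolution coincide)."""
    k = np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]], dtype=np.float64)
    h, w = img.shape
    out = np.zeros((h, w), dtype=np.float64)
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y, x] = np.sum(img[y-1:y+2, x-1:x+2] * k)
    return out
```

For example, a single impulse of value 100 in a flat region of value 10 is removed completely by the trimmed-mean step, since the outlier is always one of the discarded extremes.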
In S3 of the present invention, the classical maximum between-class variance method is used to binarise the image F2(x, y), retaining only the bright-stripe information of F2(x, y); the binary image is denoted F3(x, y). (The maximum between-class variance method was proposed by the Japanese scholar Nobuyuki Otsu in 1979; it is an adaptive threshold-selection method, also called Otsu's method, OTSU for short. This patent follows that method exactly; publicly available source code of the method is downloaded and called directly.)
In the present invention, S4 is implemented as follows:
S41. According to the characteristics of the infrared vertical-stripe structured-light image, four new 3 × 3 templates are designed:
D1 = [−1 1 0; 0 0 0; 0 1 −1], D2 = [0 −1 1; 0 0 0; 1 −1 0], D3 = [1 −1 1; 0 0 0; −1 1 −1], D4 = [−1 1 −1; 0 0 0; 1 −1 1];
S42. Convolve the binary image F3(x, y) with each of the four new templates and binarise the results by a threshold, obtaining four binary images {FDi(x, y) | i = 1, 2, 3, 4}, according to the formula
FDi(x, y) = [F3(x, y) ⊗ Di] > TH
where ⊗ denotes convolution, {Di | i = 1, 2, 3, 4} are the four new templates above, and TH is the binarisation threshold, usually taken as 5.
S43. Fuse the four binary images {FDi(x, y) | i = 1, 2, 3, 4} by a per-pixel OR operation, obtaining a more complete edge-information image, denoted FD(x, y):
FD (x, y)=FD1(x,y)|FD2(x,y)|FD3(x,y)|FD4(x,y)
where | is the bitwise OR operator; the fusion result at a position is 0 only when all four binary images are 0 at that position, and 1 in every other case.
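The S41–S43 edge detection can be sketched as follows, assuming the binary stripe image uses values 0/255 (with 0/1 values the template responses could never exceed the suggested TH = 5, so the 0/255 convention appears to be intended). The function names and the nested-loop sliding window are ours; the window is applied as correlation, and since the template set is closed under 180° rotation the OR-fused result matches true convolution.

```python
import numpy as np

# The four 3 x 3 directional templates of S41, transcribed from the claims.
TEMPLATES = [
    np.array([[-1, 1, 0], [0, 0, 0], [0, 1, -1]]),
    np.array([[0, -1, 1], [0, 0, 0], [1, -1, 0]]),
    np.array([[1, -1, 1], [0, 0, 0], [-1, 1, -1]]),
    np.array([[-1, 1, -1], [0, 0, 0], [1, -1, 1]]),
]

def template_response(img, k):
    """Sliding 3 x 3 window response, zero at the image borders."""
    h, w = img.shape
    out = np.zeros((h, w), dtype=np.int64)
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y, x] = np.sum(img[y-1:y+2, x-1:x+2] * k)
    return out

def detect_edges(f3, th=5):
    """S42/S43: threshold each template response at TH, then fuse the
    four binary results with a per-pixel OR."""
    fused = np.zeros(f3.shape, dtype=bool)
    for d in TEMPLATES:
        fused |= template_response(f3, d) > th
    return fused.astype(np.uint8)
```

On a synthetic vertical stripe, the fused map responds at the stripe borders and stays zero over uniform background.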
In S5 of the present invention, in order to prevent interference from noise, some isolated points need to be deleted. The specific steps are:
S51. For the binary image FD(x, y), traverse every nonzero pixel in the image and compute the sum of all pixel values in the 5 × 5 neighbourhood centred on it; if the sum is less than 2, the nonzero pixel is considered noise and its value is set to 0. Denote the processed image as FD2(x, y).
S52. To obtain the outer-boundary information of the target, scan FD2(x, y) column by column; in each column keep only the two nonzero pixels with the largest and the smallest row numbers and reset all other nonzero pixel values to zero. Denote the resulting image as FD3(x, y).
S53. For each column of FD3(x, y), compute the distance between its two nonzero values (i.e. the nonzero values with the smallest and largest row numbers) and store the results in an array `array`, a one-dimensional array whose length is the number of image columns N. Remove the mutation points in `array` as follows: traverse each element of the array, and if the difference between the current point and both of its neighbours exceeds 10, regard the point as a mutation point. Reset to 0 the two nonzero values in the columns of FD3(x, y) corresponding to all mutation points. Denote the resulting image as FD4(x, y).
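The S51–S53 cleanup can be sketched as follows, assuming a 0/1 binary edge image (so the 5 × 5 pixel-value sum equals the count of nonzero pixels, the centre included). The function names and the clipping at the image borders are our own choices.

```python
import numpy as np

def remove_isolated(fd):
    """S51: a nonzero pixel whose 5 x 5 neighbourhood contains fewer
    than 2 nonzero pixels (itself included) is noise; clear it."""
    out = fd.copy()
    for y, x in zip(*np.nonzero(fd)):
        patch = fd[max(0, y - 2):y + 3, max(0, x - 2):x + 3]
        if np.count_nonzero(patch) < 2:
            out[y, x] = 0
    return out

def keep_outer_rows(fd2):
    """S52: per column, keep only the topmost and bottommost nonzero
    pixels (the outer-boundary candidates)."""
    out = np.zeros_like(fd2)
    for x in range(fd2.shape[1]):
        rows = np.nonzero(fd2[:, x])[0]
        if rows.size:
            out[rows[0], x] = fd2[rows[0], x]
            out[rows[-1], x] = fd2[rows[-1], x]
    return out

def reject_mutations(fd3, jump=10):
    """S53: per-column top-to-bottom distance; clear any column whose
    distance differs from both neighbours by more than `jump`."""
    h, w = fd3.shape
    dist = np.full(w, -1)
    for x in range(w):
        rows = np.nonzero(fd3[:, x])[0]
        if rows.size:
            dist[x] = rows[-1] - rows[0]
    out = fd3.copy()
    for x in range(1, w - 1):
        if (abs(dist[x] - dist[x - 1]) > jump and
                abs(dist[x] - dist[x + 1]) > jump):
            out[:, x] = 0
    return out
```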
In S7 of the present invention, the target image P(x, y, z) is segmented according to the outer-boundary image FD5(x, y) obtained in S6, as follows. Process the image column by column; each column of the outer-boundary image FD5(x, y) contains exactly two nonzero values. For any column ym of FD5(x, y), let xn1 and xn2 be the row numbers of its two nonzero values; then in column ym of the target image P(x, y, z), keep only the pixels whose row numbers lie between xn1 and xn2. After all columns of FD5(x, y) have been processed in the same way, the image of the target part has been extracted.
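The column-wise segmentation of S7 can be sketched as follows; `p` is the RGB image and `fd5` the closed outer-boundary image. In this sketch, pixels outside the boundary are set to zero (the patent simply discards them); the function name is ours.

```python
import numpy as np

def segment_target(p, fd5):
    """S7: per column, keep only the RGB rows lying between the two
    boundary rows recorded in the closed outer-boundary image FD5."""
    out = np.zeros_like(p)
    for x in range(fd5.shape[1]):
        rows = np.nonzero(fd5[:, x])[0]
        if rows.size >= 2:
            out[rows[0]:rows[-1] + 1, x] = p[rows[0]:rows[-1] + 1, x]
    return out
```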
The present invention adds depth information to the target image via infrared structured light and uses this depth information to detect the boundary of the target, so that every video-conference participant can be extracted from his or her own video image in real time and then composited into a single picture, increasing the information available to participants and improving the user experience. The apparatus used is simple and inexpensive, the algorithmic complexity is low, and video images can be processed in real time, so the method has good practical value.
Brief description of the drawings
Fig. 1 is a schematic diagram of acquiring the infrared structured-light image and the RGB image of the target using an infrared laser emitter and an image acquisition device fitted with an infrared filter;
Fig. 2 is the flow chart of the present invention;
Fig. 3 is the schematic diagram of image synthesis.
Specific embodiment
The invention is further described below with reference to the accompanying drawings and a specific embodiment.
Structured light refers to projected light with a known structure, i.e. stripes that carry certain information. After these stripes are projected onto the measured object, if the surface of the object is not flat, the stripes photographed by the camera are no longer straight lines but are deflected and deformed. These deformations in fact encode the three-dimensional surface information of the object, so three-dimensional measurement based on structured light is widely used in fields such as three-dimensional model reconstruction and object surface-profile measurement.
As shown in Fig. 1, the present invention first acquires the infrared structured-light image and the RGB image of the target using an infrared laser emitter and an image acquisition device fitted with an infrared filter. When the infrared laser emitter is on, it projects a number of vertical straight stripes onto the surface of the target, and the image acquisition device obtains the infrared structured-light image of the target; when the infrared laser emitter is off, structured light is no longer projected onto the target, and the image acquisition device obtains the RGB image of the target. During image acquisition, the infrared laser emitter is switched on and off at a fixed period (typically 50 ms to 100 ms), so that the image acquisition device alternately obtains the infrared structured-light image and the RGB image of the target. Denote the infrared structured-light image obtained in one adjacent on/off cycle of the emitter as F(x, y) and the RGB image as P(x, y, z); both images have size M × N.
The present invention then obtains the boundary information of the target by processing F(x, y) and extracts the target from P(x, y, z). The detailed process is as follows:
(1) The infrared structured-light image F(x, y) contains a great deal of noise, including impulse noise, Gaussian noise and salt-and-pepper noise. If the noisy image were segmented directly, many image details would be lost or false edges would appear. The image is therefore pre-processed first, denoising it by filtering.
First, the infrared structured-light image F(x, y) is filtered as follows:
Traverse all pixels of F(x, y). For each pixel, take a 3 × 3 window, sort all pixels in the window by value, find the maximum and the minimum, treat them as noise and discard them, then average the remaining 7 points and assign the mean to the current pixel (i.e. the centre pixel of the window). Denote the processed infrared structured-light image as F1(x, y).
The above processing effectively reduces the random noise of the image, but it also has the side effect of blurring some edges. To address this, a sharpening spatial filter can then be applied to the filtered image to highlight or enhance the details. The present invention filters F1(x, y) with the Laplace operator to obtain a new image, denoted F2(x, y), according to the formula
F2(x, y) = F1(x, y) ⊗ [0 1 0; 1 −4 1; 0 1 0]
where ⊗ denotes the convolution operation.
(2) The filtered grey-level image F2(x, y) contains both dark and bright stripes, but when analysing the stripe image only the bright stripes are of interest; the interior of the dark stripes, the transition regions between light and dark stripes, and the background are not. The target stripes, i.e. the bright stripes, therefore need to be extracted. Here the present invention binarises the image F2(x, y) with the classical maximum between-class variance method (Otsu), retaining only the bright-stripe information; the binary image is denoted F3(x, y). (The maximum between-class variance method was proposed by the Japanese scholar Nobuyuki Otsu in 1979; it is an adaptive threshold-selection method, also called Otsu's method, OTSU for short. This patent follows that method exactly; publicly available source code of the method is downloaded and called directly.)
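The patent calls an existing implementation of Otsu's method; a self-contained sketch of the underlying between-class-variance maximisation is given below for reference. Variable names are ours, and 8-bit grey levels (0..255) are assumed.

```python
import numpy as np

def otsu_threshold(img):
    """Otsu (1979): choose the grey level that maximises the between-class
    variance of the two classes it induces. Assumes values in 0..255."""
    hist = np.bincount(img.ravel().astype(np.int64), minlength=256)[:256]
    hist = hist.astype(np.float64)
    total = hist.sum()
    cum = np.cumsum(hist)                        # class-0 pixel counts
    cum_mean = np.cumsum(hist * np.arange(256))  # class-0 intensity sums
    best_t, best_var = 0, -1.0
    for t in range(256):
        n0, n1 = cum[t], total - cum[t]
        if n0 == 0 or n1 == 0:
            continue
        m0 = cum_mean[t] / n0
        m1 = (cum_mean[-1] - cum_mean[t]) / n1
        var = (n0 / total) * (n1 / total) * (m0 - m1) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t

def binarize(img):
    """S3: keep only the bright stripes: F3 = (F2 > Otsu threshold)."""
    return (img > otsu_threshold(img)).astype(np.uint8)
```

On a cleanly bimodal image, the returned threshold separates the two modes, so only the bright class survives binarisation.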
(3) Edge detection is performed on F3(x, y).
According to the characteristics of the infrared vertical-stripe structured-light image, the present invention designs four new 3 × 3 templates:
D1 = [−1 1 0; 0 0 0; 0 1 −1], D2 = [0 −1 1; 0 0 0; 1 −1 0], D3 = [1 −1 1; 0 0 0; −1 1 −1], D4 = [−1 1 −1; 0 0 0; 1 −1 1].
The binary image F3(x, y) is convolved with each of the four new templates and the results are binarised by a threshold, obtaining four binary images {FDi(x, y) | i = 1, 2, 3, 4}, according to the formula
FDi(x, y) = [F3(x, y) ⊗ Di] > TH
where ⊗ denotes convolution, {Di | i = 1, 2, 3, 4} are the four new templates, and TH is the binarisation threshold, usually taken as 5.
The four binary images {FDi(x, y) | i = 1, 2, 3, 4} are then fused by a per-pixel OR operation, obtaining a more complete edge-information image, denoted FD(x, y):
FD (x, y)=FD1(x,y)|FD2(x,y)|FD3(x,y)|FD4(x,y)
where | is the bitwise OR operator; the fusion result at a position is 0 only when all four binary images are 0 at that position, and 1 in every other case.
(4) In order to prevent interference from noise, some isolated points need to be deleted. The specific steps are:
For the binary image FD(x, y), traverse every nonzero pixel and compute the sum of all pixel values in the 5 × 5 neighbourhood centred on it; if the sum is less than 2, the nonzero pixel is considered noise and its value is set to 0. Denote the processed image as FD2(x, y).
To obtain the outer-boundary information of the target, scan FD2(x, y) column by column; in each column keep only the pixel values of the two nonzero pixels with the largest and the smallest row numbers and reset all other nonzero values to zero. Denote the resulting image as FD3(x, y).
Because the outer-boundary information of some columns is missing, the pixels finally retained in those columns may actually lie inside the target, so further screening is needed. For each column, compute the distance between its two nonzero values (i.e. the nonzero values with the smallest and largest row numbers) and store the results in an array `array`, a one-dimensional array whose length is the number of image columns N. Remove the mutation points in `array` as follows: traverse each element of the array, and if the difference between the current point and both of its neighbours exceeds 10, regard the point as a mutation point. Reset to 0 the two nonzero values in the columns of FD3(x, y) corresponding to all mutation points. Denote the resulting image as FD4(x, y).
(5) The nonzero points of the binary image FD4(x, y) now form the outer boundary of the target, but some parts of the boundary are broken. At each break, take the two points before and the two points after the break (4 points in total), average them, and interpolate the result at the break; in this way a closed outer-boundary image FD5(x, y) is obtained.
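One way to realise this break-filling step, assuming the boundary is represented as per-column top and bottom row curves (as the column scan of the previous step produces): a single-column break is filled with the rounded average of the two boundary rows before it and the two after it. The curve representation, the handling of multi-column breaks (left unfilled here), and the rounding are our own choices, not spelled out in the patent.

```python
import numpy as np

def close_boundary(fd4):
    """S6 sketch: fill single-column breaks in the top and bottom
    boundary curves by averaging the 2 columns before and 2 after."""
    h, w = fd4.shape
    top = np.full(w, -1)
    bot = np.full(w, -1)
    for x in range(w):
        rows = np.nonzero(fd4[:, x])[0]
        if rows.size:
            top[x], bot[x] = rows[0], rows[-1]
    for curve in (top, bot):
        for x in range(2, w - 2):
            if curve[x] < 0:
                neigh = [curve[x-2], curve[x-1], curve[x+1], curve[x+2]]
                if all(v >= 0 for v in neigh):  # 4 valid neighbours
                    curve[x] = int(round(sum(neigh) / 4.0))
    out = np.zeros_like(fd4)
    for x in range(w):
        if top[x] >= 0:
            out[top[x], x] = 1
        if bot[x] >= 0:
            out[bot[x], x] = 1
    return out
```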
(6) Segment the target image P(x, y, z) according to the outer-boundary image FD5(x, y) and extract the image of the target part.
According to the outer-boundary image FD5(x, y) obtained in step (5), the target image P(x, y, z) is segmented. Process the image column by column; each column of the outer-boundary image FD5(x, y) contains exactly two nonzero values. For any column ym of FD5(x, y), let xn1 and xn2 be the row numbers of its two nonzero values; then in column ym of the target image P(x, y, z), keep only the pixels whose row numbers lie between xn1 and xn2. After all columns of FD5(x, y) have been processed in the same way, the image of the target part has been extracted.
(7) Following steps (1) to (6) above, image extraction is performed for the participants at all the different locations, and the extracted images are placed onto a pre-set large canvas. As shown in Fig. 3, the position of each target on the composite image is specified in advance, so after a target has been extracted its image is pasted directly to the corresponding position in the composite image, thereby obtaining a composite image of all participants.
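The final compositing step can be sketched as pasting each extracted image at its pre-assigned position on the large canvas. In this sketch, background pixels (all-zero after segmentation) are skipped so that only the extracted target is copied; the function name and argument layout are illustrative.

```python
import numpy as np

def composite(extracts, positions, canvas_shape):
    """S8 sketch: paste each extracted participant image at its
    pre-assigned (top, left) position on the composite canvas,
    copying only non-background (nonzero) pixels."""
    canvas = np.zeros(canvas_shape, dtype=np.uint8)
    for img, (top, left) in zip(extracts, positions):
        h, w = img.shape[:2]
        mask = img.any(axis=2)               # non-background pixels
        region = canvas[top:top + h, left:left + w]
        region[mask] = img[mask]             # writes through the view
    return canvas
```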

Claims (7)

1. A real-time multi-person image extraction and synthesis method based on infrared structured light, characterised by comprising the following steps:
S1. Acquire an infrared structured-light image and an RGB image of the target
Using an infrared laser emitter and an image acquisition device fitted with an infrared filter, acquire the infrared structured-light image and the RGB image of the target as follows:
when the infrared laser emitter is on, it projects a number of vertical straight stripes onto the surface of the target, and the image acquisition device captures the infrared structured-light image of the target; when the infrared laser emitter is off, the image acquisition device captures the RGB image of the target;
during image acquisition, the infrared laser emitter is switched on and off at a fixed frequency, so that the image acquisition device alternately obtains the infrared structured-light image and the RGB image of the target; denote the infrared structured-light image obtained in one adjacent on/off cycle of the emitter as F(x, y) and the RGB image as P(x, y, z), both of size M × N;
S2. Pre-process the infrared structured-light image F(x, y) to obtain the grey-level image F2(x, y);
S3. Binarise the image F2(x, y); denote the binary image as F3(x, y);
S4. Perform edge detection on F3(x, y), then fuse the four edge-detection result images to obtain the edge-information image, denoted FD(x, y);
S5. Process FD(x, y) to remove isolated points; denote the resulting image as FD4(x, y);
S6. The nonzero points of the binary image FD4(x, y) form the outer boundary of the target, but some parts of the boundary are broken; at each break, take the two points before and the two points after the break (4 points in total), average them, and interpolate the result at the break, thereby obtaining a closed outer-boundary image FD5(x, y);
S7. Segment the target image P(x, y, z) according to the outer-boundary image FD5(x, y) and extract the image of the target part;
S8. Using the method of S1 to S7, perform image extraction for the participants at the different locations, then place the extracted images onto a pre-set large canvas, thereby obtaining a composite image containing all participants.
2. The real-time multi-person image extraction and synthesis method based on infrared structured light according to claim 1, characterised in that the method of S2 is as follows:
first, traverse all pixels of F(x, y); for each pixel, take a 3 × 3 window, sort all pixels in the window by pixel value, find the maximum and the minimum, treat them as noise and discard them, then average the remaining 7 points and assign the mean to the current pixel, i.e. the centre pixel of the window; denote the processed infrared structured-light image as F1(x, y);
then filter F1(x, y) with the Laplace operator to obtain a new image, denoted F2(x, y), according to the formula
F2(x, y) = F1(x, y) ⊗ [0 1 0; 1 −4 1; 0 1 0]
where ⊗ denotes the convolution operation.
3. The real-time multi-person image extraction and synthesis method based on infrared structured light according to claim 1 or 2, characterised in that in S3 the classical maximum between-class variance method is used to binarise the image F2(x, y), retaining only the bright-stripe information of F2(x, y); the binary image is denoted F3(x, y).
4. The real-time multi-person image extraction and synthesis method based on infrared structured light according to claim 3, characterised in that S4 is implemented as follows:
S41. According to the characteristics of the infrared vertical-stripe structured-light image, four new 3 × 3 templates are designed:
D1 = [−1 1 0; 0 0 0; 0 1 −1], D2 = [0 −1 1; 0 0 0; 1 −1 0], D3 = [1 −1 1; 0 0 0; −1 1 −1], D4 = [−1 1 −1; 0 0 0; 1 −1 1];
S42. Convolve the binary image F3(x, y) with each of the four new templates and binarise the results by a threshold, obtaining four binary images {FDi(x, y) | i = 1, 2, 3, 4}, according to the formula
FDi(x, y) = [F3(x, y) ⊗ Di] > TH
where ⊗ denotes convolution, {Di | i = 1, 2, 3, 4} are the four new templates above, and TH is the binarisation threshold;
S43. Fuse the four binary images {FDi(x, y) | i = 1, 2, 3, 4} by a per-pixel OR operation, obtaining a more complete edge-information image, denoted FD(x, y):
FD (x, y)=FD1(x,y)|FD2(x,y)|FD3(x,y)|FD4(x,y)
where | is the bitwise OR operator.
5. The real-time multi-person image extraction and synthesis method based on infrared structured light according to claim 4, characterised in that in S42 the value of TH is 5.
6. The real-time multi-person image extraction and synthesis method based on infrared structured light according to claim 4 or 5, characterised in that the method of S5 is:
S51. For the binary image FD(x, y), traverse every nonzero pixel in the image and compute the sum of all pixel values in the 5 × 5 neighbourhood centred on it; if the sum is less than 2, the nonzero pixel is considered noise and its value is set to 0; denote the processed image as FD2(x, y);
S52. To obtain the outer-boundary information of the target, scan FD2(x, y) column by column; in each column keep only the two nonzero pixels with the largest and the smallest row numbers and reset all other nonzero pixel values to zero; denote the resulting image as FD3(x, y);
S53. For each column of FD3(x, y), compute the distance between its two nonzero values and store the results in an array `array`, a one-dimensional array whose length is the number of image columns N; remove the mutation points in `array` as follows: traverse each element of the array, and if the difference between the current point and both of its neighbours exceeds 10, regard the point as a mutation point; reset to 0 the two nonzero values in the columns of FD3(x, y) corresponding to all mutation points; denote the resulting image as FD4(x, y).
7. The real-time multi-person image extraction and synthesis method based on infrared structured light according to claim 6, characterised in that the method of S7 is: according to the outer-boundary image FD5(x, y) obtained in S6, segment the target image P(x, y, z) as follows: process the image column by column; each column of the outer-boundary image FD5(x, y) contains exactly two nonzero values; for any column ym of FD5(x, y), let xn1 and xn2 be the row numbers of its two nonzero values; then in column ym of the target image P(x, y, z), keep only the pixels whose row numbers lie between xn1 and xn2; after all columns of FD5(x, y) have been processed in the same way, the image of the target part has been extracted.
CN201610856895.8A 2016-09-28 2016-09-28 Real-time multi-person image extraction and synthesis method based on infrared structured light Active CN106469306B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610856895.8A CN106469306B (en) 2016-09-28 2016-09-28 Real-time multi-person image extraction and synthesis method based on infrared structured light

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610856895.8A CN106469306B (en) 2016-09-28 2016-09-28 Real-time multi-person image extraction and synthesis method based on infrared structured light

Publications (2)

Publication Number Publication Date
CN106469306A true CN106469306A (en) 2017-03-01
CN106469306B CN106469306B (en) 2019-07-09

Family

ID=58230710

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610856895.8A Active CN106469306B (en) 2016-09-28 2016-09-28 Real-time multi-person image extraction and synthesis method based on infrared structured light

Country Status (1)

Country Link
CN (1) CN106469306B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1877599A (en) * 2006-06-29 2006-12-13 南京大学 Face setting method based on structured light
CN103795961A (en) * 2012-10-30 2014-05-14 三亚中兴软件有限责任公司 Video conference telepresence system and image processing method thereof

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Yu Baifeng et al.: "Implementation of Multi-Picture Video Image Segmentation and Stitching", Journal of Yangtze University (Natural Science Edition) *
Zhou Azhen: "A Brief Analysis of Multi-Picture Synthesis Methods in Multipoint Video Conferencing", Electronics World *
An Ping et al.: "Intermediate View Synthesis Based on Image Stitching in Video Conferencing Systems", Journal of Shanghai University (Natural Science Edition) *
Wang Chenwu et al.: "A Method for Multi-Picture Synthesis in the Pixel Domain", Journal of Xi'an Institute of Posts and Telecommunications *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109215046A (en) * 2018-08-06 2019-01-15 浙江工贸职业技术学院 A kind of Laplace operator edge detection method based on image interpolation arithmetic
CN112204605A (en) * 2019-08-29 2021-01-08 深圳市大疆创新科技有限公司 Extreme point extraction method, extreme point extraction device, and computer-readable storage medium
WO2021035621A1 (en) * 2019-08-29 2021-03-04 深圳市大疆创新科技有限公司 Extreme point extraction method and apparatus, and computer-readable storage medium
CN111524088A (en) * 2020-05-06 2020-08-11 北京未动科技有限公司 Method, device and equipment for image acquisition and computer-readable storage medium

Also Published As

Publication number Publication date
CN106469306B (en) 2019-07-09

Similar Documents

Publication Publication Date Title
CN113643289B (en) Fabric surface defect detection method and system based on image processing
CN104050471B (en) Natural scene character detection method and system
CN101443791B Improved foreground/background separation in digital images
CN104361314B (en) Based on infrared and transformer localization method and device of visual image fusion
CN103824070B (en) A kind of rapid pedestrian detection method based on computer vision
WO2018023916A1 (en) Shadow removing method for color image and application
CN107133969B (en) A kind of mobile platform moving target detecting method based on background back projection
CN105160310A (en) 3D (three-dimensional) convolutional neural network based human body behavior recognition method
CN105894484A (en) HDR reconstructing algorithm based on histogram normalization and superpixel segmentation
CN110008832A (en) Based on deep learning character image automatic division method, information data processing terminal
CN103955949B (en) Moving target detecting method based on Mean-shift algorithm
CN104077577A (en) Trademark detection method based on convolutional neural network
CN102184534B (en) Method for image fusion by using multi-scale top-hat selective transform
CN101527043B (en) Video picture segmentation method based on moving target outline information
CN106296744A (en) A kind of combining adaptive model and the moving target detecting method of many shading attributes
CN106709438A (en) Method for collecting statistics of number of people based on video conference
CN106469306A (en) Many people image extract real-time based on infrared structure light and synthetic method
CN105913407A (en) Method for performing fusion optimization on multi-focusing-degree image base on difference image
CN103514608A (en) Movement target detection and extraction method based on movement attention fusion model
CN106529441A (en) Fuzzy boundary fragmentation-based depth motion map human body action recognition method
CN111080574A (en) Fabric defect detection method based on information entropy and visual attention mechanism
CN111222432A (en) Face living body detection method, system, equipment and readable storage medium
CN114005081A (en) Intelligent detection device and method for foreign matters in tobacco shreds
CN105046670A (en) Image rain removal method and system
Zhu et al. Towards automatic wild animal detection in low quality camera-trap images using two-channeled perceiving residual pyramid networks

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant