CN107341815A - Violent-motion detection method based on multi-view stereo vision scene flow - Google Patents
Violent-motion detection method based on multi-view stereo vision scene flow
- Publication number
- CN107341815A (application CN201710404056.7A)
- Authority
- CN
- China
- Prior art keywords
- scene flows
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06T7/215—Motion-based segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06T7/246—Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
- G06T7/251—Analysis of motion using feature-based methods, e.g. the tracking of corners or segments involving models
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06T7/292—Multi-camera tracking
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Image Analysis (AREA)
Abstract
The present invention provides a violent-motion detection method based on multi-view stereo vision scene flow. Step 1: acquire multiple image sequences with calibrated multi-camera rigs. Step 2: pre-process the image sequences. Step 3: design the data term of the scene-flow energy functional. Step 4: design the smoothness term of the scene-flow energy functional. Step 5: optimize the energy functional, starting the computation from the lowest-resolution level of the image pyramid obtained in Step 2. Step 6: cluster the scene-flow motion regions. Step 7: build a motion-direction dispersion assessment model and judge whether the motion is violent. Step 8: build a motion-region kinetic-energy assessment model. Step 9: set thresholds, and trigger an alarm when the assessment conditions are met for n consecutive frames. The invention uses scene-flow estimation based on multi-view stereo vision, acquiring multiple image sequences of the same scene from calibrated multi-camera rigs, and can effectively detect violent motion using 3-D scene flow.
Description
Technical field
The present invention relates to a detection method for violent motion, and in particular to a violent-motion detection method based on multi-view stereo vision scene flow.
Background art
With the rapid development of information technology, and in particular the breakthroughs in computer vision and artificial intelligence, much work that once had to be done manually can now be completed by computers. Take video surveillance: the most common practice is still for a person to watch the monitoring displays and react to anomalous events as they occur. Since no one can stay focused long enough to monitor every event in the video, missed detections are inevitable. Using a computer to process video frames and decide whether an anomalous event has occurred is therefore of great importance.
Surveillance cameras are generally fixed in position, so the task is target detection against a static background. The classical methods for this are background subtraction, frame differencing, and optical flow. Background subtraction is computationally cheap and can update its background model as the background changes, but it is strongly affected by background variation. Frame differencing is also cheap, but its stability and robustness are poor. Neither of these two methods works well for violent-motion detection. Optical flow is computed from adjacent frames, but the resulting flow field is 2-D: it carries only in-plane motion information and loses depth. Without depth information it is difficult to assess whether motion is violent, which easily leads to false alarms and missed detections.
Scene flow contains both 3-D motion information and 3-D surface depth information, and unlike the three common methods above it can represent the true motion of object surfaces. Scene flow therefore provides enough information to judge whether motion is violent, effectively solving the violent-motion judgment problem.
Summary of the invention
The object of the present invention is to propose an adaptable violent-motion detection method based on multi-view stereo vision scene flow.

The object of the present invention is achieved as follows:
Step 1: acquire multiple image sequences using calibrated multi-camera rigs;
Step 2: pre-process the image sequences: perform multi-resolution down-sampling on the image sequences using an image pyramid, and perform coordinate-system conversion using the camera intrinsic and extrinsic parameters, establishing the relation between the image coordinate system and the camera coordinate system;
Step 3: design the data term of the scene-flow energy functional: fuse the 3-D scene-flow information and the 3-D surface depth information directly; the data term is based on a structure-tensor constancy assumption and introduces a robust penalty function;
Step 4: design the smoothness term of the scene-flow energy functional: the smoothness term applies flow-driven anisotropic smoothing that constrains the 3-D flow field V(u, v, w) and the 3-D surface depth Z simultaneously, and likewise introduces a robust penalty function;
Step 5: optimize the energy functional: minimize it to obtain the Euler-Lagrange equations, then solve the equations; the computation starts from the lowest-resolution image of the pyramid obtained in Step 2 and proceeds until the full-resolution image is reached;
Step 6: cluster the scene-flow motion regions: cluster the motion regions with a clustering algorithm, separate motion regions from the background region, and discard the background region;
Step 7: build a motion-direction dispersion assessment model and judge whether the motion is violent;
Step 8: build a motion-region kinetic-energy assessment model;
Step 9: set thresholds, and trigger an alarm when the assessment conditions are met for n consecutive frames.
The present invention can also include:
1. After the multiple image sequences are acquired in Step 1 with the calibrated multi-camera rigs, the scene flow V(u, v, w) and the depth information Z are solved for.
2. In the relation established in Step 2 between the image coordinate system and the camera coordinate system, the relation between the 2-D optical flow and the 3-D scene flow is established, where (u, v) is the 2-D optical flow and (u0, v0) are the coordinates of the optical center.
3. The design of the data term in Step 3, based on the structure-tensor constancy assumption, specifically includes:
The structure-tensor constancy assumption between times t and t+1 for the N cameras is defined as:
The structure-tensor constancy assumption at time t between the reference camera C0 and the other N-1 cameras is defined as:
The structure-tensor constancy assumption at time t+1 between the reference camera C0 and the other N-1 cameras is defined as:
In the data-term formulas above, ψ is a robust penalty function with ε = 0.0001, chosen so that it smoothly approximates the L1 norm. o is a binary occlusion mask, obtained by occlusion-boundary region detection on the stereo images, which zeroes out occluded pixels and keeps unoccluded ones. IT is the local structure tensor of the 2-D image.
4. The design of the smoothness term of the scene-flow energy functional in Step 4 specifically includes:
Regularization is applied directly to the 3-D flow field and the depth information, under a flow-driven anisotropic smoothness assumption. Sm(V) and Sd(Z) constrain the 3-D flow field and the depth information respectively; the smoothness terms are designed as:

Sm(V) = ψ(|u(x, y)x|²) + ψ(|u(x, y)y|²) + ψ(|v(x, y)x|²) + ψ(|v(x, y)y|²) + ψ(|w(x, y)x|²) + ψ(|w(x, y)y|²)

Sd(Z) = ψ(|Z(x, y)x|²) + ψ(|Z(x, y)y|²)

The complete scene-flow energy functional is as follows,
5. The clustering of scene-flow motion regions in Step 6 specifically includes: cluster the scene flow V(u, v, w) obtained in Step 5 with a clustering algorithm, separating the background from the motion regions. The feature information of the scene flow specifically includes: the three components u, v, w of the scene flow at each point; the modulus |V| of the scene flow at each point; and the angles θx, θy, θz between the scene flow at each point and the xoy, xoz, and yoz planes. Each point is thus represented by a 7-dimensional feature vector Vi,j = (u, v, w, |V|, θx, θy, θz);
The specific procedure is: the input is the similarity matrix SN×N formed by the pairwise similarities of all N data points. Initially, every sample is regarded as a potential cluster center. To find suitable cluster centers xk, the responsibility r(i, k) and availability a(i, k) are repeatedly collected from the data samples, and the update formulas are iterated to refresh the responsibilities and availabilities until m cluster centers are produced, where r(i, k) describes how well point k is suited to serve as the cluster center of data point i, and a(i, k) describes how appropriate it is for point i to choose point k as its cluster center;
For the motion regions, a flag is set: flag = 1 for a motion region and flag = 0 for the background region; the number of motion-region pixels is counted as count, and the motion region is taken as the spatial neighborhood
6. Building the motion-direction dispersion assessment model in Step 7 specifically includes:
The Z axis of the camera coordinate system is taken as the reference direction; the angle φi,j(t) between each computed motion vector and the reference direction is then calculated as follows:
The variance D(φi,j(t)) of φi,j(t) over the motion-region pixels of each frame is computed, where the mean of all the angles is used in computing the variance,
7. The kinetic energy of the motion region in each frame is computed from the calculated scene flow as follows:
The mean kinetic energy of the motion region is computed from the total kinetic energy of the motion region in each frame
8. In Step 9, an angle-variance threshold φth and a kinetic-energy threshold Wth are set; when D(φi,j(t)) > φth and the kinetic-energy condition holds, and both conditions are met for n consecutive frames, the motion is judged to be violent and an alarm is triggered.
The present invention uses scene-flow estimation based on multi-view stereo vision, acquiring multiple image sequences of the same scene through calibrated multi-camera rigs. It obtains both the scene-flow information and the 3-D surface depth information of the multi-view scene sequences, and can effectively detect violent motion using the 3-D scene flow.
Brief description of the drawings
Fig. 1 is the flow chart of the present invention.
Fig. 2 shows the stereoscopic correspondence between the image sequences captured by the multi-camera rig.
Fig. 3 is the flow chart of the scene-flow solving algorithm.
Detailed description
With reference to Fig. 1, the violent-motion detection method of the present invention based on multi-view stereo vision scene flow mainly comprises the following steps:
S1. Acquire multiple image sequences using calibrated multi-camera rigs.
S2. Pre-process the input images: perform multi-resolution down-sampling on the image sequences using an image pyramid, and perform coordinate-system conversion using the camera intrinsic and extrinsic parameters, establishing the relation between the image coordinate system and the camera coordinate system.
S3. Design the data term of the scene-flow energy functional. Unlike the conventional approach of combining optical-flow and disparity constraints, the present invention fuses the 3-D scene-flow information and the 3-D surface depth information directly. The data term is based on a structure-tensor constancy assumption and introduces a robust penalty function.
S4. Design the smoothness term of the scene-flow energy functional. The smoothness term applies flow-driven anisotropic smoothing that constrains the 3-D flow field V(u, v, w) and the 3-D surface depth Z simultaneously, and likewise introduces a robust penalty function.
S5. Optimize the energy functional. To solve for the 3-D motion V(u, v, w) and the 3-D surface depth Z, the energy functional is minimized to obtain the Euler-Lagrange equations, which are then solved. To handle the large displacements present in the scene flow, a coarse-to-fine multi-resolution computation scheme is introduced: starting from the lowest-resolution image of the pyramid obtained in S2, the model is evaluated level by level until the full-resolution image is reached.
S6. Cluster the scene-flow motion regions. The motion regions are clustered with a clustering algorithm, separating motion regions from the background region and discarding the background, which facilitates building the subsequent violent-motion judgment models.
S7. Compared with smooth motion, the 3-D scene-flow directions obtained in a violently moving target region are disordered. A motion-direction dispersion assessment model is built on this basis to judge whether the motion is violent.
S8. Compared with smooth motion, the 3-D scene-flow magnitudes obtained in a violently moving target region are larger. A motion-region kinetic-energy assessment model is built on this basis.
S9. Set the corresponding thresholds manually, and trigger an alarm when the assessment conditions are met for n consecutive frames.
The present invention is described in more detail below with reference to the accompanying drawings.
S1. As shown in Fig. 2, image sequences are acquired using calibrated multi-camera rigs. A point in the real scene moves from position P at time t to position P̄ at time t+1; the corresponding points of the two positions in the imaging plane of each camera Ci are pi (time t) and p̄i (time t+1) respectively. V(u, v, w) is the 3-D motion vector in the real world: u is the instantaneous horizontal velocity, v the instantaneous vertical velocity, and w the instantaneous velocity along the depth direction. The mapping of V(u, v, w) onto 2 dimensions is the optical flow.
S2. The present invention performs direct 3-D scene-flow estimation based on multi-view stereo vision, constraining the real-world 3-D motion flow field V(u, v, w) and the 3-D surface depth Z directly in the energy functional. The scene-flow energy functional is defined on the 2-D image plane, so the 3-D space must be mapped to 2-D space by perspective projection, establishing the mapping between the 2-D optical flow and the 3-D scene flow. I(xi, yi, t) is the image pixel of camera Ci at time t, and Mi is the projection matrix of camera Ci. P(X, Y, Z)T is the true coordinate at time t in the camera coordinate system; its mapping onto the image sequence is as follows:
where Mi is a 3 × 4 projection matrix, [Mi]1,2 denotes its first two rows and [Mi]3 its third row. The projection matrix is shown in formula (2): C is the camera intrinsic matrix, which depends only on the internal structure of the camera, and [R T] is the camera extrinsic matrix, determined by the orientation of the camera relative to the world coordinate system.
The scene-flow energy functional obtained from the relations above involves six unknowns: P(X, Y, Z)T and V(u, v, w). As shown in formula (3), a relation among X, Y, and Z can be established, reducing the six unknowns to four. Z and V are then solved from the N image-sequence pairs, where (ox, oy) is the camera principal point.
The relation between the 2-D optical flow v(u, v) and the 3-D scene flow V(u, v, w) is shown in formula (4):
The N image sequences are down-sampled at multiple resolutions with an image pyramid, with decimation factor η = 0.9, and each pyramid level is Gaussian-filtered to suppress noise.
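The pyramid construction described above (decimation factor η = 0.9 with per-level Gaussian filtering) can be sketched as follows. This is a minimal illustration, not the patent's implementation; the Gaussian σ and the stopping size are assumptions, since the patent does not state them.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def build_pyramid(image, eta=0.9, min_size=32):
    """Multi-resolution pyramid: each level is the previous one
    Gaussian-filtered and down-sampled by the decimation factor eta.
    sigma and min_size are illustrative assumptions."""
    levels = [image.astype(np.float64)]
    while min(levels[-1].shape) * eta >= min_size:
        blurred = gaussian_filter(levels[-1], sigma=1.0)  # filter out partial noise
        levels.append(zoom(blurred, eta, order=1))        # bilinear down-sampling
    return levels  # levels[0] is full resolution, levels[-1] the coarsest

pyr = build_pyramid(np.random.rand(120, 160))
```

The coarse-to-fine solver of S5 would then iterate from `pyr[-1]` back to `pyr[0]`.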
S3. Design of the data term of the scene-flow energy functional. The structure-tensor constancy assumption is applied simultaneously in space and time, giving the following equations:
The structure-tensor constancy for the N cameras between times t and t+1 is defined in formula (5):
The structure-tensor constancy at time t between the reference camera C0 and the other N-1 cameras is defined in formula (6):
The structure-tensor constancy at time t+1 between the reference camera C0 and the other N-1 cameras is defined in formula (7):
In the data-term formulas above, ψ is a robust penalty function with ε = 0.0001 that smoothly approximates the TV-L1 norm; its purpose is to reduce the influence of outliers on the functional. IT is the local structure tensor of the 2-D image, as shown in formula (8). o is a binary occlusion mask whose role is to ignore occluded pixels: it is 0 at occluded pixels and 1 at unoccluded ones. It is computed by occlusion-boundary region detection on the stereo images, using an occlusion-boundary detection algorithm based on a confidence map, which effectively detects the occlusion regions between the reference camera C0 and the other cameras.
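The ingredients of the data term described above can be sketched in a few lines. The exact forms of the penalty ψ and of the structure tensor (formula (8)) are not legible in this extraction, so the standard choices are assumed: ψ(s²) = √(s² + ε²) and the gradient outer product as the tensor.

```python
import numpy as np

EPS = 1e-4  # epsilon = 0.0001 from the data-term description

def psi(s2):
    """Robust penalty; sqrt(s^2 + eps^2) is an assumed standard form
    that smoothly approximates the L1 norm."""
    return np.sqrt(s2 + EPS ** 2)

def structure_tensor(img):
    """Local structure tensor of a 2-D image, assumed to be the outer
    product of the spatial gradient (Jxx, Jxy, Jyy per pixel)."""
    gy, gx = np.gradient(img.astype(np.float64))
    return np.stack([gx * gx, gx * gy, gy * gy], axis=-1)  # (H, W, 3)

def data_cost(t0, t1, occ):
    """Occlusion-masked structure-tensor constancy cost between two
    views/frames; occ is the binary mask (1 = visible, 0 = occluded)."""
    diff2 = np.sum((structure_tensor(t0) - structure_tensor(t1)) ** 2, axis=-1)
    return np.sum(occ * psi(diff2))
```

Each of formulas (5)-(7) is a sum of such masked costs over camera pairs.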
S4. Design of the smoothness term of the scene-flow energy functional. The depth information Z and the 3-D flow field V(u, v, w) of the reference camera C0 are assumed to be piecewise smooth. The smoothness term regularizes the 3-D flow field and the depth information directly: the flow field is smooth in 3-D space, and a flow-driven anisotropic smoothness assumption is designed to guarantee the smoothness of the scene flow.
Sd(Z) = ψ(|Z(x, y)x|²) + ψ(|Z(x, y)y|²)    (10)
Sm(V) and Sd(Z) constrain the 3-D flow field and the depth information respectively; the flow field uses a 3-D flow-driven anisotropic smoothness constraint. The complete energy functional can be written as shown in formula (11):
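The smoothness terms Sm(V) and Sd(Z) are robust penalties on the spatial gradients of the three flow components and of the depth. A minimal sketch, reusing the assumed penalty ψ(s²) = √(s² + ε²):

```python
import numpy as np

EPS = 1e-4

def psi(s2):
    # robust penalty, assumed sqrt(s^2 + eps^2)
    return np.sqrt(s2 + EPS ** 2)

def smoothness(u, v, w, Z):
    """S_m(V) + S_d(Z): robust penalties on the x/y gradients of the
    flow components u, v, w and of the depth Z (formulas (9)-(10))."""
    sm = 0.0
    for f in (u, v, w):
        fy, fx = np.gradient(f)
        sm += np.sum(psi(fx ** 2) + psi(fy ** 2))
    zy, zx = np.gradient(Z)
    sd = np.sum(psi(zx ** 2) + psi(zy ** 2))
    return sm, sd
```

The full functional of formula (11) would weight and add these to the data term.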
S5. The scene-flow solution, i.e., finding the values of Z and V that minimize the energy functional. The conventional approach is to minimize the energy functional to obtain the Euler-Lagrange equations and then solve them. The Euler-Lagrange equations of the minimized energy functional can be written as:
Before minimizing the energy functional, the structure-tensor constancy assumptions in the data term are abbreviated as follows to simplify the formulas:
Δi = IT(p0, t) − IT(pi, t)    (14)
According to the calculus of variations, minimizing the energy functional E(Z, V) by taking partial derivatives with respect to u and Z and setting them to 0 yields the following Euler-Lagrange equations:
The Euler-Lagrange equations obtained by minimizing the energy functional with respect to v and w are similar to formula (16). Nonlinearities are present in both the data term and the smoothness term, so the key difficulty in solving for the scene flow is avoiding local minima and reaching the globally optimal solution.
Since the target is violent motion, large displacements can occur. To handle large displacements, a coarse-to-fine multi-resolution computation scheme is used. With the image pyramid obtained in S2, the initial scene-flow value is set to 0, the computation starts at the lowest resolution, and the result at each level serves as the initial value for the next resolution, until the full-resolution image is reached. This effectively eliminates the inaccuracies caused by large displacements. The specific solution scheme is shown in Fig. 3, where L is the number of pyramid levels: only V(u, v, w) is computed when L ≥ K, and both the scene flow V(u, v, w) and Z are computed when 0 < L < K.
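The coarse-to-fine scheme just described can be sketched as a simple loop. The single-level Euler-Lagrange solve is abstracted as a `solve_level` callback (a placeholder, not the patent's solver), and the flow is shown as one 2-D field for brevity rather than the full (V, Z) state:

```python
import numpy as np
from scipy.ndimage import zoom

def upsample_to(field, shape):
    """Resize a 2-D field to `shape` with bilinear interpolation."""
    fy = shape[0] / field.shape[0]
    fx = shape[1] / field.shape[1]
    return zoom(field, (fy, fx), order=1)[:shape[0], :shape[1]]

def coarse_to_fine(pyramid, solve_level):
    """Fig. 3 sketch: initialize the scene flow to 0 at the coarsest level,
    solve, then up-sample each result as the initial value for the next
    finer level until full resolution is reached."""
    flow = np.zeros_like(pyramid[-1])      # initial value 0 at coarsest level
    for img in reversed(pyramid):          # coarsest -> full resolution
        flow = upsample_to(flow, img.shape)
        flow = solve_level(img, flow)      # one Euler-Lagrange refinement
    return flow

# Demo with a dummy two-level pyramid and a trivial "solver".
pyr = [np.zeros((10, 10)), np.zeros((5, 5))]
flow = coarse_to_fine(pyr, lambda img, init: init + 1)
```

In the patent's scheme the callback would additionally switch between solving V only (L ≥ K) and solving both V and Z (0 < L < K).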
S6. Clustering of the 3-D scene-flow motion V(u, v, w). Owing to noise and error sources, the scene flow V(u, v, w) computed in S5 is not exactly zero in the background region. Nonzero background scene flow would affect the subsequent violent-motion judgment, so the motion regions are clustered with a clustering algorithm, separating the background from the motion regions; discarding the background region allows the violent-motion assessment to be carried out effectively.
The goal of the clustering algorithm used in the present invention is to find the optimal set of exemplar points that maximizes the total similarity of all data points to their nearest exemplar. The algorithm, briefly, is as follows: the input is the similarity matrix SN×N formed by the pairwise similarities of all N data points, and initially every sample is regarded as a potential cluster center. For each sample point, two messages quantifying its attraction to other sample points are defined:
Responsibility: r(i, k) describes how well point k is suited to serve as the cluster center of data point i.
Availability: a(i, k) describes how appropriate it is for point i to choose point k as its cluster center.
To find suitable cluster centers xk, the algorithm repeatedly collects the evidence r(i, k) and a(i, k) from the data samples. The iterative formulas for r(i, k) and a(i, k) are as follows:
The algorithm iterates formulas (18) and (19) to update the responsibilities and availabilities until m high-quality cluster centers are produced, and the remaining data points are assigned to the corresponding clusters.
The computed scene flow contains both the background scene flow and the moving-target scene flow, which differ significantly; the scene flow at each point can differ in both amplitude and direction. The algorithm therefore takes the direction and amplitude information of the scene flow at each point as that point's features, forms the point's feature vector, and feeds it to the clustering algorithm for classification.
The feature information of the scene flow specifically includes: the three components u, v, w of the scene flow at each point; the modulus |V| of the scene flow at each point; and the angles θx, θy, θz between the scene flow at each point and the xoy, xoz, and yoz planes. Each point is represented by a 7-dimensional feature vector Vi,j = (u, v, w, |V|, θx, θy, θz). The scene flow with these 7-dimensional feature vectors is clustered, and the resulting cluster regions include the background region and the motion regions. In general, when the camera is stationary, the cluster whose motion vectors are close to 0 is judged to be the background region, and the other cluster regions are motion regions.
After the motion regions are separated from the background region, a flag is set: flag = 1 for a motion region and flag = 0 for the background region; the number of motion-region pixels is counted as count, and the motion region is taken as the spatial neighborhood
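The 7-dimensional feature construction above can be sketched as follows. The exact angle formulas are not legible in this extraction, so the angle between V and a coordinate plane is assumed to be arcsin(|component normal to the plane| / |V|).

```python
import numpy as np

def scene_flow_features(u, v, w):
    """Per-point 7-D feature vector (u, v, w, |V|, theta_x, theta_y, theta_z);
    theta_* are the (assumed) angles between V and the xoy/xoz/yoz planes."""
    V = np.stack([u, v, w], axis=-1).astype(np.float64)
    mod = np.linalg.norm(V, axis=-1)
    safe = np.where(mod > 0, mod, 1.0)          # avoid division by zero
    theta_x = np.arcsin(np.abs(w) / safe)       # angle with the xoy plane
    theta_y = np.arcsin(np.abs(v) / safe)       # angle with the xoz plane
    theta_z = np.arcsin(np.abs(u) / safe)       # angle with the yoz plane
    return np.stack([u, v, w, mod, theta_x, theta_y, theta_z], axis=-1)
```

The output (H, W, 7) array, flattened to (H·W, 7), is what the clustering step would consume.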
S7. Suitable assessment models must be established to judge from the motion-region scene flow V(u, v, w) obtained in S6 whether the motion is violent.
A motion-direction assessment model is established from the distribution of scene-flow directions. S6 has isolated, for each frame, the moving target's motion region in the camera coordinate system. For normal smooth motion, analysis of the motion-vector directions of the motion region shows that they concentrate mainly in one direction, whereas the direction distribution of violent motion is much more scattered. If a direction histogram of the motion vectors is built, the histogram of violent motion is more dispersed and that of smooth motion more concentrated.
To assess the direction of each motion vector quantitatively, the Z axis of the camera coordinate system is taken as the reference direction. The direction of each motion vector is determined by computing the angle between each moving point in the motion region and the reference direction. S5 provides, for each pixel of frame n in the camera coordinate system, the horizontal velocity ui,j(t), the vertical velocity vi,j(t), and the depth-direction velocity wi,j(t). The angle φi,j(t) between the vector and the reference direction is shown in formula (20):
To judge whether the motion is violent, the variance D(φi,j(t)) of φi,j(t) over all moving points is computed, as shown in formula (21), where the mean of all the angles is used in computing the variance.
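A sketch of the dispersion model: since formulas (20)-(21) are not legible in this extraction, the angle to the Z axis is assumed to be arccos(w / |V|), and the dispersion is the plain variance of that angle over motion-region pixels.

```python
import numpy as np

def direction_dispersion(u, v, w, flag):
    """Motion-direction dispersion: phi is the (assumed) angle between each
    motion vector and the camera Z axis; the result is the variance of phi
    over the motion-region pixels (flag == 1)."""
    mod = np.sqrt(u ** 2 + v ** 2 + w ** 2)
    mask = (flag == 1) & (mod > 0)
    phi = np.arccos(np.clip(w[mask] / mod[mask], -1.0, 1.0))
    return np.var(phi)  # D(phi): variance around the mean angle
```

Scattered directions (violent motion) give a large variance; a region moving coherently along one direction gives a variance near 0.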
S8. A motion-energy assessment model is established from the kinetic energy of the motion region. The kinetic energy of the motion region in each frame of the scene flow is computed as follows:
The mean kinetic energy per pixel of the motion region can be computed from the total kinetic energy of the motion region in each frame
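A sketch of the kinetic-energy model: the energy formulas are not legible in this extraction, so the standard per-pixel kinetic energy ½|V|² with unit mass is assumed.

```python
import numpy as np

def kinetic_energy(u, v, w, flag):
    """Total and mean kinetic energy of the motion region (flag == 1),
    assuming per-pixel energy 0.5 * |V|^2 with unit mass."""
    mask = flag == 1
    e = 0.5 * (u[mask] ** 2 + v[mask] ** 2 + w[mask] ** 2)
    total = float(np.sum(e))
    count = int(np.sum(mask))
    mean = total / count if count else 0.0
    return total, mean
```

The mean value is what gets compared against the threshold Wth in S9.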
S9. The angle-variance threshold φth and the per-pixel kinetic-energy threshold Wth are set manually. From S7 and S8, when D(φi,j(t)) > φth and the kinetic-energy condition holds, and both conditions are met for n consecutive frames, the motion is judged to be abnormally violent and an alarm is triggered.
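The n-consecutive-frame trigger logic of S9 can be sketched as a tiny state machine; the threshold values used here are illustrative placeholders, not values from the patent.

```python
class ViolenceAlarm:
    """Trigger when BOTH the angle-variance threshold phi_th and the
    mean-kinetic-energy threshold w_th are exceeded for n consecutive frames."""
    def __init__(self, phi_th, w_th, n):
        self.phi_th, self.w_th, self.n = phi_th, w_th, n
        self.streak = 0  # consecutive frames meeting both conditions

    def update(self, d_phi, mean_energy):
        if d_phi > self.phi_th and mean_energy > self.w_th:
            self.streak += 1
        else:
            self.streak = 0  # any quiet frame resets the streak
        return self.streak >= self.n  # True => raise the alarm
```

Per frame, `update` is fed the dispersion D(φ) from S7 and the mean kinetic energy from S8.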
The present invention is the first to use 3-D scene flow for violent-motion detection, and achieves effective violent-motion detection and alarm functionality.
Claims (8)
- 1. A violent-motion detection method based on multi-view stereo vision scene flow, characterized by comprising the following steps: Step 1: acquiring multiple image sequences using calibrated multi-camera rigs; Step 2: pre-processing the image sequences: performing multi-resolution down-sampling on the image sequences using an image pyramid, and performing coordinate-system conversion using the camera intrinsic and extrinsic parameters, establishing the relation between the image coordinate system and the camera coordinate system; Step 3: designing the data term of the scene-flow energy functional: fusing the 3-D scene-flow information and the 3-D surface depth information directly, the data term being based on a structure-tensor constancy assumption and introducing a robust penalty function; Step 4: designing the smoothness term of the scene-flow energy functional: the smoothness term applying flow-driven anisotropic smoothing that constrains the 3-D flow field V(u, v, w) and the 3-D surface depth Z simultaneously, the smoothness term likewise introducing a robust penalty function; Step 5: optimizing the energy functional: minimizing the energy functional to obtain the Euler-Lagrange equations, then solving the equations, the computation starting from the lowest-resolution image of the pyramid obtained in Step 2 and proceeding until the full-resolution image is reached; Step 6: clustering the scene-flow motion regions: clustering the motion regions with a clustering algorithm, separating motion regions from the background region, and discarding the background region; Step 7: building a motion-direction dispersion assessment model and judging whether the motion is violent; Step 8: building a motion-region kinetic-energy assessment model; Step 9: setting thresholds, and triggering an alarm when the assessment conditions are met for n consecutive frames.
- 2. The violent-motion detection method based on multi-view stereo vision scene flow according to claim 1, characterized in that: in the relation established in Step 2 between the image coordinate system and the camera coordinate system, the relation between the 2-D optical flow and the 3-D scene flow is established, where (u, v) is the 2-D optical flow and (u0, v0) are the coordinates of the optical center.
- 3. The violent-motion detection method based on multi-view stereo vision scene flow according to claim 1, characterized in that: the design of the data term in Step 3, based on the structure-tensor constancy assumption, specifically includes: the structure-tensor constancy assumption between times t and t+1 for the N cameras, defined as $D_m(Z, V) = \sum_{i=0}^{N-1} o_m^i \, \psi\!\left(\left|I_T(p_i, t) - I_T(\bar{p}_i, t+1)\right|^2\right)$; the structure-tensor constancy assumption at time t between the reference camera C0 and the other N-1 cameras, defined as $D_t(Z, V) = \sum_{i=1}^{N-1} o_t^i \, \psi\!\left(\left|I_T(p_0, t) - I_T(p_i, t)\right|^2\right)$; and the structure-tensor constancy assumption at time t+1 between the reference camera C0 and the other N-1 cameras, defined as $D_{t+1}(Z, V) = \sum_{i=1}^{N-1} o_{t+1}^i \, \psi\!\left(\left|I_T(\bar{p}_0, t+1) - I_T(\bar{p}_i, t+1)\right|^2\right)$; in the data-term formulas above, ψ is a robust penalty function chosen so that it smoothly approximates the L1 norm, o is a binary occlusion mask obtained by occlusion-boundary region detection on the stereo images, which zeroes out occluded pixels and keeps unoccluded ones, and IT is the local structure tensor of the 2-D image.
- 4. The violent motion detection method based on multi-view stereoscopic vision scene flow according to claim 1, characterized in that the design of the smoothness term of the scene-flow energy functional in step four specifically includes the following.

Regularization is applied directly to the 3-D flow field and the depth information, and a flow-driven anisotropic smoothness assumption is designed; $S_m(V)$ and $S_d(Z)$ constrain the 3-D flow field and the depth information respectively, with the smoothness terms designed as

$$S_m(V)=\psi\left(|u(x,y)_x|^2\right)+\psi\left(|u(x,y)_y|^2\right)+\psi\left(|v(x,y)_x|^2\right)+\psi\left(|v(x,y)_y|^2\right)+\psi\left(|w(x,y)_x|^2\right)+\psi\left(|w(x,y)_y|^2\right)$$

$$S_d(Z)=\psi\left(|Z(x,y)_x|^2\right)+\psi\left(|Z(x,y)_y|^2\right)$$

The complete scene-flow estimation energy functional then follows.
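The smoothness term $S_m(V)$ penalises the x- and y-gradients of each scene-flow component. A minimal numerical sketch (illustrative only; finite differences stand in for the partial derivatives, and the Charbonnier form of $\psi$ is an assumption):

```python
import numpy as np

def psi(s2, eps=1e-3):
    """Charbonnier penalty, smooth approximation of the L1 norm."""
    return np.sqrt(s2 + eps ** 2)

def smoothness_term(u, v, w):
    """S_m(V): robust penalty on the x- and y-gradients of each
    scene-flow component, approximated with finite differences."""
    total = 0.0
    for comp in (u, v, w):
        gy, gx = np.gradient(comp)  # np.gradient returns (d/dy, d/dx) for 2-D arrays
        total += float(np.sum(psi(gx ** 2)) + np.sum(psi(gy ** 2)))
    return total
```

A spatially constant flow field attains the minimum of this term; any spatial variation in u, v or w increases it, which is the intended regularizing behaviour.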
- 5. The violent motion detection method based on multi-view stereoscopic vision scene flow according to claim 1, characterized in that the clustering of scene-flow motion regions in step six specifically includes the following.

The scene flow V(u, v, w) obtained in step five is clustered with a clustering algorithm to separate the background from the motion regions. The feature information of the scene flow specifically includes: the three components u, v, w of the scene flow at each point; the modulus of the scene flow at each point, $|V|=\sqrt{u^2+v^2+w^2}$; and the angles $\theta_x,\theta_y,\theta_z$ between the scene flow at each point and the xoy, xoz and yoz planes. Each point is thus represented by the 7-dimensional feature vector $V_{i,j}=(u,v,w,|V|,\theta_x,\theta_y,\theta_z)$.

The specific procedure is: the input is the similarity matrix $S_{N\times N}$ formed by the pairwise similarities of all N data points. In the initial stage all samples are regarded as potential cluster centres; then, to find suitable cluster centres $x_k$, the responsibility (attraction degree) r(i, k) and the availability (reliability) a(i, k) are continually collected from the data samples, and their update formulas are iterated until M cluster-centre points are produced, where r(i, k) describes how well point k is suited as the cluster centre of data point i, and a(i, k) describes how appropriate it is for point i to choose point k as its cluster centre.

For the motion region a flag is set: flag = 1 for a motion-region pixel and flag = 0 for a background pixel; the number of motion-region pixels is counted as count, and the motion region is taken as the spatial neighbourhood $\bar{\Omega}$.
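The 7-dimensional per-pixel feature construction can be sketched as below (an illustration only; the convention that the angle between the flow vector and a coordinate plane equals arcsin of the plane-normal component over the modulus is an assumption, and the affinity-propagation clustering step itself is omitted):

```python
import numpy as np

def scene_flow_features(u, v, w):
    """Per-pixel 7-D feature vector (u, v, w, |V|, theta_x, theta_y, theta_z).
    theta_* are the angles to the xoy, xoz and yoz planes, taken here as
    arcsin(|normal component| / |V|) -- an assumed convention."""
    mag = np.sqrt(u ** 2 + v ** 2 + w ** 2)
    safe = np.where(mag > 0, mag, 1.0)       # avoid 0/0 at static pixels
    theta_x = np.arcsin(np.abs(w) / safe)    # angle to xoy plane (normal: z)
    theta_y = np.arcsin(np.abs(v) / safe)    # angle to xoz plane (normal: y)
    theta_z = np.arcsin(np.abs(u) / safe)    # angle to yoz plane (normal: x)
    return np.stack([u, v, w, mag, theta_x, theta_y, theta_z], axis=-1)
```

The resulting H×W×7 array is what would then be flattened and fed to a clustering algorithm such as affinity propagation to separate motion pixels from the background.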
- 6. The violent motion detection method based on multi-view stereoscopic vision scene flow according to claim 1, characterized in that the construction of the motion-direction dispersion assessment model in step seven specifically includes the following.

The z-axis of the camera coordinate system is taken as the reference vector direction, and the angle $\varphi_{i,j}(t)$ between each estimated motion vector and the reference direction is computed as

$$\varphi_{i,j}(t)=\begin{cases}\arccos\dfrac{w_{i,j}(t)}{\sqrt{|u_{i,j}(t)|^2+|v_{i,j}(t)|^2+|w_{i,j}(t)|^2}}, & \text{if } flag=1\\ 0, & \text{if } flag=0\end{cases}$$

The variance $D(\varphi_{i,j}(t))$ of $\varphi_{i,j}(t)$ over the motion-region pixels of each frame is then computed, where $\bar{\varphi}_{i,j}(t)$ is the mean of all the angles:

$$D(\varphi_{i,j}(t))=\frac{\sum_{\bar{\Omega}}\left(\varphi_{i,j}(t)-\bar{\varphi}_{i,j}(t)\right)^2}{count}$$
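This dispersion measure can be sketched directly from the definitions above (illustrative only; `flag` marks motion-region pixels as in the clustering step):

```python
import numpy as np

def direction_dispersion(u, v, w, flag):
    """phi = arccos(w / |V|) on motion pixels (flag == 1), 0 elsewhere;
    returns phi and the variance of phi over the motion region."""
    mag = np.sqrt(u ** 2 + v ** 2 + w ** 2)
    ratio = np.clip(w / np.where(mag > 0, mag, 1.0), -1.0, 1.0)
    phi = np.where(flag == 1, np.arccos(ratio), 0.0)
    motion = phi[flag == 1]
    if motion.size == 0:
        return phi, 0.0
    return phi, float(np.mean((motion - motion.mean()) ** 2))
```

A crowd moving coherently toward the cameras gives near-zero variance, while chaotic (violent) motion spreads the angles and drives the variance up, which is the quantity the threshold in claim 8 is applied to.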
- 7. The violent motion detection method based on multi-view stereoscopic vision scene flow according to claim 1, characterized in that the kinetic energy of the motion region of each frame is computed from the calculated scene flow as follows:

$$W(t)=\frac{1}{2}\sum_{\bar{\Omega}}\left(|u_{i,j}(t)|^2+|v_{i,j}(t)|^2+|w_{i,j}(t)|^2\right)$$

The mean kinetic energy of the motion region is obtained from the total kinetic energy of each frame:

$$\bar{W}(t)=\frac{W(t)}{count}$$
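Both quantities follow directly from the flow components (a minimal sketch; `flag` and `count` are as defined in the clustering step):

```python
import numpy as np

def region_kinetic_energy(u, v, w, flag):
    """Total 'kinetic energy' W(t) = 1/2 * sum of squared flow magnitudes
    over the motion region, and its mean (total / motion-pixel count)."""
    m = flag == 1
    total = 0.5 * float(np.sum(u[m] ** 2 + v[m] ** 2 + w[m] ** 2))
    count = int(np.count_nonzero(m))
    return total, (total / count if count else 0.0)
```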
- 8. The violent motion detection method based on multi-view stereoscopic vision scene flow according to claim 1, characterized in that in step nine an angle-variance threshold $\varphi_{th}$ and a kinetic-energy threshold $W_{th}$ are set; when $D(\varphi_{i,j}(t))>\varphi_{th}$ and $\bar{W}(t)>W_{th}$, and the two conditions above are satisfied for n consecutive frames, violent motion is judged to have occurred and an alarm is triggered.
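The per-frame decision with the n-consecutive-frames requirement can be sketched as follows (threshold names come from the claim; the streak-counting logic is an assumed straightforward reading of "n consecutive frames"):

```python
def violent_motion_alarm(variances, mean_energies, phi_th, w_th, n):
    """Return True as soon as the angle variance and the mean kinetic
    energy both exceed their thresholds for n consecutive frames."""
    streak = 0
    for var, energy in zip(variances, mean_energies):
        # reset the streak whenever either condition fails
        streak = streak + 1 if (var > phi_th and energy > w_th) else 0
        if streak >= n:
            return True
    return False
```

Requiring n consecutive frames rather than a single exceedance suppresses spurious alarms from one-frame flow-estimation noise.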
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710404056.7A CN107341815B (en) | 2017-06-01 | 2017-06-01 | Violent motion detection method based on multi-view stereoscopic vision scene stream |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710404056.7A CN107341815B (en) | 2017-06-01 | 2017-06-01 | Violent motion detection method based on multi-view stereoscopic vision scene stream |
Publications (2)
Publication Number | Publication Date |
---|---|
CN107341815A true CN107341815A (en) | 2017-11-10 |
CN107341815B CN107341815B (en) | 2020-10-16 |
Family
ID=60221390
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710404056.7A Active CN107341815B (en) | 2017-06-01 | 2017-06-01 | Violent motion detection method based on multi-view stereoscopic vision scene stream |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107341815B (en) |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109726718A (en) * | 2019-01-03 | 2019-05-07 | 电子科技大学 | A kind of visual scene figure generation system and method based on relationship regularization |
CN109978968A (en) * | 2019-04-10 | 2019-07-05 | 广州虎牙信息科技有限公司 | Video rendering method, apparatus, equipment and the storage medium of Moving Objects |
CN110827328A (en) * | 2018-08-07 | 2020-02-21 | 三星电子株式会社 | Self-motion estimation method and device |
WO2020238008A1 (en) * | 2019-05-29 | 2020-12-03 | 北京市商汤科技开发有限公司 | Moving object detection method and device, intelligent driving control method and device, medium, and apparatus |
CN112581494A (en) * | 2020-12-30 | 2021-03-30 | 南昌航空大学 | Binocular scene flow calculation method based on pyramid block matching |
CN112614151A (en) * | 2021-03-08 | 2021-04-06 | 浙江大华技术股份有限公司 | Motion event detection method, electronic device and computer-readable storage medium |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20150063682A1 (en) * | 2012-05-17 | 2015-03-05 | The Regents Of The University Of California | Video disparity estimate space-time refinement method and codec |
CN104680544A (en) * | 2015-03-18 | 2015-06-03 | 哈尔滨工程大学 | Method for estimating variational scene flow based on three-dimensional flow field regularization |
CN106485675A (en) * | 2016-09-27 | 2017-03-08 | 哈尔滨工程大学 | A kind of scene flows method of estimation guiding anisotropy to smooth based on 3D local stiffness and depth map |
CN106504202A (en) * | 2016-09-27 | 2017-03-15 | 哈尔滨工程大学 | A kind of based on the non local smooth 3D scene flows methods of estimation of self adaptation |
- 2017-06-01: application CN201710404056.7A filed in China; granted as CN107341815B (status: Active)
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20150063682A1 (en) * | 2012-05-17 | 2015-03-05 | The Regents Of The University Of California | Video disparity estimate space-time refinement method and codec |
CN104680544A (en) * | 2015-03-18 | 2015-06-03 | 哈尔滨工程大学 | Method for estimating variational scene flow based on three-dimensional flow field regularization |
CN106485675A (en) * | 2016-09-27 | 2017-03-08 | 哈尔滨工程大学 | A kind of scene flows method of estimation guiding anisotropy to smooth based on 3D local stiffness and depth map |
CN106504202A (en) * | 2016-09-27 | 2017-03-15 | 哈尔滨工程大学 | A kind of based on the non local smooth 3D scene flows methods of estimation of self adaptation |
Non-Patent Citations (2)
Title |
---|
FREDERIC HUGUET et al.: "A Variational Method for Scene Flow Estimation from Stereo Sequences", 2007 IEEE 11th International Conference on Computer Vision * |
YANG Wenkang: "Moving Object Detection and Tracking Based on Binocular Scene Flow", China Master's Theses Full-text Database, Information Science and Technology * |
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110827328A (en) * | 2018-08-07 | 2020-02-21 | 三星电子株式会社 | Self-motion estimation method and device |
CN109726718A (en) * | 2019-01-03 | 2019-05-07 | 电子科技大学 | A kind of visual scene figure generation system and method based on relationship regularization |
CN109726718B (en) * | 2019-01-03 | 2022-09-16 | 电子科技大学 | Visual scene graph generation system and method based on relation regularization |
CN109978968A (en) * | 2019-04-10 | 2019-07-05 | 广州虎牙信息科技有限公司 | Video rendering method, apparatus, equipment and the storage medium of Moving Objects |
WO2020238008A1 (en) * | 2019-05-29 | 2020-12-03 | 北京市商汤科技开发有限公司 | Moving object detection method and device, intelligent driving control method and device, medium, and apparatus |
CN112581494A (en) * | 2020-12-30 | 2021-03-30 | 南昌航空大学 | Binocular scene flow calculation method based on pyramid block matching |
CN112581494B (en) * | 2020-12-30 | 2023-05-02 | 南昌航空大学 | Binocular scene flow calculation method based on pyramid block matching |
CN112614151A (en) * | 2021-03-08 | 2021-04-06 | 浙江大华技术股份有限公司 | Motion event detection method, electronic device and computer-readable storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN107341815B (en) | 2020-10-16 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107341815A (en) | Strenuous exercise's detection method based on multi-view stereo vision scene flows | |
Chen et al. | Scale pyramid network for crowd counting | |
CN111598030B (en) | Method and system for detecting and segmenting vehicle in aerial image | |
Mancas et al. | Abnormal motion selection in crowds using bottom-up saliency | |
CN110929578B (en) | Anti-shielding pedestrian detection method based on attention mechanism | |
CN103824070B (en) | A kind of rapid pedestrian detection method based on computer vision | |
CN106845364B (en) | Rapid automatic target detection method | |
Derpanis et al. | Classification of traffic video based on a spatiotemporal orientation analysis | |
CN110929593A (en) | Real-time significance pedestrian detection method based on detail distinguishing and distinguishing | |
CN112215074A (en) | Real-time target identification and detection tracking system and method based on unmanned aerial vehicle vision | |
CN112990077B (en) | Face action unit identification method and device based on joint learning and optical flow estimation | |
CN105404857A (en) | Infrared-based night intelligent vehicle front pedestrian detection method | |
Gutoski et al. | Detection of video anomalies using convolutional autoencoders and one-class support vector machines | |
CN102034267A (en) | Three-dimensional reconstruction method of target based on attention | |
CN104680554B (en) | Compression tracking and system based on SURF | |
CN112766123B (en) | Crowd counting method and system based on criss-cross attention network | |
CN110910421A (en) | Weak and small moving object detection method based on block characterization and variable neighborhood clustering | |
Hu et al. | Parallel spatial-temporal convolutional neural networks for anomaly detection and location in crowded scenes | |
CN114049572A (en) | Detection method for identifying small target | |
CN115761563A (en) | River surface flow velocity calculation method and system based on optical flow measurement and calculation | |
CN111339934A (en) | Human head detection method integrating image preprocessing and deep learning target detection | |
CN114037684A (en) | Defect detection method based on yolov5 and attention mechanism model | |
Hua et al. | Background extraction using random walk image fusion | |
CN113436130A (en) | Intelligent sensing system and device for unstructured light field | |
CN105303544A (en) | Video splicing method based on minimum boundary distance |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||