CN104301735B - Global coding method and system for urban traffic surveillance video - Google Patents
Global coding method and system for urban traffic surveillance video
- Publication number: CN104301735B
- Application number: CN201410616965.3A
- Authority
- CN
- China
- Prior art keywords
- vehicle
- global
- global motion
- parameter
- video
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/18—Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
Landscapes
- Compression Or Coding Systems Of Tv Signals (AREA)
Abstract
The invention discloses a global coding method and system for urban traffic surveillance video, comprising the steps of: Step 1, segmenting the original surveillance video into a vehicle video and a vehicle-removed video; Step 2, encoding the vehicle-removed video using a selective differential coding mode; Step 3, extracting the global feature parameter set of the globally moving vehicles in the vehicle video; Step 4, performing global coding on the vehicle video based on the global feature parameters. On top of removing scene redundancy, the present invention further eliminates the global redundancy in surveillance video, effectively improving the coding compression efficiency of urban traffic surveillance video.
Description
Technical field
The invention belongs to the field of urban traffic surveillance video coding technology, and in particular relates to a global coding method and system for urban traffic surveillance video.
Background art
The goal of video compression coding is to represent video information with as few bits as possible while guaranteeing a certain reconstruction quality. Traditional video coding methods based on Shannon information theory start at the signal-processing level, take pixels and blocks as the basic representation units, and adopt a hybrid coding framework combining transform, prediction and entropy coding, improving compression performance by exploiting the spatio-temporal redundancy of the image/video signal itself. However, most current video compression techniques are application-agnostic. In recent years, video compression techniques developed for the characteristics and demands of specialized applications (such as surveillance video) have become a research direction of great interest, for example the coding and transmission of surveillance video in urban traffic environments. Targeting the long-term stability of surveillance scenes, AVS-S2 models the surveillance background and foreground and selectively encodes each block in either the original mode or the difference mode, eliminating the large amount of "scene redundancy"; its coding efficiency is twice that of H.264/AVC, and it is the first international standard oriented to video surveillance. However, AVS-S2 cannot remove the "global redundancy" produced by the motion of global objects, so its compression-efficiency gain is limited, and the contradiction between data volume and storage capacity remains acute.
In surveillance video, vehicles of different models share similar video texture characteristics; vehicles of the same model share the homogeneity of a 3D object; and one specific vehicle exhibits long-term stability of its appearance. Urban working vehicles with such similarity, homogeneity and long-term stability are captured repeatedly by the surveillance cameras spread all over a city, generating a large amount of redundancy in urban surveillance data. Most urban surveillance points operate in an under-covered state, and the data produced by the movement of vehicles and people constitutes the main source of urban surveillance data. The video-data redundancy produced when the same moving vehicle is recorded repeatedly by the massive number of cameras in a metropolitan area is referred to as global redundancy. Texture similarity between different moving objects, structural homogeneity within the same class of semantic objects, and the long-term self-similarity of a specific object together generate a large amount of global redundancy among moving objects. Traditional video coding and scene-redundancy-removal techniques remove only local spatio-temporal redundancy; the global redundancy produced by vehicles being imaged and recorded many times over long periods in surveillance video therefore offers huge room for further improving video compression efficiency.
Summary of the invention
In view of the deficiencies of the prior art, the present invention provides a global coding method and system for urban traffic surveillance video that takes global redundancy into account; the method can further improve the coding efficiency of urban traffic surveillance video.
To solve the above technical problem, the present invention adopts the following technical scheme:
(1) A global coding method for urban traffic surveillance video, comprising the steps of:
Step 1, segmenting the original surveillance video into a vehicle video and a vehicle-removed video;
Step 2, encoding the vehicle-removed video using a selective differential coding mode;
Step 3, extracting the global feature parameter set of the globally moving vehicles in the vehicle video, further comprising:
S31, extracting the 2D appearance features of the globally moving vehicles;
S32, building a vehicle 3D model database, the database including generic 3D models, fine 3D models and key description parameter sets for various vehicle types, the key description parameters being obtained by dimensionality reduction of the model description parameter set;
S33, building the global vehicle texture dictionary of the globally moving vehicles by sparse coding, further comprising:
Taking min over D1, a1 of Σ_{c=1..C} ( ||y_c − D1·a1||₂² + τ·||a1||₁ ) as the cost function, obtaining from the texture information of the globally moving vehicles the first-layer knowledge dictionary, i.e. the dictionary of the common visual texture information of all classes of globally moving vehicles;
Obtaining the residual information r_c between each class of globally moving vehicles reconstructed through the first-layer knowledge dictionary and the original globally moving vehicles; taking min over D2, a2,c of Σ_{m=1..M} ( ||r_c − D2·a2,c||₂² + τ·||a2,c||₁ ) as the cost function, obtaining from r_c the second-layer knowledge dictionary, i.e. the dictionary of the three-dimensional structure and texture information specific to each class of globally moving vehicles;
Obtaining the residual information r_c,m between each individual globally moving vehicle within a class reconstructed through the second-layer knowledge dictionary and the original vehicle; taking min over D3, a3,c,m of Σ_{i=1..N} ( ||r_c,m − D3·a3,c,m||₂² + τ·||a3,c,m||₁ ) as the cost function, obtaining from r_c,m the third-layer knowledge dictionary, i.e. the dictionary of long-term individual update information of each globally moving vehicle;
In the above, D1 denotes the first-layer knowledge dictionary; C denotes the number of classes of globally moving vehicles and c the class index; y_c denotes the texture information of the class-c globally moving vehicles; a1 denotes the coding coefficients; τ is a balance factor set according to the actual situation and experience, and the larger τ is, the sparser the coding coefficients are. D2 denotes the second-layer knowledge dictionary; M denotes the number of individual vehicles within a given class and m the index of an individual vehicle within that class; a2,c denotes the coding coefficients. D3 denotes the third-layer knowledge dictionary; N denotes the number of individual vehicles of the given class and i the index of an individual vehicle within that class; a3,c,m denotes the coding coefficients.
S34, matching the 2D appearance features of the globally moving vehicles against the models in the vehicle 3D model database to obtain the texture and key model description parameter information of the globally moving vehicles;
S35, extracting the position information of the globally moving vehicles from their 2D appearance features, and combining it with the attitude information in the corresponding key model description parameters to form the position and attitude parameters of the globally moving vehicles;
S36, lossless-compressing the global feature parameter set and the coding coefficients of the three-layer knowledge dictionaries, the global feature parameter set consisting of the texture and key model description parameter information obtained in step S34 and the position and attitude parameters obtained in step S35;
Step 4, performing global coding on the vehicle video based on the global feature parameters.
In step 1, the original surveillance video is segmented into the vehicle video and the vehicle-removed video using background modeling and vehicle detection techniques, specifically including:
S11, converting the original surveillance video images to YUV space and establishing an automatically updated background model based on background subtraction;
S12, detecting vehicles in the original surveillance video images using a vehicle detection method to obtain the vehicle video images;
S13, subtracting the vehicle video images from the original surveillance video images to obtain the vehicle-removed video images containing background holes;
S14, filling the background holes in the video images obtained in S13 by overlaying the background model, obtaining the vehicle-removed video images.
Step 2 further comprises the sub-steps:
S21, generating a background image from the vehicle-removed video images and reconstructing the background image after encoding;
S22, performing global motion estimation on the vehicle-removed video images to obtain global motion vectors;
S23, based on the reconstructed background image and the global motion vectors, selectively encoding each video block using either the original coding mode or the differential coding mode.
Sub-step S32 further comprises:
(1) building mesh-based generic vehicle 3D models;
(2) obtaining fine vehicle 3D models;
(3) obtaining the 3D model description parameter set from the generic vehicle 3D models;
(4) performing dimensionality reduction on the 3D model description parameter set to obtain the key description parameters.
Sub-step S35 further comprises:
(1) determining the position and angle parameters of a globally moving vehicle, ρ = [x, y, θ]^T, where x, y are the vertical-projection coordinates of the vehicle centre in the world coordinate system, and θ is the angle between the vehicle's main motion direction and the OX axis;
(2) extracting the motion regions in the vehicle video by background modeling;
(3) obtaining the two-dimensional motion vector of the globally moving vehicle using a sparse optical flow method;
(4) obtaining the main motion direction θ and the speed v of the globally moving vehicle in the world coordinate system;
(5) iteratively matching the two-dimensional projection of the generic 3D model matched to the vehicle against the size and shape of the motion region, obtaining the position parameters (x, y) of the vehicle in the world coordinate system.
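Sub-steps (1), (3) and (4) above can be sketched with a minimal numeric example. The helper below is illustrative only (the function name and its two-position input are assumptions, not from the patent): given two tracked world-coordinate positions of the vehicle centre, it recovers the angle θ with the OX axis and the speed v.

```python
import math

def heading_and_speed(p0, p1, dt):
    """Estimate the main motion direction theta (angle with the OX axis,
    in radians) and the speed v from two vertical-projection positions
    of the vehicle centre in the world coordinate system, dt seconds
    apart."""
    dx = p1[0] - p0[0]
    dy = p1[1] - p0[1]
    theta = math.atan2(dy, dx)      # angle of the motion vector with OX
    v = math.hypot(dx, dy) / dt     # travelled distance over time
    return theta, v

# Vehicle centre moved from (0, 0) to (3, 4) metres in half a second.
theta, v = heading_and_speed((0.0, 0.0), (3.0, 4.0), dt=0.5)
```

In a full pipeline the two positions would come from the sparse optical flow of sub-step (3) projected into world coordinates.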
Step 4 further comprises the sub-steps:
S41, performing motion estimation and motion compensation on the globally moving vehicles based on the global vehicle feature parameters, obtaining the residual parameter information;
S42, obtaining the illumination compensation parameters of the globally moving vehicles;
S43, merging the residual parameter information with the illumination compensation parameters and lossless-coding them.
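The patent does not fix a concrete form for the illumination compensation parameters of S42; a common minimal choice is a per-vehicle gain/offset model, sketched below on synthetic data (all names and the model form are illustrative assumptions):

```python
import numpy as np

def illumination_params(pred, obs):
    """Fit a gain/offset illumination model obs ~ g*pred + b by least
    squares. This is one possible form of the illumination compensation
    parameters of S42; the patent leaves the model unspecified."""
    A = np.stack([pred.ravel(), np.ones(pred.size)], axis=1)
    (g, b), *_ = np.linalg.lstsq(A, obs.ravel(), rcond=None)
    return g, b

def residual_after_compensation(pred, obs, g, b):
    """S41/S43 flavour: residual between the observed vehicle pixels and
    the motion-compensated, illumination-compensated prediction."""
    return obs - (g * pred + b)

pred = np.arange(16.0).reshape(4, 4)    # motion-compensated prediction
obs = 1.5 * pred + 8.0                  # synthetic illumination change
g, b = illumination_params(pred, obs)
res = residual_after_compensation(pred, obs, g, b)
```

The residual and the pair (g, b) would then be passed to the lossless coder of S43.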
(2) A global coding system for urban traffic surveillance video, comprising:
(1) a video segmentation module, for segmenting the original surveillance video into a vehicle video and a vehicle-removed video;
(2) a selective differential coding module, for encoding the vehicle-removed video using a selective differential coding mode;
(3) a global feature parameter extraction module, for extracting the global feature parameter set of the globally moving vehicles in the vehicle video, this module further comprising the sub-modules:
a 2D appearance feature extraction module, for extracting the 2D appearance features of the globally moving vehicles;
a vehicle 3D model database building module, for building the vehicle 3D model database, which includes generic 3D models, fine 3D models and key description parameter sets for various vehicle types, the key description parameters being obtained by dimensionality reduction of the model description parameter set;
a global vehicle texture dictionary building module, for building the global vehicle texture dictionary of the globally moving vehicles by sparse coding, further comprising:
Taking min over D1, a1 of Σ_{c=1..C} ( ||y_c − D1·a1||₂² + τ·||a1||₁ ) as the cost function, obtaining from the texture information of the globally moving vehicles the first-layer knowledge dictionary, i.e. the dictionary of the common visual texture information of all classes of globally moving vehicles;
Obtaining the residual information r_c between each class of globally moving vehicles reconstructed through the first-layer knowledge dictionary and the original globally moving vehicles; taking min over D2, a2,c of Σ_{m=1..M} ( ||r_c − D2·a2,c||₂² + τ·||a2,c||₁ ) as the cost function, obtaining from r_c the second-layer knowledge dictionary, i.e. the dictionary of the three-dimensional structure and texture information specific to each class of globally moving vehicles;
Obtaining the residual information r_c,m between each individual globally moving vehicle within a class reconstructed through the second-layer knowledge dictionary and the original vehicle; taking min over D3, a3,c,m of Σ_{i=1..N} ( ||r_c,m − D3·a3,c,m||₂² + τ·||a3,c,m||₁ ) as the cost function, obtaining from r_c,m the third-layer knowledge dictionary, i.e. the dictionary of long-term individual update information of each globally moving vehicle;
In the above, D1 denotes the first-layer knowledge dictionary; C denotes the number of classes of globally moving vehicles and c the class index; y_c denotes the texture information of the class-c globally moving vehicles; a1 denotes the coding coefficients; τ is a balance factor set according to the actual situation and experience, and the larger τ is, the sparser the coding coefficients are. D2 denotes the second-layer knowledge dictionary; M denotes the number of individual vehicles within a given class and m the index of an individual vehicle within that class; a2,c denotes the coding coefficients. D3 denotes the third-layer knowledge dictionary; N denotes the number of individual vehicles of the given class and i the index of an individual vehicle within that class; a3,c,m denotes the coding coefficients.
a texture and key model description parameter acquisition module, for matching the 2D appearance features of the globally moving vehicles against the models in the vehicle 3D model database to obtain the texture and key model description parameter information of the vehicles;
a position and attitude parameter acquisition module, for extracting the position information of the globally moving vehicles from their 2D appearance features and combining it with the attitude information in the corresponding key model description parameters to form the position and attitude parameters of the vehicles;
a lossless compression module, for lossless-compressing the global feature parameter set and the coding coefficients of the three-layer knowledge dictionaries, the global feature parameter set consisting of the texture and key model description parameter information obtained in step S34 and the position and attitude parameters obtained in step S35;
(4) a global coding module, for performing global coding on the vehicle video based on the global feature parameters.
The above position and attitude parameter acquisition module further comprises:
a position and angle parameter determination module, for determining the position and angle parameters ρ = [x, y, θ]^T of a globally moving vehicle, where x, y are the vertical-projection coordinates of the vehicle centre in the world coordinate system and θ is the angle between the vehicle's main motion direction and the OX axis;
a motion region acquisition module, for extracting the motion regions in the vehicle video by background modeling;
a two-dimensional motion vector acquisition module, for obtaining the two-dimensional motion vector of the vehicle using a sparse optical flow method;
a main motion direction and speed acquisition module, for obtaining the main motion direction θ and the speed v of the vehicle in the world coordinate system;
a position parameter acquisition module, for iteratively matching the two-dimensional projection of the matched generic 3D model against the size and shape of the motion region to obtain the position parameters (x, y) of the vehicle in the world coordinate system.
The above global coding module further comprises:
a residual parameter information acquisition module, for performing motion estimation and compensation on the globally moving vehicles based on the global vehicle feature parameters to obtain the residual parameter information;
an illumination compensation parameter acquisition module, for obtaining the illumination compensation parameters of the globally moving vehicles;
a lossless coding module, for merging the residual parameter information with the illumination compensation parameters and lossless-coding them.
Based on the mechanism by which global object redundancy arises in urban surveillance video, and exploiting the facts that vehicle information occupies a large proportion of surveillance video and that vehicles are highly structured, similar in appearance and rich in texture, the present invention uses vehicle detection techniques to segment the original video into a vehicle video and a vehicle-removed video, which are encoded separately in different ways. For the vehicle video, a vehicle knowledge dictionary is established by sparse coding and related techniques, and a global feature parameter set is extracted; since encoding only describes features such as the texture and attitude of the moving vehicles, the video data of the globally moving vehicles is transformed into feature description data containing only a small amount of information, effectively eliminating the global redundancy of moving vehicles. The vehicle-removed video (including the background image and other moving objects) is encoded using the AVS-S2-based selective differential coding mode. On top of removing scene redundancy, the present invention further eliminates the global redundancy in surveillance video, effectively improving coding compression efficiency.
Brief description of the drawings
Fig. 1 is the flow chart of the method of the invention;
Fig. 2 is the flow chart of video segmentation;
Fig. 3 is the flow chart of the vehicle detection method;
Fig. 4 is the flow chart of global feature parameter extraction;
Fig. 5 illustrates the 2D appearance features of a vehicle, in which figure (a) is the 3D model of the vehicle and figure (b) shows samples of its 2D templates;
Fig. 6 shows a generic vehicle 3D model and a fine vehicle 3D model, in which figure (a) is the generic vehicle 3D model, figure (b) is the texture diagram of the generic vehicle 3D model, and figure (c) is the fine vehicle 3D model;
Fig. 7 illustrates the description of the position and attitude of a moving vehicle with spatial angle parameters;
Fig. 8 illustrates the extraction of vehicle position and attitude parameters;
Fig. 9 is the flow chart of global coding of the vehicle video.
Embodiment
To make the purpose, technical features and advantages of the invention clearer and easier to understand, the invention is further described below with reference to the accompanying drawings and specific embodiments.
In urban traffic surveillance video the monitored scene is relatively fixed, vehicle information occupies a large proportion, and vehicle motion produces a large amount of global redundancy. Based on these characteristics of urban traffic surveillance video and on the facts that vehicles are highly structured, similar in appearance and rich in texture, the invention provides a global coding method and system that encodes urban traffic surveillance video so as to remove both the scene redundancy and the global redundancy in the video.
First, the invention segments the original surveillance video into a vehicle video and a vehicle-removed video by video segmentation. Then, for the vehicle-removed video, the scene redundancy in the video is removed using the AVS-S2-based selective differential coding mode. Next, for the vehicle video, a vehicle knowledge dictionary is created by sparse coding, from which the global feature parameter set containing the position and attitude information and the texture and parameter information of the vehicles is generated, and global coding is performed on the global feature parameters. Since encoding only describes features such as the texture and attitude of the moving vehicles, the video data of the globally moving vehicles is transformed into feature description data containing only a small amount of information, effectively eliminating the global redundancy of moving vehicles and further improving coding efficiency.
Fig. 1 is the flow chart of the method of the invention. Referring to Fig. 1, the method proceeds as follows:
Step 1, video segmentation: the original surveillance video is segmented into a vehicle video and a vehicle-removed video.
Video segmentation can be realized by background modeling and vehicle detection techniques, dividing the original surveillance video into two parts: the vehicle video and the vehicle-removed video.
See Fig. 2; this step operates on the video images and further comprises the sub-steps:
S11, converting the original surveillance video images to YUV space and establishing an automatically updated background model based on background subtraction.
In this embodiment, the background model is established using the ViBe method (visual background extractor), but the modeling method of the background model is not limited to ViBe.
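As a rough illustration of the sample-based idea behind ViBe, the sketch below classifies a single pixel against its stored background samples. All parameter values are illustrative; the real ViBe additionally performs random sample replacement and spatial diffusion when updating the model.

```python
import numpy as np

def vibe_like_classify(samples, pixel, radius=20.0, min_matches=2):
    """A minimal ViBe-flavoured test for one pixel: the pixel is
    considered background if it lies within `radius` of at least
    `min_matches` of the stored background samples for that location."""
    matches = int(np.sum(np.abs(samples - pixel) <= radius))
    return matches >= min_matches

samples = np.array([100.0, 102.0, 98.0, 101.0, 99.0])  # history of one pixel
bg = vibe_like_classify(samples, 105.0)   # close to the samples -> background
fg = vibe_like_classify(samples, 200.0)   # far from all samples -> foreground
```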
S12, detecting vehicles in the original surveillance video images using a vehicle detection method, obtaining the vehicle video images.
The specific flow of this step is shown in Fig. 3 and includes:
(1) Applying Gaussian filtering to the original surveillance video images and detecting motion regions with the background model. Gaussian filtering removes Gaussian noise from the images and improves image quality, further ensuring the correctness of subsequent video processing.
(2) Choosing training samples, extracting their SIFT features (scale-invariant feature transform), and training a vehicle detector with an Adaboost classifier; the training samples are a series of original surveillance video images.
(3) Classifying the SIFT features of each motion region with the trained vehicle detector: if the proportion of SIFT features classified as vehicle among all SIFT features in the motion region exceeds a threshold R, the motion region is judged to be a vehicle; otherwise it is a non-vehicle region. The threshold R is set empirically.
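The ratio test of step (3) can be sketched as follows, assuming the per-feature classifier outputs are already available (the function name and the value of R are illustrative, not from the patent):

```python
def is_vehicle_region(feature_labels, R=0.5):
    """Decide whether a motion region is a vehicle. `feature_labels`
    holds, for each SIFT feature in the region, True if the trained
    detector classifies it as 'vehicle'. The region is a vehicle when
    the proportion of vehicle features exceeds the empirical threshold
    R (0.5 here is only an example value)."""
    if not feature_labels:
        return False                    # no features, no decision
    ratio = sum(feature_labels) / len(feature_labels)
    return ratio > R

labels = [True, True, True, False]      # 3 of 4 features look like a vehicle
decision = is_vehicle_region(labels, R=0.5)
```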
Preferably, after the motion regions are obtained, the motion shadows of the motion regions also need to be removed.
S13, subtracting the vehicle video images from the original surveillance video images, obtaining the vehicle-removed video images containing background holes.
S14, filling the background holes in the video images obtained in S13 by overlaying the background model, obtaining the vehicle-removed video images.
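Sub-steps S13 and S14 together amount to replacing the detected vehicle pixels with the background model, which can be sketched as follows (the function name and the toy frame are illustrative):

```python
import numpy as np

def remove_vehicles(frame, background, vehicle_mask):
    """S13 + S14 in one step: cut the detected vehicle regions out of
    the frame and fill the resulting background holes from the
    background model. `vehicle_mask` is a boolean array marking vehicle
    pixels, assumed to come from the vehicle detector of S12."""
    out = frame.copy()
    out[vehicle_mask] = background[vehicle_mask]  # overlay background on holes
    return out

frame = np.full((4, 4), 100.0)          # toy frame
background = np.full((4, 4), 50.0)      # toy background model
mask = np.zeros((4, 4), dtype=bool)
mask[1:3, 1:3] = True                   # a 2x2 "vehicle"
clean = remove_vehicles(frame, background, mask)
```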
Step 2, the vehicle-removed video is processed using the selective differential coding mode.
The selective differential coding mode is a conventional technique in the field of video coding: on the basis of a conventional hybrid coding standard scheme (such as H.264), prediction from background frames is added, extending the coding prediction modes with background-reference prediction and difference prediction. When the block to be coded is a background block, it is predicted from the reference background so that the residual is small; if the block to be coded is a foreground-background mixed block, the difference prediction mode is used, i.e. prediction is performed on the foreground part left after the background has been subtracted; a pure foreground block continues to use the traditional neighbour prediction mode. In principle, a foreground-background mixed block may also use the traditional neighbour prediction mode or the difference prediction mode.
In this embodiment, the vehicle-removed video is encoded using the AVS-S2-based selective differential coding mode. Targeting the long-term stability of surveillance scenes, this mode models the surveillance background and foreground to remove the large amount of "scene redundancy"; its coding efficiency is twice that of H.264/AVC coding. For each macroblock of a P frame, besides the existing coding modes, the AVS-S2-based selective differential coding mode may also selectively use "the difference between the nearest reference frame and the background image" to predictively encode "the background-difference result corresponding to the current macroblock".
This step encodes the vehicle-removed video and further comprises the following sub-steps:
S21, background modeling:
a background image is generated by modeling from the video images and reconstructed after encoding.
S22, global motion estimation:
global motion estimation at pixel or sub-pixel precision is performed on the vehicle-removed video images, obtaining the global motion vectors.
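A pixel-precision version of S22 can be sketched as an exhaustive search for the single translation that best aligns two frames; the patent also allows sub-pixel precision, which is omitted here, and all names are illustrative:

```python
import numpy as np

def global_motion(prev, cur, max_shift=2):
    """Estimate one translational global motion vector (dx, dy) at
    pixel precision by exhaustive search over small shifts, minimising
    the mean squared difference over the overlapping area."""
    best, best_err = (0, 0), np.inf
    h, w = prev.shape
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            # prev(y, x) aligns with cur(y + dy, x + dx)
            a = prev[max(0, -dy):h + min(0, -dy), max(0, -dx):w + min(0, -dx)]
            b = cur[max(0, dy):h + min(0, dy), max(0, dx):w + min(0, dx)]
            err = float(np.mean((a - b) ** 2))
            if err < best_err:
                best, best_err = (dx, dy), err
    return best

prev = np.zeros((8, 8)); prev[2:5, 2:5] = 1.0
cur = np.zeros((8, 8)); cur[3:6, 3:6] = 1.0   # scene shifted by (1, 1)
mv = global_motion(prev, cur)
```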
S23, coding mode selection:
based on the reconstructed background image and the global motion vectors, each video block is selectively encoded using either the original coding mode or the differential coding mode. A background block is predicted from the reference background so that the residual is small; a foreground-background mixed block uses the difference prediction mode, predicting the foreground part left after the background has been subtracted; a pure foreground block continues to use the traditional neighbour prediction mode.
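The mode selection of S23 can be sketched by comparing residual energies. The sketch below collapses the patent's background / difference / neighbour classification into a simple two-way choice using the sum of squared residuals as a rate proxy; this decision rule is an assumption for illustration, not the patent's actual procedure:

```python
import numpy as np

def select_block_mode(block, bg_block, neighbour_pred):
    """Pick, for one block, the prediction whose residual has the least
    energy: prediction from the co-located background-model block, or
    traditional neighbour prediction."""
    residuals = {
        "background_prediction": block - bg_block,
        "neighbour_prediction": block - neighbour_pred,
    }
    mode = min(residuals, key=lambda k: float(np.sum(residuals[k] ** 2)))
    return mode, residuals[mode]

block = np.full((4, 4), 60.0)
bg = np.full((4, 4), 60.0)              # block matches the background model
neigh = np.full((4, 4), 90.0)           # neighbour prediction is far off
mode, res = select_block_mode(block, bg, neigh)
```

A block identical to the background model yields a zero residual under background prediction, which is exactly why background blocks are cheap to code in this scheme.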
Step 3, extracting the global feature parameters of the globally moving vehicles in the vehicle video.
See Fig. 4; this step further comprises the sub-steps:
S31, extracting the 2D appearance features of the globally moving vehicles from the vehicle video.
For each globally moving vehicle in the vehicle video obtained in step 1, its 2D appearance features are extracted as follows: for a given vehicle model, its 2D image contours under different viewpoints are pre-computed. For example, for a certain sedan, the 360-degree range of vehicle headings is quantized into 72 sections and the 90-degree elevation range into 19 sections, giving 1368 2D shape templates in total. Fig. 5 illustrates the 2D appearance features of a vehicle: figure (a) is the 3D model of the car; figure (b) shows samples of its 2D templates, where rows 1 to 3 correspond to camera elevation angles of 0, 15 and 30 degrees, and columns 1 to 4 correspond to vehicle headings of 0, 30, 90 and 120 degrees.
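The viewpoint quantization above (72 heading sections over 360 degrees, 19 elevation levels over 90 degrees, 1368 templates) can be sketched as an index computation. The patent gives only the section counts; the uniform 5-degree spacing assumed below is consistent with them:

```python
def template_index(azimuth_deg, elevation_deg):
    """Map a (heading, elevation) viewpoint to an index into the
    1368-entry template bank: 72 azimuth bins of 5 degrees over 0..360
    and 19 elevation levels at 5-degree steps over 0..90."""
    az_bin = int(azimuth_deg // 5) % 72         # 0..71
    el_bin = min(int(elevation_deg // 5), 18)   # 0..18
    return el_bin * 72 + az_bin

n_templates = 19 * 72                   # 1368, matching the patent's count
idx = template_index(30.0, 15.0)        # heading 30 deg, elevation 15 deg
```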
S32, establishing the vehicle 3D model database.
When establishing the vehicle 3D model database, vehicles are classified by brand and model.
The generic 3D models and fine 3D models of vehicles are established, together constituting the vehicle 3D model database. A vehicle 3D model consists of five main parts: the vehicle body and the four wheels. It is built specifically from CAD models, is composed of meshes, and stores the coordinates of each mesh vertex and the mesh face indices. Components such as windows and lights, being highly recognizable and discriminative, play an important role in describing vehicle features and distinguishing vehicle models; such components are called the key components of a vehicle, and they are represented at different levels of detail in the generic 3D models and the fine 3D models.
This step is embodied as follows:
(1) Establishing the generic vehicle 3D models.
A generic vehicle 3D model is represented by a quadrilateral mesh; see Fig. 6(a), which shows the generic 3D model of an Audi Q7 vehicle. A quadrilateral mesh is concise and general, but its boundaries do not fully coincide with the boundaries of the vehicle's key components; the key components are therefore represented by two-dimensional closed curves attached to the model. The contour lines of all key components are shown on the texture diagram of the generic 3D model; see Fig. 6(b).
(2) Obtaining the fine vehicle 3D models.
Before each vehicle model is released, its fine CAD-based 3D model already exists and can be downloaded from relevant websites. In the fine 3D models, to improve the recognizability of the vehicle model, the key components are not represented only by contour lines; the appearance features of each component are also retained. Fig. 6(c) shows the fine mesh 3D model of the Audi Q7 vehicle including its key components.
(3) Matching the generic vehicle 3D models to the globally moving vehicles in the vehicle video based on the generic 3D model parameters.
The description parameter set of a generic 3D model is obtained from the generic model itself; matching between a generic vehicle 3D model and a globally moving vehicle in the vehicle video is realized through these description parameters, and by adjusting the generic 3D model description parameters the generic model is optimally fitted to the globally moving vehicle. Matching a generic vehicle 3D model to a globally moving vehicle is a conventional technique in this technical field. In this embodiment the generic 3D model description parameters comprise 30 parameters such as the wheelbase, front width and engine-hood height of the vehicle.
Through this matching, when the video images are restored, the generic 3D vehicle model matched to a globally moving vehicle is placed at the actual position of that vehicle in the video.
(4) Dimensionality reduction of the generic 3D model description parameters
Principal component analysis (PCA) is applied to the 30 generic 3D model description parameters to obtain the key description parameters. Fewer parameters mean simpler computation and better robustness to noise and low-quality imagery; more parameters let the model express vehicle detail more finely and match the actual vehicle more closely, so a balance between the two is needed. Leotta showed experimentally that the first 6 PCA principal components express the vehicle model well while effectively reducing the amount of computation. The reduced generic 3D model description parameters, i.e. the key description parameters, are p=[p1,p2,p3,p4,p5,p6]T.
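The PCA reduction described above can be sketched as follows. This is a minimal numpy sketch on synthetic data: the 30-parameter vectors and the choice of 6 components follow the text, while the sample count and all variable names are illustrative, not part of the patent.

```python
import numpy as np

# Toy stand-in for the 30 generic 3D model description parameters of many
# matched vehicles; PCA keeps the first 6 principal components as the key
# description parameters p = [p1, ..., p6]^T.
rng = np.random.default_rng(0)
params = rng.normal(size=(200, 30))        # 200 matched vehicles x 30 parameters
mean = params.mean(axis=0)
centered = params - mean

# Principal axes from the SVD of the centered data matrix.
_, _, vt = np.linalg.svd(centered, full_matrices=False)
components = vt[:6]                        # first 6 principal components (6 x 30)

p = components @ (params[0] - mean)        # key description parameters of vehicle 0
reconstructed = mean + components.T @ p    # approximate 30-dim parameters from p
```

The decoder-side use is the last line: from the 6 transmitted key parameters and the shared PCA basis, an approximation of the full 30-parameter description is recovered.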
S33 Building the global vehicle texture dictionary
The purpose of building the global vehicle texture dictionary is that, after the generic 3D vehicle model matched to a global-motion vehicle has been placed at the corresponding actual position in the video, the texture of the generic 3D vehicle model can be reconstructed from the global vehicle texture dictionary.
Based on the global vehicle extraction and recognition results, a global vehicle texture dictionary is built by sparse coding for each global-motion vehicle in the vehicle video; the global vehicle texture dictionaries, together with the vehicle 3D model database, form the vehicle knowledge dictionary.
This step first builds a knowledge base of the common visual texture information of the global vehicles; then builds the individual three-dimensional structure and texture information knowledge dictionary for each vehicle type; and finally builds the long-term individual update-information knowledge dictionary for each individual vehicle. During encoding, only features such as the texture and pose of the moving vehicles are described, so the video data of the global-motion vehicles is transformed into feature description data containing only a small amount of information; this effectively removes the global redundancy of the moving vehicles and further improves coding efficiency.
The embodiment of the three-layer knowledge dictionary construction in this step is as follows:
(1) Building the first-layer knowledge dictionary, i.e. the common visual texture information knowledge dictionary of all types of global-motion vehicles.
The common visual texture information knowledge dictionary of all types of moving vehicles is built by sparse coding; the cost function is as follows:
min over D1, a1 of Σ_{c=1..C} ( ‖yc − D1a1‖₂² + τ‖a1‖₁ ) (1)
Wherein D1 denotes the first-layer knowledge dictionary; C denotes the number of global-motion vehicle types and c the type number; yc denotes the common visual texture information of the global-motion vehicles, i.e. the texture information of all global-motion vehicles of a type; a1 denotes the coding coefficients; τ is a balance factor set according to actual conditions and experience, and the larger τ is, the sparser the coding coefficients.
Cost function (1) measures the sparsity; its constraint term prevents overfitting and also reduces the number of non-zero elements in the coding coefficients.
(2) Building the second-layer knowledge dictionary, i.e. the three-dimensional structure and texture individual information knowledge dictionary of each type of global-motion vehicle.
The three-dimensional structure and texture individual information knowledge dictionary of each type of global-motion vehicle is built by sparse coding, specifically:
First, the difference information rc between each class of global-motion vehicle reconstructed by the first-layer knowledge dictionary and the original global-motion vehicle is extracted:
rc = yc − D1a1 (2)
Secondly, a second-layer knowledge dictionary is built separately for each type of global-motion vehicle; the cost function is as follows:
min over D2,c, a2,c of Σ_{m=1..M} ( ‖rc − D2,c a2,c‖₂² + τ‖a2,c‖₁ ) (3)
Wherein D2,c denotes the second-layer knowledge dictionary, M denotes the number of individual vehicles of a given global-motion vehicle type, m denotes the individual vehicle number within that type, and a2,c denotes the coding coefficients.
(3) Building the third-layer knowledge dictionary, i.e. the long-term individual update-information knowledge dictionary of each individual global-motion vehicle.
The long-term individual update-information knowledge dictionary of each individual moving vehicle is built by sparse coding, specifically:
First, the difference information rc,m between each individual vehicle reconstructed by the second-layer knowledge dictionary and the original global-motion vehicle is extracted:
rc,m = rc − D2,c a2,c (4)
Secondly, a third-layer knowledge dictionary is built separately for each individual vehicle of each global-motion vehicle type; the cost function is as follows:
min over D3,c,m, a3,c,m of Σ_{i=1..N} ( ‖rc,m − D3,c,m a3,c,m‖₂² + τ‖a3,c,m‖₁ ) (5)
Wherein D3,c,m denotes the third-layer knowledge dictionary, N denotes the number of individual vehicles of the given global-motion vehicle type, and i denotes the individual vehicle number within that type; a3,c,m denotes the coding coefficients.
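The residual cascade behind cost functions (1)–(5) can be sketched as follows. This is a toy numpy sketch: an ISTA solver stands in for whatever sparse-coding solver the embodiment actually uses, and fixed random dictionaries replace the learned D1, D2,c and D3,c,m; only the layered difference-information structure mirrors the text.

```python
import numpy as np

def sparse_code(D, y, tau, n_iter=300):
    """ISTA solver for min_a ||y - D a||_2^2 + tau * ||a||_1."""
    L = 2 * np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        g = a - 2 * (D.T @ (D @ a - y)) / L    # gradient step
        a = np.sign(g) * np.maximum(np.abs(g) - tau / L, 0.0)  # soft-threshold step
    return a

rng = np.random.default_rng(0)
y = rng.normal(size=64)                        # texture vector y_c of one class (toy data)

def random_dictionary(n_atoms=32):
    D = rng.normal(size=(64, n_atoms))
    return D / np.linalg.norm(D, axis=0)       # unit-norm atoms

D1 = random_dictionary()                       # stands in for the learned layer-1 dictionary
a1 = sparse_code(D1, y, tau=0.1)
r_c = y - D1 @ a1                              # layer-1 difference information, eq. (2)

D2 = random_dictionary()                       # stands in for the learned layer-2 dictionary
a2 = sparse_code(D2, r_c, tau=0.1)
r_cm = r_c - D2 @ a2                           # layer-2 difference information, eq. (4)
```

Each layer encodes only what the previous layers could not explain, so the residual energy shrinks layer by layer; the third layer would encode r_cm in the same way.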
Vehicle video coding is handled through the global vehicle feature expression based on the three-layer knowledge dictionary. On the one hand, because the three-layer knowledge dictionary varies only slowly over time, the video data of the global-motion vehicles can be transformed into feature description information that contains only a small amount of the information in the original video data. On the other hand, in urban public-safety applications the surveillance camera positions are relatively fixed, so the background information is relatively fixed; the transmission frequency of the background information can therefore be reduced, for example to one background frame per 100 frames, with only a small amount of feature description information transmitted per frame. The decoder then completes reconstruction using the three-layer knowledge dictionary, so that the coding efficiency for video big data is substantially improved.
S34 Obtaining the texture and model parameter information of the global-motion vehicle.
The 2D appearance features of the global-motion vehicle are matched against the vehicle 3D models to obtain the texture and model parameter information of the global-motion vehicle.
The feature matching proceeds as follows:
(1) based on the vehicle 3D model database, pre-generate and index the 2D orthographic appearance masks for all quantized viewpoints of the vehicle (see step S31);
(2) for each detected vehicle rectangle, evaluate and select the parameters (type, tilt angle, orientation) that best match the foreground model through region-based matching and contour matching.
S35 Extracting the global-motion vehicle position information from the 2D appearance features of the global-motion vehicle, combined with the corresponding vehicle pose description in the vehicle 3D model, to form the position and pose parameters of the global-motion vehicle.
This step further comprises the sub-steps:
(1) Extract the feature information of each global-motion vehicle in the vehicle video.
The feature information Fi of the i-th global-motion vehicle comprises six parameters, the spatial position (x, y, z) and the pose angles (α, β, γ); see Fig. 8:
Fi=[xi,yi,zi,αi,βi,γi] (6)
The first step, parameter determination:
It is assumed that the static surveillance camera follows the perspective projection principle, and that the camera calibration and the ground-plane parameters of the monitored area are obtained by offline preprocessing. In the general case, the target pose is described by 3 position parameters (x, y, z) and 3 angle parameters (α, β, γ). In a driving scenario, however, the vehicle can be assumed to move mainly on the ground plane; using the ground-plane constraint, the vehicle pose parameters ρ can be reduced to 3:
ρ=[x, y, θ]T (7)
Wherein x, y are the vertical projection coordinates of the vehicle center on the world coordinate system (WCS) whose XOY plane is the ground plane, and θ is the angle between the vehicle's main motion direction and the OX axis.
The second step, obtaining the pose parameters of the global-motion vehicle:
This step mainly comprises optical-flow-based initialization of the vehicle pose parameters and predictive-tracking-based updating of the vehicle pose parameters. First, the moving region is extracted by background modeling based on the vehicle 3D model; then the two-dimensional motion vector of the vehicle is computed by the sparse optical-flow method and, combined with the camera calibration result, the vehicle's main motion direction θ and speed v in the world coordinate system are obtained; finally, the location parameters (x, y) of the vehicle in the world coordinate system are obtained by iteratively matching the 2D projection of the vehicle 3D model to the size and shape of the moving region.
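The second step above can be sketched for the (θ, v) part as follows. This minimal sketch assumes the sparse optical-flow vectors have already been mapped onto the ground plane in world coordinates via the offline camera calibration; the actual flow computation and calibration are outside the sketch, and all data is synthetic.

```python
import numpy as np

def direction_and_speed(flow_world, dt):
    """Estimate the main motion direction theta (radians, measured against the
    OX axis) and the speed v from sparse optical-flow displacement vectors
    already expressed in ground-plane world coordinates."""
    mean_flow = flow_world.mean(axis=0)             # average displacement per frame
    theta = np.arctan2(mean_flow[1], mean_flow[0])  # angle to the OX axis
    v = np.linalg.norm(mean_flow) / dt              # ground-plane speed
    return theta, v

# Toy example: a vehicle moving along +X at 10 units/s, observed at 25 fps,
# with slightly noisy flow vectors at 20 tracked feature points.
flow = np.tile([0.4, 0.0], (20, 1)) + np.random.default_rng(1).normal(0, 0.01, (20, 2))
theta, v = direction_and_speed(flow, dt=1 / 25)
```

Averaging over many tracked points makes the estimate robust to per-point flow noise, which matters for the subsequent predictive-tracking update.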
(2) Extract the index information of the primitives in the three-level knowledge dictionary used to reconstruct the global-motion vehicle, i.e. the coding coefficient vector IDi=[a1, a2,c, a3,c,m].
(3) During coding and transmission, the feature information Fi and the coding coefficient vector IDi are losslessly compressed, specifically by entropy coding, which effectively preserves the key information of the global-motion vehicle.
The position and pose parameters obtained in step S35 and the texture and model parameter information obtained in step S34 together constitute the global vehicle feature parameter set, and the global vehicle feature parameter set is losslessly compressed.
Step 4 performs global coding of the vehicle video based on the global vehicle feature parameters.
The detailed flow of this step is shown in Fig. 9, as follows:
S41 Motion estimation and motion compensation are performed on the global-motion vehicle based on the global feature parameters to obtain the residual parameter information.
Multi-reference-frame motion estimation with 1/4 or 1/8 pixel precision and variable-block-size motion compensation are used; specifically, the motion estimation and motion compensation methods of MPEG-1/2/4, H.263, H.264/AVC, H.265/HEVC or AVS may be used, but the method is not limited to these.
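The core of S41 can be sketched as an integer-pixel full-search block matcher. The sub-pixel (1/4, 1/8) precision, multiple reference frames and variable block sizes of the standards named above are omitted here; block size, search range and the test data are illustrative.

```python
import numpy as np

def block_motion_search(ref, cur, bx, by, bs=8, sr=4):
    """Full-search block matching: find the motion vector (dy, dx) within
    +/- sr pixels that minimises the SAD between the current block and the
    reference frame."""
    block = cur[by:by + bs, bx:bx + bs].astype(int)
    best, best_mv = None, (0, 0)
    for dy in range(-sr, sr + 1):
        for dx in range(-sr, sr + 1):
            y0, x0 = by + dy, bx + dx
            if y0 < 0 or x0 < 0 or y0 + bs > ref.shape[0] or x0 + bs > ref.shape[1]:
                continue                      # candidate block outside the frame
            sad = np.abs(ref[y0:y0 + bs, x0:x0 + bs].astype(int) - block).sum()
            if best is None or sad < best:
                best, best_mv = sad, (dy, dx)
    return best_mv, best

rng = np.random.default_rng(2)
ref = rng.integers(0, 255, (32, 32), dtype=np.uint8)
cur = np.roll(ref, shift=(1, 2), axis=(0, 1))  # current frame shifted by (1, 2)
mv, sad = block_motion_search(ref, cur, bx=8, by=8)
```

The residual parameter information of the text is then the difference between the current block and the motion-compensated reference block; for a pure shift, as here, that residual is zero.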
S42 Obtaining the illumination compensation parameters of the global-motion vehicle.
Because illumination variation strongly affects the coding and recovery of the vehicle video, this embodiment determines the illumination compensation parameters as follows:
(1) Color-invariant features are established from the UV components of the YUV color space, and various vehicle samples are taken from the vehicle video to form a sample set.
(2) N sample points are drawn from the sample set by random sampling; N is typically set to 1/4 of the sample-set size, preferably N > 50.
(3) The illumination compensation parameters are obtained from the sample points.
The sample points may be obtained by screening with color features, or with other single or combined features such as gradient features or wavelet features; any method that obtains the illumination compensation parameters from image-feature screening results falls within the scope of the present invention.
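One common form of step (3) is a gain/offset model fitted to the sampled points, sketched below. This assumes a simple linear illumination model (cur ≈ gain · ref + offset) on the luminance channel; the UV-based color-invariant screening described in the text is not reproduced, and the random sampling here plays the role of the N sample points.

```python
import numpy as np

def illumination_params(ref_y, cur_y, n_samples=64, seed=0):
    """Estimate gain/offset illumination compensation parameters between two
    frames from randomly sampled luminance points, by least squares on
    cur ~= gain * ref + offset."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(ref_y.size, size=n_samples, replace=False)
    x = ref_y.ravel()[idx].astype(float)
    y = cur_y.ravel()[idx].astype(float)
    A = np.stack([x, np.ones_like(x)], axis=1)       # design matrix [ref, 1]
    (gain, offset), *_ = np.linalg.lstsq(A, y, rcond=None)
    return gain, offset

ref = np.random.default_rng(3).uniform(50, 200, (48, 48))
cur = 1.2 * ref + 5.0                                 # synthetic global illumination change
g, o = illumination_params(ref, cur)
```

Transmitting only (gain, offset) per vehicle region lets the decoder undo a global illumination change without any extra residual bits for it.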
S43 The residual parameter information and the illumination compensation parameters are fused, and the fused residual parameter information and illumination compensation parameters are sent to the decoder using entropy coding, in order to recover the video image.
The residual parameter information and the illumination compensation parameters are fused, and the fused data is losslessly encoded by entropy coding. The entropy coding may use traditional variable-length coding or arithmetic coding; embodiments include but are not limited to CAVLC (context-adaptive variable-length coding), CABAC (context-adaptive binary arithmetic coding), C2DVLC (context-adaptive two-dimensional variable-length coding) and CBAC (context-based binary arithmetic coding).
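As a small concrete instance of variable-length entropy coding, the order-0 Exp-Golomb code (the universal code underlying H.264-style variable-length coding) is sketched below; CAVLC and CABAC themselves are context-adaptive and far more involved, so this is only a minimal illustration of the prefix-code idea.

```python
def exp_golomb_encode(n):
    """Order-0 Exp-Golomb code for a non-negative integer:
    [leading zeros][1][info bits], i.e. the binary form of n+1 preceded by
    as many zeros as it has bits after the leading 1."""
    code = bin(n + 1)[2:]                 # binary representation of n + 1
    return "0" * (len(code) - 1) + code

def exp_golomb_decode(bits):
    """Inverse of exp_golomb_encode for a single codeword."""
    zeros = len(bits) - len(bits.lstrip("0"))   # count the zero prefix
    return int(bits[zeros:2 * zeros + 1], 2) - 1

codes = [exp_golomb_encode(n) for n in range(5)]
# → ["1", "010", "011", "00100", "00101"]
```

Small values get short codewords, which suits residual and compensation parameters whose distributions are concentrated near zero.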
The present invention exploits the fact that urban surveillance video scenes are fixed and contain a large amount of global redundancy: by compressing the vehicle objects and the remainder of the video separately, more of the redundancy in the video sequence is eliminated and better compression performance is obtained.
The specific embodiments described herein merely illustrate the spirit of the present invention. Those skilled in the art may make various modifications or supplements to the described embodiments, or substitute them in a similar manner, without departing from the spirit of the present invention or exceeding the scope defined by the appended claims.
Claims (7)
1. A global coding method for urban traffic surveillance video, characterized by comprising the steps:
Step 1, dividing the original surveillance video into a vehicle video and a vehicle-removed video;
Step 2, encoding the vehicle-removed video using a selectable differential coding mode;
Step 3, extracting the global feature parameter set of the global-motion vehicles in the vehicle video, further comprising:
S31 extracting the 2D appearance features of the global-motion vehicles;
S32 building a vehicle 3D model database, the vehicle 3D model database comprising generic 3D models, fine 3D models and model key description parameter sets of various vehicle types, the model key description parameters being obtained by dimensionality reduction of the model description parameter set;
S33 building the global vehicle texture dictionary of the global-motion vehicles by sparse coding, further comprising:
taking min over D1, a1 of Σ_{c=1..C} ( ‖yc − D1a1‖₂² + τ‖a1‖₁ ) as the cost function and, based on the texture information of the global-motion vehicles, obtaining the first-layer knowledge dictionary, i.e. the common visual texture information knowledge dictionary of all classes of global-motion vehicles;
obtaining the difference information rc = yc − D1a1 between each class of global-motion vehicle reconstructed by the first-layer knowledge dictionary and the original global-motion vehicle, taking min over D2,c, a2,c of Σ_{m=1..M} ( ‖rc − D2,c a2,c‖₂² + τ‖a2,c‖₁ ) as the cost function and, based on the difference information rc, obtaining the second-layer knowledge dictionary, i.e. the three-dimensional structure and texture individual information knowledge dictionary of each class of global-motion vehicle;
obtaining the difference information rc,m = rc − D2,c a2,c between each individual global-motion vehicle reconstructed by the second-layer knowledge dictionary and the original global-motion vehicle, taking min over D3,c,m, a3,c,m of Σ_{i=1..N} ( ‖rc,m − D3,c,m a3,c,m‖₂² + τ‖a3,c,m‖₁ ) as the cost function and, based on the difference information rc,m, obtaining the third-layer knowledge dictionary, i.e. the long-term individual update-information knowledge dictionary of each individual global-motion vehicle;
in the above, D1 denotes the first-layer knowledge dictionary; C denotes the number of global-motion vehicle types and c denotes the global-motion vehicle type number; yc denotes the texture information of all global-motion vehicles of a type; a1 denotes the coding coefficients; τ is a balance factor set according to actual conditions and experience, and the larger τ is, the sparser the coding coefficients; D2,c denotes the second-layer knowledge dictionary, M denotes the number of individual vehicles of a given global-motion vehicle type, m denotes the individual vehicle number within that type, and a2,c denotes the coding coefficients; D3,c,m denotes the third-layer knowledge dictionary, N denotes the number of individual vehicles of the given global-motion vehicle type, i denotes the individual vehicle number within that type, and a3,c,m denotes the coding coefficients;
S34 performing feature matching between the 2D appearance features of the global-motion vehicle and the models in the vehicle 3D model database, to obtain the texture and model key description parameter information of the global-motion vehicle;
S35 extracting the position information of the global-motion vehicle from the 2D appearance features of the global-motion vehicle, and combining it with the pose information in the corresponding model key description parameter information to form the position and pose parameters of the global-motion vehicle;
S36 losslessly compressing the global feature parameter set and the coding coefficients of the three-level knowledge dictionary, said global feature parameter set consisting of the texture and model key description parameter information obtained in step S34 and the position and pose parameters obtained in step S35;
Step 4, performing global coding on the vehicle video based on the global feature parameters;
Step 4 further comprising the sub-steps:
S41 performing motion estimation and motion compensation on the global-motion vehicle based on the global vehicle feature parameters, to obtain the residual parameter information;
S42 obtaining the illumination compensation parameters of the global-motion vehicle;
S43 fusing the residual parameter information and the illumination compensation parameters, and losslessly encoding the residual parameter information and the illumination compensation parameters.
2. The global coding method for urban traffic surveillance video of claim 1, characterized in that:
in step 1 the original surveillance video is divided into the vehicle video and the vehicle-removed video using background modeling and vehicle detection techniques, specifically comprising:
S11 converting the original surveillance video images to YUV space and building an automatically updated background model based on background subtraction;
S12 detecting the vehicles in the original surveillance video images using a vehicle detection method, to obtain the vehicle video images;
S13 subtracting the vehicle video images from the original surveillance video images, to obtain vehicle-removed video images containing background holes;
S14 filling the background holes in the video images obtained in S13 by overlaying the background model, to obtain the vehicle-removed video images.
3. The global coding method for urban traffic surveillance video of claim 1, characterized in that step 2 further comprises the sub-steps:
S21 generating a background image from the vehicle-removed video, and reconstructing the background image after encoding;
S22 performing global motion estimation on the vehicle-removed video images, to obtain global motion vectors;
S23 based on the reconstructed background image and the global motion vectors, encoding each video block selectably using either the original coding mode or the differential coding mode.
4. The global coding method for urban traffic surveillance video of claim 1, characterized in that sub-step S32 further comprises:
(1) building a network-based generic vehicle 3D model;
(2) obtaining the fine vehicle 3D models;
(3) obtaining the 3D model description parameter set from the generic vehicle 3D model;
(4) obtaining the model key description parameters by dimensionality reduction of the model description parameter set.
5. The global coding method for urban traffic surveillance video of claim 1, characterized in that sub-step S35 further comprises:
(1) determining the position and angle parameters ρ=[x, y, θ]T of the global-motion vehicle, where x, y are the vertical projection coordinates of the global-motion vehicle center in the world coordinate system and θ is the angle between the main motion direction of the global-motion vehicle and the OX axis;
(2) extracting the moving region in the vehicle video by background modeling;
(3) obtaining the two-dimensional motion vector of the global-motion vehicle by the sparse optical-flow method;
(4) obtaining the main motion direction θ and the speed v of the global-motion vehicle in the world coordinate system;
(5) iteratively matching the 2D projection of the generic 3D model matched to the global-motion vehicle with the size and shape of the moving region, to obtain the location parameters (x, y) of the global-motion vehicle in the world coordinate system.
6. A global coding system for urban traffic surveillance video, characterized by comprising:
(1) a video segmentation module for dividing the original surveillance video into a vehicle video and a vehicle-removed video;
(2) a selectable differential coding module for encoding the vehicle-removed video using a selectable differential coding mode;
(3) a global feature parameter extraction module for extracting the global feature parameter set of the global-motion vehicles in the vehicle video, this module further comprising the submodules:
a 2D appearance feature extraction module for extracting the 2D appearance features of the global-motion vehicles;
a vehicle 3D model database building module for building a vehicle 3D model database, the vehicle 3D model database comprising generic 3D models, fine 3D models and model key description parameter sets of various vehicle types, the model key description parameters being obtained by dimensionality reduction of the model description parameter set;
a global vehicle texture dictionary building module for building the global vehicle texture dictionary of the global-motion vehicles by sparse coding, further comprising:
taking min over D1, a1 of Σ_{c=1..C} ( ‖yc − D1a1‖₂² + τ‖a1‖₁ ) as the cost function and, based on the texture information of the global-motion vehicles, obtaining the first-layer knowledge dictionary, i.e. the common visual texture information knowledge dictionary of all classes of global-motion vehicles;
obtaining the difference information rc = yc − D1a1 between each class of global-motion vehicle reconstructed by the first-layer knowledge dictionary and the original global-motion vehicle, taking min over D2,c, a2,c of Σ_{m=1..M} ( ‖rc − D2,c a2,c‖₂² + τ‖a2,c‖₁ ) as the cost function and, based on the difference information rc, obtaining the second-layer knowledge dictionary, i.e. the three-dimensional structure and texture individual information knowledge dictionary of each class of global-motion vehicle;
obtaining the difference information rc,m = rc − D2,c a2,c between each individual global-motion vehicle reconstructed by the second-layer knowledge dictionary and the original global-motion vehicle, taking min over D3,c,m, a3,c,m of Σ_{i=1..N} ( ‖rc,m − D3,c,m a3,c,m‖₂² + τ‖a3,c,m‖₁ ) as the cost function and, based on the difference information rc,m, obtaining the third-layer knowledge dictionary, i.e. the long-term individual update-information knowledge dictionary of each individual global-motion vehicle;
in the above, D1 denotes the first-layer knowledge dictionary; C denotes the number of global-motion vehicle types and c denotes the global-motion vehicle type number; yc denotes the texture information of all global-motion vehicles of a type; a1 denotes the coding coefficients; τ is a balance factor set according to actual conditions and experience, and the larger τ is, the sparser the coding coefficients; D2,c denotes the second-layer knowledge dictionary, M denotes the number of individual vehicles of a given global-motion vehicle type, m denotes the individual vehicle number within that type, and a2,c denotes the coding coefficients; D3,c,m denotes the third-layer knowledge dictionary, N denotes the number of individual vehicles of the given global-motion vehicle type, i denotes the individual vehicle number within that type, and a3,c,m denotes the coding coefficients;
a texture and model key description parameter information obtaining module for performing feature matching between the 2D appearance features of the global-motion vehicle and the models in the vehicle 3D model database, to obtain the texture and model key description parameter information of the global-motion vehicle;
a position and pose parameter obtaining module for extracting the position information of the global-motion vehicle from the 2D appearance features of the global-motion vehicle and combining it with the pose information in the corresponding model key description parameter information to form the position and pose parameters of the global-motion vehicle;
a lossless compression module for losslessly compressing the global feature parameter set and the coding coefficients of the three-level knowledge dictionary, said global feature parameter set consisting of the texture and model key description parameter information obtained in step S34 and the position and pose parameters obtained in step S35;
(4) a global coding module for performing global coding on the vehicle video based on the global feature parameters;
the global coding module further comprising the submodules:
a residual parameter information obtaining module for performing motion estimation and motion compensation on the global-motion vehicle based on the global vehicle feature parameters, to obtain the residual parameter information;
an illumination compensation parameter obtaining module for obtaining the illumination compensation parameters of the global-motion vehicle;
a lossless coding module for fusing the residual parameter information and the illumination compensation parameters, and losslessly encoding the residual parameter information and the illumination compensation parameters.
7. The global coding system for urban traffic surveillance video of claim 6, characterized in that said position and pose parameter obtaining module further comprises:
a position and angle parameter determining module for determining the position and angle parameters ρ=[x, y, θ]T of the global-motion vehicle, where x, y are the vertical projection coordinates of the global-motion vehicle center in the world coordinate system and θ is the angle between the main motion direction of the global-motion vehicle and the OX axis;
a moving region obtaining module for extracting the moving region in the vehicle video by background modeling;
a two-dimensional motion vector obtaining module for obtaining the two-dimensional motion vector of the global-motion vehicle by the sparse optical-flow method;
a main motion direction and speed obtaining module for obtaining the main motion direction θ and the speed v of the global-motion vehicle in the world coordinate system;
a location parameter obtaining module for iteratively matching the 2D projection of the generic 3D model matched to the global-motion vehicle with the size and shape of the moving region, to obtain the location parameters (x, y) of the global-motion vehicle in the world coordinate system.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201410616965.3A CN104301735B (en) | 2014-10-31 | 2014-10-31 | The overall situation coding method of urban transportation monitor video and system |
Publications (2)
Publication Number | Publication Date |
---|---|
CN104301735A CN104301735A (en) | 2015-01-21 |
CN104301735B true CN104301735B (en) | 2017-09-29 |
Family
ID=52321268
Families Citing this family (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106210612A (en) | 2015-04-30 | 2016-12-07 | 杭州海康威视数字技术股份有限公司 | Method for video coding, coding/decoding method and device thereof |
CN105427583B (en) * | 2015-11-27 | 2017-11-07 | 浙江工业大学 | A kind of highway traffic data compression method encoded based on LZW |
CN107404653B (en) * | 2017-05-23 | 2019-10-18 | 南京邮电大学 | A kind of Parking rapid detection method of HEVC code stream |
JP7197575B2 (en) * | 2018-06-08 | 2022-12-27 | パナソニック インテレクチュアル プロパティ コーポレーション オブ アメリカ | Three-dimensional data encoding method, three-dimensional data decoding method, three-dimensional data encoding device, and three-dimensional data decoding device |
CN108898842A (en) * | 2018-07-02 | 2018-11-27 | 武汉大学深圳研究院 | A kind of high efficiency encoding method and its system of multi-source monitor video |
CN108833928B (en) * | 2018-07-03 | 2020-06-26 | 中国科学技术大学 | Traffic monitoring video coding method |
CN109447037B (en) * | 2018-11-26 | 2021-04-16 | 武汉大学 | Vehicle object multilevel knowledge dictionary construction method for surveillance video compression |
CN110113616B (en) * | 2019-06-05 | 2021-06-01 | 杭州电子科技大学 | Multi-level monitoring video efficient compression coding and decoding device and method |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101251927A (en) * | 2008-04-01 | 2008-08-27 | 东南大学 | Vehicle detecting and tracing method based on video technique |
CN102930242A (en) * | 2012-09-12 | 2013-02-13 | 上海交通大学 | Bus type identifying method |
CN103236160A (en) * | 2013-04-07 | 2013-08-07 | 水木路拓科技(北京)有限公司 | Road network traffic condition monitoring system based on video image processing technology |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9626769B2 (en) * | 2009-09-04 | 2017-04-18 | Stmicroelectronics International N.V. | Digital video encoder system, method, and non-transitory computer-readable medium for tracking object regions |
US8457355B2 (en) * | 2011-05-05 | 2013-06-04 | International Business Machines Corporation | Incorporating video meta-data in 3D models |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN104301735B (en) | Global coding method and system for urban traffic surveillance video | |
CN109740465B (en) | Lane line detection algorithm based on an instance segmentation neural network framework | |
CN109005409B (en) | Intelligent video coding method based on target detection and tracking | |
CN110688905B (en) | Three-dimensional object detection and tracking method based on key frame | |
CN111539887B (en) | Channel attention mechanism and layered learning neural network image defogging method based on mixed convolution | |
CN111598030A (en) | Method and system for detecting and segmenting vehicle in aerial image | |
CN108734713A (en) | Traffic image semantic segmentation method based on multiple features | |
CN107133935A (en) | Fine rain removal method for single images based on deep convolutional neural networks | |
Tang et al. | Single image dehazing via lightweight multi-scale networks | |
CN102902961A (en) | Face super-resolution processing method based on K-nearest-neighbor sparse coding mean constraint | |
CN110197154B (en) | Pedestrian re-identification method, system, medium and terminal integrating three-dimensional mapping of part textures | |
CN114936605A (en) | Knowledge distillation-based neural network training method, device and storage medium | |
CN110097509A (en) | Restoration method for locally motion-blurred images | |
CN109063549A (en) | Moving object detection method for high-resolution aerial video based on deep neural networks | |
CN114842085A (en) | Full-scene vehicle attitude estimation method | |
CN105590296B (en) | Single-frame image super-resolution method based on double-dictionary learning | |
CN114170311A (en) | Binocular stereo matching method | |
Ma et al. | Preserving details in semantics-aware context for scene parsing | |
CN108021857B (en) | Building detection method based on unmanned aerial vehicle aerial image sequence depth recovery | |
CN104463962B (en) | Three-dimensional scene reconstruction method based on GPS information video | |
CN115272599A (en) | Three-dimensional semantic map construction method oriented to city information model | |
CN113436254B (en) | Cascade decoupling pose estimation method | |
CN110889868A (en) | Monocular image depth estimation method combining gradient and texture features | |
CN103020905A (en) | Sparse-constraint adaptive NLM (non-local means) super-resolution reconstruction method for text images | |
Gomez-Donoso et al. | Three-dimensional reconstruction using SFM for actual pedestrian classification |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||