CN110738098A - target identification positioning and locking tracking method - Google Patents

target identification positioning and locking tracking method

Info

Publication number
CN110738098A
CN110738098A (application CN201910808945.9A)
Authority
CN
China
Prior art keywords
target
image
feature
points
images
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910808945.9A
Other languages
Chinese (zh)
Inventor
邓宏彬
黄春光
魏星
周惠民
潘振华
彭腾
熊镐
王迪
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Hongda And Chuang Defense Technology Research Institute Co Ltd
Wuhan Hong Hai Xinmin Technology Co Ltd
Beijing University of Technology
Beijing Institute of Technology BIT
Original Assignee
Beijing Hongda And Chuang Defense Technology Research Institute Co Ltd
Wuhan Hong Hai Xinmin Technology Co Ltd
Beijing University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Hongda And Chuang Defense Technology Research Institute Co Ltd, Wuhan Hong Hai Xinmin Technology Co Ltd, Beijing University of Technology
Priority to CN201910808945.9A
Publication of CN110738098A
Legal status: Pending (Current)

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 - Scenes; Scene-specific elements
    • G06V 20/40 - Scenes; Scene-specific elements in video content
    • G06V 20/41 - Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 - Pattern recognition
    • G06F 18/20 - Analysing
    • G06F 18/22 - Matching criteria, e.g. proximity measures
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 - Pattern recognition
    • G06F 18/20 - Analysing
    • G06F 18/25 - Fusion techniques
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/20 - Analysis of motion
    • G06T 7/246 - Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/70 - Determining position or orientation of objects or cameras
    • G06T 7/73 - Determining position or orientation of objects or cameras using feature-based methods
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 - Arrangements for image or video recognition or understanding
    • G06V 10/40 - Extraction of image or video features
    • G06V 10/46 - Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V 10/462 - Salient features, e.g. scale invariant feature transforms [SIFT]
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 - Arrangements for image or video recognition or understanding
    • G06V 10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/74 - Image or video pattern matching; Proximity measures in feature spaces
    • G06V 10/75 - Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V 10/751 - Comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 - Image acquisition modality
    • G06T 2207/10016 - Video; Image sequence
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 - Special algorithmic details
    • G06T 2207/20016 - Hierarchical, coarse-to-fine, multiscale or multiresolution image processing; Pyramid transform
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 - Subject of image; Context of image processing
    • G06T 2207/30241 - Trajectory

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to the technical field of target identification and tracking, and in particular to a method for identifying, positioning, locking onto and tracking targets. The method first loads a video, performs image acquisition and image preprocessing, and acquires the first frame data and the target position using an ultra-high-frequency radio-frequency identification (UHF RFID) system; it then extracts the target feature covariance and initializes a target feature template. The template and the image to be detected are downsampled to establish an image pyramid, candidate targets are extracted by randomly sampling the video images, and the candidate target feature covariances are extracted. The similarity between the target feature template and each candidate target feature covariance is computed to obtain the candidate target weights. Finally, the target position is estimated by fusion and the dynamic motion trajectory of the target is output.

Description

target identification positioning and locking tracking method
Technical Field
The invention relates to the technical field of target identification and tracking, and in particular to a method for identifying, positioning, locking onto and tracking targets.
Background
Aerial precision strike has become the most important form of combat and a key factor in achieving final victory, and target identification technology is an important technical support for system intelligence and informatization. In modern warfare, target identification technology has wide application prospects in military fields such as early-warning detection, precision guidance, battlefield command and reconnaissance, and friend-or-foe identification, and has received attention from countries around the world. The combat environment of modern warfare is very complex: both sides adopt corresponding means and technologies such as camouflage, concealment, reconnaissance and jamming to carry out identification and counter-identification. At present, the identification and tracking of weak, small targets against a complex background in an imaging system is recognized as a difficult problem, mainly manifested as long distance and low contrast: the target appears point-like or as a blurred point, geometric features such as points, corners and edges are not obvious, and the motion may even be discontinuous; under conditions such as target occlusion, it is difficult to re-identify, re-position and re-track the target.
Disclosure of Invention
The invention aims to overcome the defect that target identification and tracking cannot be realized accurately in the prior art, and provides a method for identifying, positioning, locking onto and tracking targets.
In order to achieve the purpose, the invention adopts the following technical scheme:
A method for identifying, positioning, locking onto and tracking targets is designed, comprising the following steps:
Step 1: loading a video, carrying out image acquisition and image preprocessing, acquiring the first frame data and the target position by using an ultra-high-frequency (UHF) radio-frequency identification (RFID) system, then extracting the target feature covariance, and initializing to obtain a target feature template;
Step 2: downsampling the template and the image to be detected, establishing an image pyramid, extracting candidate targets by randomly sampling the video image, and then extracting the candidate target feature covariances;
Step 3: calculating the similarity between the target feature template and the candidate target feature covariances, and then obtaining the candidate target weights;
Step 4: estimating the target position through fusion, and outputting the dynamic motion trajectory of the target.
Preferably, in step 1, the UHF RFID system can perform target self-identification; the self-identified targets mainly comprise pedestrian detection and face detection, and both parts are trained with deep neural networks. To improve the accuracy and efficiency of pedestrian and face recognition in natural scenes, two stages of processing are carried out: first locating the pedestrians and faces, then detecting the pedestrians and recognizing the identities of the persons.
Preferably, in step 2, after the image pyramid is established, the edge points, gradient magnitude and gradient direction of the image are obtained; a distance transform of the edge map yields a distance map and a label map; the label map is then applied to build a gradient-direction feature map; finally, an integral image is computed over the edge map and applied to accelerate the image traversal.
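A minimal sketch of this preprocessing chain, assuming OpenCV; the Canny thresholds, the pyramid depth and the input file name are illustrative assumptions rather than values from the patent:

```python
import cv2

img = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)   # assumed input frame

# Image pyramid (three levels assumed).
pyramid = [img]
for _ in range(2):
    pyramid.append(cv2.pyrDown(pyramid[-1]))

# Edge points plus gradient magnitude and direction.
edges = cv2.Canny(img, 50, 150)
gx = cv2.Sobel(img, cv2.CV_32F, 1, 0)
gy = cv2.Sobel(img, cv2.CV_32F, 0, 1)
magnitude, direction = cv2.cartToPolar(gx, gy, angleInDegrees=True)

# Distance transform of the edge map -> distance map + label map; the label
# map records which edge component is nearest to each pixel, which is what
# allows a gradient-direction feature map to be built away from the edges.
dist_map, label_map = cv2.distanceTransformWithLabels(
    255 - edges, cv2.DIST_L2, 3, labelType=cv2.DIST_LABEL_CCOMP)

# Integral image over the edge map to accelerate window sums while traversing.
integral = cv2.integral(edges)
```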
Preferably, when the images are sampled, they need to be registered. The specific implementation process comprises: first, scanning the image and detecting its feature points; second, generating the corresponding feature descriptors according to the adopted feature operator; and finally, performing feature matching between the two images through an algorithm or function capable of judging the correlation of the feature points in the two images. The image registration is based on SIFT features and determines the same reconnaissance area; it therefore has the advantages of invariance to rotation, scaling and brightness changes, a certain stability under viewing-angle changes, affine transformation and noise, and at the same time a fast operation speed and high positioning accuracy.
Preferably, SIFT feature matching is mainly divided into five steps: generating the scale space, detecting extreme points in the scale space, accurately localizing the extreme points, assigning orientation parameters to the keypoints, and generating the keypoint descriptors.
Preferably, in step 3, the target feature template is traversed over the image in various poses to compute a set of similarity metric values.
Preferably, in step 4, the fusion-based estimation of the target position is implemented as follows:
A. setting a threshold and removing duplicates to obtain the matching positions;
B. tracking the matching positions from the top layer of the pyramid down to the bottom layer;
C. obtaining the accurate target position, angle and scale coefficient.
The proposed target identification, positioning, locking and tracking method has the advantage that, by loading a video, it can complete pedestrian and face localization and person identity recognition, and it obtains the specific position of the target by SIFT feature matching using scale-space theory, thereby identifying the target and positioning and tracking it accurately.
Drawings
FIG. 1 is a flow chart of the algorithm of the target identification, positioning, locking and tracking method proposed by the present invention;
FIG. 2 is a flow chart of target identification and localization in the method proposed by the present invention;
FIG. 3 is a network structure diagram of the YOLO-model positioning method used in the method proposed by the present invention;
FIG. 4 is a schematic diagram of extreme point detection in the method proposed by the present invention;
FIG. 5 is a schematic diagram of the feature vectors generated from keypoint information in the method proposed by the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the drawings in the embodiments of the present invention; obviously, the described embodiments are only some, not all, of the embodiments of the present invention.
Referring to fig. 1-2, the method for identifying, positioning, locking onto and tracking targets comprises the following steps:
Step 1: loading a video, carrying out image acquisition and image preprocessing, acquiring the first frame data and the target position by using an ultra-high-frequency (UHF) radio-frequency identification (RFID) system, then extracting the target feature covariance, and initializing to obtain a target feature template. The UHF RFID system can perform target self-identification, the self-identified targets mainly comprising pedestrian detection and face detection, both of which are trained with deep neural networks;
referring to fig. 3, the positioning of pedestrians and human faces is realized with a positioning method based on the YOLO model: the leftmost side is the input layer, which is followed by six convolutional layers and finally by two fully-connected layers. The detection speed of the YOLO model is quite fast. Unlike sliding-window and selective region-extraction methods, the YOLO model uses the full-image information as the network input during training and prediction and completes position and category prediction in a single regression pass, so YOLO achieves faster detection than other target detection algorithms;
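The topology described above can be sketched as follows. This is a minimal illustrative model, assuming PyTorch, not the patent's actual trained network: the grid size S, the box count B, the class count C (pedestrian and face) and all channel widths are assumptions made for the example.

```python
import torch
import torch.nn as nn

class TinyYolo(nn.Module):
    """Input layer -> six conv layers -> two fully-connected layers,
    producing a YOLO-style S x S x (B*5 + C) grid in one regression pass."""
    def __init__(self, S=7, B=2, C=2):          # C=2: pedestrian, face (assumed)
        super().__init__()
        chans = [3, 16, 32, 64, 128, 256, 512]  # six conv layers, assumed widths
        layers = []
        for cin, cout in zip(chans[:-1], chans[1:]):
            layers += [nn.Conv2d(cin, cout, 3, padding=1),
                       nn.LeakyReLU(0.1),
                       nn.MaxPool2d(2)]          # 448 -> 7 after six poolings
        self.features = nn.Sequential(*layers)
        self.S, self.B, self.C = S, B, C
        self.head = nn.Sequential(               # the two fully-connected layers
            nn.Flatten(),
            nn.Linear(512 * 7 * 7, 1024), nn.LeakyReLU(0.1),
            nn.Linear(1024, S * S * (B * 5 + C)))

    def forward(self, x):                        # x: (N, 3, 448, 448)
        y = self.head(self.features(x))
        # one pass yields box coordinates, confidences and class scores
        return y.view(-1, self.S, self.S, self.B * 5 + self.C)

print(TinyYolo()(torch.zeros(1, 3, 448, 448)).shape)  # (1, 7, 7, 12)
```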
the person identity recognition is based on a deep learning algorithm: an identity recognition model is designed that cascades two networks, face keypoint localization and face attribute recognition. For a picture taken in a natural scene, the model first uses the face keypoint localization network to locate the keypoints of the face, then calibrates and crops the face image according to the localization result, and finally feeds the aligned face image to the face attribute recognition network to complete attribute recognition.
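The "calibrate and crop" step between the two cascaded networks can be illustrated with standard OpenCV calls. In this hedged sketch the two eye coordinates stand in for the output of the keypoint localization network, and the crop size is an assumption:

```python
import cv2
import numpy as np

def align_face(img, left_eye, right_eye, out_size=112):
    """Rotate the image so the eye line is horizontal, then crop a square
    patch around the eye centre for the attribute-recognition network."""
    dx = right_eye[0] - left_eye[0]
    dy = right_eye[1] - left_eye[1]
    angle = np.degrees(np.arctan2(dy, dx))       # in-plane rotation of the face
    center = ((left_eye[0] + right_eye[0]) / 2.0,
              (left_eye[1] + right_eye[1]) / 2.0)
    M = cv2.getRotationMatrix2D(center, angle, 1.0)
    rotated = cv2.warpAffine(img, M, (img.shape[1], img.shape[0]))
    x0 = max(int(center[0] - out_size / 2), 0)
    y0 = max(int(center[1] - out_size / 2), 0)
    return rotated[y0:y0 + out_size, x0:x0 + out_size]

# Toy call with hard-coded keypoints standing in for the network output.
face = align_face(np.zeros((480, 640, 3), np.uint8), (300, 240), (360, 250))
print(face.shape)
```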
Step 2: when the images are sampled they need to be registered. The specific implementation process is as follows: first, scanning the image and detecting its feature points; second, generating the corresponding feature descriptors according to the adopted feature operator; and finally, performing feature matching between the two images through an algorithm or function capable of judging the correlation of the feature points in the two images. The image registration is based on SIFT features and determines the same reconnaissance area; it therefore has the advantages of invariance to rotation, scaling and brightness changes, a certain stability under viewing-angle changes and noise, and at the same time a fast operation speed and high positioning accuracy;
the SIFT feature matching is mainly divided into five steps:
In the first step, the scale space is generated. Scale-space theory was first introduced in the computer vision field; G(x, y, σ) is a variable-scale Gaussian kernel function, which can be expressed as:
G(x, y, σ) = (1 / (2πσ²)) · e^(−(x² + y²) / (2σ²))
where σ is the standard deviation of the Gaussian normal distribution. The two-dimensional scale space of an image can then be defined as L(x, y, σ):
L(x,y,σ)=G(x,y,σ)*I(x,y)
In the formula, I(x, y) denotes the image value at the spatial coordinates (x, y) and σ is the scale-space factor whose value determines the scale of the image: a large σ reflects the coarse outline features of the image, a small σ reflects the fine details, and the symbol * denotes convolution.
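A short sketch of this construction, assuming OpenCV; the base scale sigma0 = 1.6 and the per-level multiplier k are the customary SIFT choices rather than values taken from the patent:

```python
import cv2
import numpy as np

def gaussian_scale_space(img, num_scales=5, sigma0=1.6):
    """L(x, y, sigma) = G(x, y, sigma) * I(x, y) for a ladder of sigmas,
    plus the difference-of-Gaussian (DoG) images used for extremum search."""
    k = 2.0 ** (1.0 / (num_scales - 3))
    gauss = [cv2.GaussianBlur(img, (0, 0), sigma0 * k ** i)
             for i in range(num_scales)]
    dog = [cv2.subtract(gauss[i + 1], gauss[i]) for i in range(num_scales - 1)]
    return gauss, dog

img = np.float32(cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE))  # assumed file
gauss, dog = gaussian_scale_space(img)
```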
In the second step, the extreme points in the scale space are detected. Determining the positions of the extreme points in the scale space is equivalent to determining the keypoints in the image, i.e. points that are invariant to rotation and scaling of the image.
Referring to fig. 4, the extreme points in the scale space determine the keypoint structures in the image, so the extreme points must first be found in the scale space of the image. Finding an extreme point requires comparing each pixel with all of its neighbouring points in both the image domain and the scale domain, to determine the magnitude relationship between its value and those of the surrounding points. Concretely, the point marked by the cross in the figure is compared with its 8 neighbours at the same scale and with the corresponding 9 × 2 points in the two adjacent scales, 26 points in total, so that the existence of an extreme point can be detected accurately both in the scale space and in the two-dimensional image space.
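The 26-point comparison can be written down directly; a NumPy sketch over a toy DoG stack:

```python
import numpy as np

def is_extremum(dog, s, y, x):
    """True if dog[s][y, x] is larger or smaller than all 26 neighbours:
    8 at the same scale plus 9 in each of the two adjacent scales."""
    cube = np.stack([lvl[y - 1:y + 2, x - 1:x + 2]
                     for lvl in dog[s - 1:s + 2]])   # 3 x 3 x 3 neighbourhood
    centre = cube[1, 1, 1]
    neighbours = np.delete(cube.ravel(), 13)          # drop the centre itself
    return bool(centre > neighbours.max() or centre < neighbours.min())

dog = [np.random.rand(64, 64) for _ in range(4)]      # toy DoG stack
print(is_extremum(dog, 1, 10, 10))
```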
In the third step, the extreme points are accurately localized. After the extreme points in the space are detected, the positions and scales of the keypoints in the image space need to be determined accurately (to sub-pixel precision); for this a fitted three-dimensional quadratic function is adopted. Keypoints with low contrast in the scale space are deleted and, since the DoG operator produces a strong edge response, the unstable edge-response points are deleted as well, so that the remaining extreme points enhance the stability of the image-matching process and improve its resistance to noise.
In the fourth step, orientation parameters are assigned to the keypoints. Each keypoint in the image needs to be assigned an orientation parameter; the gradient direction of the pixels neighbouring the keypoint in the scale space is taken as the direction of the keypoint, which gives the operator the property of rotation invariance. The gradient magnitude m(x, y) and orientation θ(x, y) are:
m(x, y) = √((L(x+1, y) − L(x−1, y))² + (L(x, y+1) − L(x, y−1))²)
θ(x, y) = tan⁻¹((L(x, y+1) − L(x, y−1)) / (L(x+1, y) − L(x−1, y)))
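The two formulas translate directly into finite differences over the smoothed image L; in this sketch the one-pixel border is simply left at zero:

```python
import numpy as np

def gradient_mag_theta(L):
    m = np.zeros_like(L)
    theta = np.zeros_like(L)
    dx = L[1:-1, 2:] - L[1:-1, :-2]         # L(x+1, y) - L(x-1, y)
    dy = L[2:, 1:-1] - L[:-2, 1:-1]         # L(x, y+1) - L(x, y-1)
    m[1:-1, 1:-1] = np.sqrt(dx ** 2 + dy ** 2)
    theta[1:-1, 1:-1] = np.arctan2(dy, dx)  # quadrant-aware arctan of dy/dx
    return m, theta

m, theta = gradient_mag_theta(np.random.rand(64, 64).astype(np.float32))
```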
In the fifth step, the keypoint descriptors are generated. After the keypoint information in the image has been determined, the keypoints are required to be rotation-invariant; this requirement is met by rotating the coordinate axes to the direction of the keypoint. A descriptor is generated centred on the keypoint: a sampling window of size 8 × 8 is selected, the directions of the sampling points relative to the keypoint are Gaussian-weighted into 8-direction histograms, and finally a 2 × 2 × 8 = 32-dimensional feature descriptor is obtained;
referring to fig. 5, each cell represents a pixel in the scale-space neighbourhood of the keypoint; the arrow direction represents the gradient direction of that pixel, and the arrow length its gradient magnitude. Next, windows of size 4 × 4 are used to compute histograms over 8 gradient directions, and a seed point is generated by accumulating the values in each gradient direction. As can be seen from fig. 5, each histogram covers 8 gradient directions and each descriptor comprises an array of such histograms located around the keypoint; in the full descriptor a 4 × 4 array of histograms is used, so a 128-dimensional SIFT feature vector is generated for every keypoint.
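In practice the whole five-step chain is available off the shelf. A hedged sketch of SIFT-based registration with OpenCV (cv2.SIFT_create is available in opencv-python 4.4 and later), using brute-force matching with Lowe's ratio test; the file names are illustrative:

```python
import cv2

img1 = cv2.imread("template.png", cv2.IMREAD_GRAYSCALE)  # assumed files
img2 = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)  # keypoints + 128-D descriptors
kp2, des2 = sift.detectAndCompute(img2, None)

matcher = cv2.BFMatcher(cv2.NORM_L2)
matches = matcher.knnMatch(des1, des2, k=2)
# Lowe's ratio test: keep a match only if it clearly beats the runner-up.
good = [m for m, n in matches if m.distance < 0.75 * n.distance]
print(len(good), "reliable matches")
```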
Step 3: calculating the similarity between the covariance of the target feature template and the candidate target feature covariances; the target feature template is traversed over the image in various poses to compute a set of similarity metric values, and the candidate target weights are then obtained;
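The patent does not spell out which features enter the covariance or which similarity measure is used, so the following sketch makes common assumptions: the region covariance descriptor over per-pixel features [x, y, I, |Ix|, |Iy|], the generalized-eigenvalue distance between covariance matrices, and a Gaussian kernel of that distance as the candidate weight:

```python
import cv2
import numpy as np
from scipy.linalg import eigh

def region_covariance(gray, box):
    """5 x 5 covariance of per-pixel features [x, y, I, |Ix|, |Iy|]."""
    x, y, w, h = box
    patch = gray[y:y + h, x:x + w].astype(np.float64)
    ix = cv2.Sobel(patch, cv2.CV_64F, 1, 0)
    iy = cv2.Sobel(patch, cv2.CV_64F, 0, 1)
    ys, xs = np.mgrid[0:h, 0:w]
    feats = np.stack([xs, ys, patch, np.abs(ix), np.abs(iy)], -1).reshape(-1, 5)
    return np.cov(feats.T) + 1e-6 * np.eye(5)    # regularized for stability

def covariance_distance(c1, c2):
    lam = eigh(c1, c2, eigvals_only=True)        # generalized eigenvalues
    return np.sqrt(np.sum(np.log(lam) ** 2))

gray = np.random.randint(0, 255, (240, 320)).astype(np.uint8)   # toy frame
c_template = region_covariance(gray, (50, 50, 40, 40))
c_candidate = region_covariance(gray, (55, 52, 40, 40))
weight = np.exp(-covariance_distance(c_template, c_candidate))  # candidate weight
print(weight)
```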
Step 4: estimating the target position through fusion and outputting the dynamic motion trajectory of the target. The fusion-based estimation of the target position is implemented as follows:
A. setting a threshold and removing duplicates to obtain the matching positions;
B. tracking the matching positions from the top layer of the pyramid down to the bottom layer;
C. obtaining the accurate target position, angle and scale coefficient.
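Steps A to C can be illustrated with plain normalized template matching, which here stands in for the patent's covariance-based similarity; the threshold, window sizes and the omission of the angle/scale search are assumptions of this sketch:

```python
import cv2
import numpy as np

def coarse_to_fine(scene, templ, levels=3, thresh=0.8):
    scenes, templs = [scene], [templ]
    for _ in range(levels - 1):
        scenes.append(cv2.pyrDown(scenes[-1]))
        templs.append(cv2.pyrDown(templs[-1]))
    # A. threshold + de-duplication at the coarsest (top) pyramid level.
    res = cv2.matchTemplate(scenes[-1], templs[-1], cv2.TM_CCOEFF_NORMED)
    hits = []
    for y, x in zip(*np.where(res >= thresh)):
        if all(abs(x - hx) + abs(y - hy) > 4 for hy, hx in hits):
            hits.append((y, x))                   # keep one hit per cluster
    # B. track each match from the top layer down to the bottom layer,
    # re-matching only inside a small window around the projected position.
    refined = []
    for y, x in hits:
        score = float(res[y, x])
        for lvl in range(levels - 2, -1, -1):
            th, tw = templs[lvl].shape[:2]
            y0 = max(0, min(2 * y - 4, scenes[lvl].shape[0] - th - 8))
            x0 = max(0, min(2 * x - 4, scenes[lvl].shape[1] - tw - 8))
            win = scenes[lvl][y0:y0 + th + 8, x0:x0 + tw + 8]
            r = cv2.matchTemplate(win, templs[lvl], cv2.TM_CCOEFF_NORMED)
            dy, dx = np.unravel_index(int(np.argmax(r)), r.shape)
            y, x, score = y0 + dy, x0 + dx, float(r.max())
        refined.append((y, x, score))             # C. accurate bottom-level fit
    return refined

scene = np.random.randint(0, 255, (480, 640), np.uint8)
print(coarse_to_fine(scene, scene[100:140, 100:140].copy()))
```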
By loading a video, the invention can complete pedestrian and face localization and person identity recognition; the specific position of the target is obtained by SIFT feature matching using scale-space theory, so that the target is identified and accurately positioned and tracked.
The above description covers only preferred embodiments of the present invention, but the protection scope of the present invention is not limited thereto; any equivalent replacement or modification of the technical solutions and the inventive concept thereof that can readily be conceived by a person skilled in the art within the technical scope disclosed by the present invention shall fall within the protection scope of the present invention.

Claims (7)

1. A target identification, positioning and lock tracking method, characterized by comprising the following steps:
Step 1: loading a video, carrying out image acquisition and image preprocessing, acquiring the first frame data and the target position by using an ultra-high-frequency radio-frequency identification system, then extracting the target feature covariance, and initializing to obtain a target feature template;
Step 2: downsampling the template and the image to be detected, establishing an image pyramid, extracting candidate targets by randomly sampling the video image, and then extracting the candidate target feature covariances;
Step 3: calculating the similarity between the target feature template and the candidate target feature covariances, and then obtaining the candidate target weights;
Step 4: estimating the target position through fusion, and outputting the dynamic motion trajectory of the target.
2. The target identification, positioning and lock tracking method of claim 1, wherein in step 1 the UHF RFID system is capable of target self-identification, the self-identified targets mainly comprise pedestrian detection and face detection, and both parts are trained using deep neural networks.
3. The target identification, positioning and lock tracking method of claim 1, wherein in step 2, after the image pyramid is established, the edge points, gradient magnitude and gradient direction of the image are acquired, a distance transform of the edge map yields a distance map and a label map, the label map is used to build a gradient-direction feature map, an integral image is computed over the edge map, and the integral image is used to accelerate the image traversal.
4. The target identification, positioning and lock tracking method of claim 3, wherein when the images are sampled they are registered, specifically by first scanning the image and detecting its feature points, second generating the corresponding feature descriptors according to the adopted feature operator, and finally performing feature matching between the two images through an algorithm or function capable of judging the correlation of the feature points in the two images; the image registration is based on SIFT features and determines the same reconnaissance area, and therefore has the advantages of invariance to rotation, scaling and brightness changes, a certain stability under angular changes, affine transformation and noise, a fast operation speed and high positioning accuracy.
5. The method of claim 3, wherein SIFT feature matching comprises five steps: generation of the scale space, detection of extreme points in the scale space, accurate localization of the extreme points, assignment of orientation parameters to the keypoints, and generation of the keypoint descriptors.
6. The method of claim 1, wherein in step 3 the target feature template is traversed over the image in various poses to compute a set of similarity metric values.
7. The target identification, positioning and lock tracking method of claim 1, wherein in step 4 the fusion-based estimation of the target position is implemented as follows:
A. setting a threshold and removing duplicates to obtain the matching positions;
B. tracking the matching positions from the top layer of the pyramid down to the bottom layer;
C. obtaining the accurate target position, angle and scale coefficient.
CN201910808945.9A 2019-08-29 2019-08-29 target identification positioning and locking tracking method Pending CN110738098A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910808945.9A CN110738098A (en) 2019-08-29 2019-08-29 target identification positioning and locking tracking method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910808945.9A CN110738098A (en) 2019-08-29 2019-08-29 target identification positioning and locking tracking method

Publications (1)

Publication Number Publication Date
CN110738098A true 2020-01-31

Family

ID=69267454

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910808945.9A Pending CN110738098A (en) 2019-08-29 2019-08-29 target identification positioning and locking tracking method

Country Status (1)

Country Link
CN (1) CN110738098A (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107671896A (en) * 2017-05-19 2018-02-09 重庆誉鸣科技有限公司 Fast vision localization method and system based on SCARA robots
US20190114804A1 (en) * 2017-10-13 2019-04-18 Qualcomm Incorporated Object tracking for neural network systems
CN108671453A (en) * 2018-03-19 2018-10-19 北京领创拓展科技发展有限公司 A kind of water cannon automatic control system
CN109800648A (en) * 2018-12-18 2019-05-24 北京英索科技发展有限公司 Face datection recognition methods and device based on the correction of face key point
CN109949340A (en) * 2019-03-04 2019-06-28 湖北三江航天万峰科技发展有限公司 Target scale adaptive tracking method based on OpenCV

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
KAAREN MAY et al.: "Moving target detection for sense and avoid using regional phase correlation", 2013 IEEE International Conference on Robotics and Automation, pages 4767-4772 *
SUN Fengyan: "Adaptive kernel tracking based on covariance matching", China Masters' Theses Full-text Database, Information Science and Technology, no. 07, pages 138-1677 *
LI Shaojun et al.: "Terminal guidance target tracking based on region covariance matrices", Laser & Infrared, vol. 40, no. 3, pages 330-333 *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114697702A (en) * 2022-03-23 2022-07-01 咪咕文化科技有限公司 Audio and video marking method, device, equipment and storage medium
CN114697702B (en) * 2022-03-23 2024-01-30 咪咕文化科技有限公司 Audio and video marking method, device, equipment and storage medium
CN115493512A (en) * 2022-08-10 2022-12-20 思看科技(杭州)股份有限公司 Data processing method, three-dimensional scanning system, electronic device, and storage medium
CN115493512B (en) * 2022-08-10 2023-06-13 思看科技(杭州)股份有限公司 Data processing method, three-dimensional scanning system, electronic device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication (application publication date: 20200131)