CN104299245A - Augmented reality tracking method based on neural network - Google Patents


Info

Publication number
CN104299245A
CN104299245A (application CN201410539449.5A)
Authority
CN
China
Prior art keywords
neural network
target object
layer
frame
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201410539449.5A
Other languages
Chinese (zh)
Other versions
CN104299245B (en)
Inventor
樊春玲
姜青山
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Institute of Advanced Technology of CAS
Original Assignee
Shenzhen Institute of Advanced Technology of CAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Institute of Advanced Technology of CAS filed Critical Shenzhen Institute of Advanced Technology of CAS
Priority to CN201410539449.5A priority Critical patent/CN104299245B/en
Publication of CN104299245A publication Critical patent/CN104299245A/en
Application granted granted Critical
Publication of CN104299245B publication Critical patent/CN104299245B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an augmented reality tracking method based on a neural network. The method includes the following steps: S1, a neural network structure is established; S2, motion data of a target object in an image sequence are used as training data to train the neural network, and the weights of each layer in the neural network are adjusted; S3, corner features of the first image in a video are extracted; S4, the motion trend of the target object is obtained through neural network prediction; S5, the target object is tracked in the subsequent image sequence along the direction of its motion trend. By predicting the motion trend of the target object with the neural network, the method reduces the search range and the number of iterations in the computation, thereby shortening the tracking computation time and improving tracking efficiency.

Description

Augmented reality tracking method based on a neural network
Technical field
The present invention relates to the field of augmented reality, and in particular to an augmented reality tracking method based on a neural network.
Background technology
Augmented reality, also referred to as mixed reality, is a technology in which a computer generates a virtual three-dimensional model and superimposes the virtual model on the real world for display, thereby augmenting reality.
At present, many augmented reality systems are marker-based: a marker must be prepared in advance, and its feature information is extracted and stored in a database; when the camera captures the marker again, the system identifies, tracks, and locates it and superimposes the model. The marker in a marker-based system can be a QR code, a circle, and so on; in 2014, Tencent Technology (Shenzhen) Co., Ltd. disclosed an augmented reality implementation method and device based on QR codes (Chinese patent application No. 201310031075.1). Marker-based augmented reality systems are easy to implement, but because the marker must be prepared in advance, their application scenarios are greatly limited.
Markerless augmented reality systems extract natural features from the image and track these natural features in the subsequent image sequence, then perform further localization and model superposition. Markerless systems overcome the limited application scenarios of marker-based systems and can be applied in real time in unknown scenes.
However, existing tracking methods do not predict the motion trend of the target object, so the tracking computation takes a long time.
In view of the above technical problems, it is therefore necessary to provide an augmented reality tracking method based on a neural network.
Summary of the invention
In view of this, an object of the present invention is to provide an augmented reality tracking method based on a neural network, which predicts the motion trend of a planar target with a neural network and uses the predicted position as the initial tracking position, improving the accuracy and real-time performance of target tracking.
To achieve the above object, an embodiment of the present invention provides the following technical solution:
An augmented reality tracking method based on a neural network, the method comprising:
S1, establishing a neural network structure;
S2, training the neural network using motion data of a target object in an image sequence as training data, and adjusting the weights of each layer in the neural network;
S3, extracting corner features from the first frame of a video;
S4, obtaining the motion trend of the target object through neural network prediction;
S5, tracking the target object in the subsequent image sequence along the direction of its motion trend.
As a further improvement of the present invention, the neural network structure comprises one input layer, one hidden layer, and one output layer; the input layer has K neurons, the hidden layer has M neurons, and the output layer has 1 neuron; the weight between a neuron in one layer and each neuron in the next layer is w_ij, the initial weights are random constants w_ij ∈ [-0.5, 0.5], and the sigmoid function f(x) = 1/(1 + e^(-x)) is chosen as the activation function.
As a further improvement of the present invention, step S2 specifically comprises:
S21, preparing training data;
S22, normalizing the training data;
S23, training the neural network with the normalized training data and adjusting the weights of each layer, thereby training an optimal neural network.
As a further improvement of the present invention, step S21 specifically comprises:
obtaining the coordinate value P_i(x_i, y_i), i = 1, …, N, of the target object in every frame, N being the number of video frames;
calculating the displacement change Δu_i(Δx_i, Δy_i), i = 1, …, N, of the target object between adjacent frames, where Δx_i is the change of the target object along the X axis, Δy_i is the change along the Y axis, the upper-left corner of the image is the coordinate origin, and the change for the first frame is Δu_1(0, 0).
As a further improvement of the present invention, in step S22 the normalization is min-max normalization: v_i' = (v_i − min) / (max − min), where v_i is the raw value, v_i' is the normalized value, min is the minimum of the raw data, and max is the maximum of the raw data.
As a further improvement of the present invention, step S23 specifically comprises:
inputting the normalized training data Δx_i (i = 1, …, K) into the input layer of the neural network, and obtaining the predicted value of the (K+1)-th displacement change at the output layer, i.e. the displacement change Δu'(Δx'_{K+1}, Δy'_{K+1}) of the target object between frame K+1 and frame K;
comparing the difference between the actual value Δu_{K+1}(Δx_{K+1}, Δy_{K+1}) and the predicted value Δu'(Δx'_{K+1}, Δy'_{K+1}), adjusting the weights between the hidden layer and the output layer, then propagating the error backwards and adjusting the weights between the input layer and the hidden layer, and repeating the training until an optimal neural network is obtained.
As a further improvement of the present invention, the corner feature in step S3 is determined as follows:
let H = [∂²I/∂x², ∂²I/∂x∂y; ∂²I/∂x∂y, ∂²I/∂y²] be the second-derivative matrix of the image intensity; if the smaller of the two eigenvalues of H is greater than a set threshold, the point is considered a corner feature.
As a further improvement of the present invention, step S4 specifically comprises:
obtaining the coordinates P_i(x_i, y_i), i = 1, …, K, of the target object in the image;
calculating the displacement offsets Δu_i(Δx_i, Δy_i), i = 1, …, K, of the target object between consecutive frames from the obtained coordinate sequence;
inputting the displacement offsets of the first K frames into the trained neural network to predict the (K+1)-th displacement offset Δu'(Δx'_{K+1}, Δy'_{K+1});
calculating the estimated position of the target object before tracking frame K+1:
P'_{K+1}(x'_{K+1}, y'_{K+1}) = (x_K + Δx'_{K+1}, y_K + Δy'_{K+1});
taking the obtained estimated position P'_{K+1}(x'_{K+1}, y'_{K+1}) as the initial tracking position.
As a further improvement of the present invention, step S5 specifically comprises:
tracking the target object on the original image with the KLT algorithm, using the estimated position P'_{K+1}(x'_{K+1}, y'_{K+1}) as the initial position.
The present invention has the following beneficial effects:
The present invention can be applied to planar tracking in unknown scenes; by predicting the motion trend of the target object with a neural network, it reduces the search range and the number of iterations in the computation, thereby shortening the tracking computation time and improving tracking efficiency.
Brief description of the drawings
In order to explain the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present invention; for those of ordinary skill in the art, other drawings can be obtained from these drawings without creative effort.
Fig. 1 is a flowchart of the augmented reality tracking method based on a neural network of the present invention.
Fig. 2 is a flowchart of the augmented reality tracking method based on a neural network in a specific embodiment of the present invention.
Fig. 3 is a schematic diagram of the neural network structure in a specific embodiment of the present invention.
Fig. 4 is a plot of the sigmoid function in a specific embodiment of the present invention.
Detailed description
In order to enable those skilled in the art to better understand the technical solutions of the present invention, the technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the drawings. Obviously, the described embodiments are only a part of the embodiments of the present invention, not all of them. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative effort fall within the scope of protection of the present invention.
Referring to Fig. 1, the invention discloses an augmented reality tracking method based on a neural network, comprising:
S1, establishing a neural network structure;
S2, training the neural network using motion data of a target object in an image sequence as training data, and adjusting the weights of each layer in the neural network;
S3, extracting corner features from the first frame of a video;
S4, obtaining the motion trend of the target object through neural network prediction;
S5, tracking the target object in the subsequent image sequence along the direction of its motion trend.
Embodiments of the present invention are described in detail below. The embodiments are implemented on the premise of the technical solution of the present invention, and detailed implementations are given with reference to the drawings, but the scope of protection of the present invention is not limited to the following embodiments.
Referring to Fig. 2, the present embodiment is divided into two main parts: (1) training the artificial neural network; (2) tracking with the target motion trend predicted by the trained neural network.
(1) Training the artificial neural network
An artificial neural network is a machine learning model: a set of connected input/output units in which each connection has an associated weight. The weights are adjusted continuously during the learning phase to help the network predict correctly. The most popular neural network algorithm is back-propagation, which learns on a multilayer feedforward neural network.
The present embodiment constructs a multilayer feedforward neural network and learns the motion model of the target object with the back-propagation algorithm: the motion data of the target object in an image sequence are used as training data to train the network, and the trained neural network model is then used to predict the motion trend of the target object.
1. Feedforward neural network
Referring to Fig. 3, the present embodiment adopts a 3-layer feedforward neural network structure comprising one input layer, one hidden layer, and one output layer. The input layer has K neurons, the hidden layer has M neurons, and the output layer has 1 neuron. The weight of the connection between a neuron in one layer and each neuron in the next layer is w_ij; the initial weights are random constants w_ij ∈ [-0.5, 0.5], and the sigmoid function is chosen as the activation function (see Fig. 4).
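As a rough sketch (not the patent's own code), the 3-layer structure described above can be set up as follows; the concrete sizes K = 5 and M = 8 are illustrative assumptions, since the patent leaves both free:

```python
import numpy as np

rng = np.random.default_rng(0)
K, M = 5, 8  # illustrative sizes; the patent only requires K inputs and M hidden neurons

# Initial weights are random constants in [-0.5, 0.5], as specified above.
W1 = rng.uniform(-0.5, 0.5, (M, K))  # input layer  -> hidden layer
W2 = rng.uniform(-0.5, 0.5, (1, M))  # hidden layer -> output layer (1 neuron)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x):
    """One forward pass: x is a length-K vector of normalized displacement changes."""
    hidden = sigmoid(W1 @ x)
    return float(sigmoid(W2 @ hidden)[0])

y = forward(np.linspace(0.1, 0.9, K))
```

Because every layer uses the sigmoid, the scalar output is always in (0, 1), which is why the training data are normalized to that range in step 2.2.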
2. Training the neural network
The neural network is trained with the motion data of the target object in an image sequence as training data, continuously adjusting the weights of each layer.
2.1 Preparing training data
The training data can be taken from a standard tracking database or obtained by manual annotation. Suppose the video has N frames in total; then the coordinate value P_i(x_i, y_i), i = 1, …, N, of the target object in every frame can be obtained, and the displacement change Δu_i(Δx_i, Δy_i), i = 1, …, N, of the target object between adjacent frames is then calculated, where Δx_i is the change of the target object along the X axis, Δy_i is the change along the Y axis, the upper-left corner of the image is the coordinate origin, and the change for the first frame is set to Δu_1(0, 0).
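For illustration only (the coordinate values below are made up), the per-frame displacement changes can be computed from annotated coordinates like this:

```python
# Hypothetical annotated positions P_i(x_i, y_i) for a 4-frame video.
coords = [(100, 50), (102, 53), (105, 57), (109, 62)]

# The delta for the first frame is (0, 0) by the convention above; each later
# delta is the position change relative to the previous frame.
deltas = [(0, 0)] + [(x1 - x0, y1 - y0)
                     for (x0, y0), (x1, y1) in zip(coords, coords[1:])]
```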
2.2 Data normalization
Normalizing the training data helps speed up the learning process; typically the training data are scaled to fall between 0.0 and 1.0. The present embodiment feeds the coordinate changes Δu_i(Δx_i, Δy_i), i = 1, …, N, of the target object between consecutive frames into the neural network as input values for training, so the data are normalized first. The present embodiment adopts min-max normalization:
v_i' = (v_i − min) / (max − min)
where v_i is the raw value, v_i' is the normalized value, min is the minimum of the raw data, and max is the maximum of the raw data.
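A minimal implementation of this min-max rule (with a guard for constant data, an edge case the text does not address):

```python
def min_max_normalize(values):
    """Scale raw values into [0.0, 1.0] via v' = (v - min) / (max - min)."""
    lo, hi = min(values), max(values)
    if hi == lo:  # constant input: map everything to 0.0 (assumption, not from the patent)
        return [0.0 for _ in values]
    return [(v - lo) / (hi - lo) for v in values]
```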
2.3 Training the neural network
During training, the weights on the connections between the layers of the neural network are adjusted. The normalized data Δx_i (i = 1, …, K) are fed into the input layer, and the predicted value of the (K+1)-th displacement change is obtained at the output layer, i.e. the displacement change Δu'(Δx'_{K+1}, Δy'_{K+1}) of the target object between frame K+1 and frame K. The difference between the actual value and the predicted value is used to adjust the weights between the hidden layer and the output layer; the error is then propagated back and the weights between the input layer and the hidden layer are adjusted. Training is repeated until a termination condition is reached:
A. all weight updates in the last cycle are smaller than a specified threshold, or
B. a pre-specified number of cycles is exceeded.
When training terminates, the weights of every layer have been adjusted to the optimum, yielding an optimal neural network.
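The training loop with both termination conditions might look like the following sketch; the learning rate, epoch limit, layer sizes, and the synthetic displacement sequence are all assumptions for illustration, not values from the patent:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_predictor(deltas, K=5, M=8, lr=0.5, max_epochs=3000, tol=1e-5, seed=0):
    """Back-propagation on a K-M-1 feedforward net: each training sample maps a
    window of K normalized displacement changes to the (K+1)-th change."""
    rng = np.random.default_rng(seed)
    W1 = rng.uniform(-0.5, 0.5, (M, K))
    W2 = rng.uniform(-0.5, 0.5, (1, M))
    samples = [(np.asarray(deltas[i:i + K]), deltas[i + K])
               for i in range(len(deltas) - K)]
    for _ in range(max_epochs):                      # condition B: cycle limit
        max_step = 0.0
        for x, target in samples:
            h = sigmoid(W1 @ x)                      # hidden activations
            y = sigmoid(W2 @ h)[0]                   # predicted next change
            # Output-layer error term, then the error propagated back to the hidden layer.
            d_out = (y - target) * y * (1.0 - y)
            d_hid = (W2[0] * d_out) * h * (1.0 - h)
            step2 = lr * d_out * h
            step1 = lr * np.outer(d_hid, x)
            W2[0] -= step2
            W1 -= step1
            max_step = max(max_step, np.abs(step2).max(), np.abs(step1).max())
        if max_step < tol:                           # condition A: all updates tiny
            break
    return lambda window: float(sigmoid(W2 @ sigmoid(W1 @ np.asarray(window)))[0])

# Synthetic, already-normalized displacement sequence with a constant trend.
predict = train_predictor([0.5] * 12)
```

With a constant input sequence the net should learn to reproduce the constant, so the predicted next displacement lands close to 0.5.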
(2) Prediction-based target tracking
1. Feature detection
First, corner feature information is extracted from the first frame of the video. The present embodiment adopts the Shi-Tomasi corner feature, which improves on the traditional Harris corner. Its basis is the second-derivative matrix of the image intensity, the two-dimensional Hessian matrix:
H = [∂²I/∂x², ∂²I/∂x∂y; ∂²I/∂x∂y, ∂²I/∂y²].
According to the Shi-Tomasi definition, if the smaller of the two eigenvalues of H is greater than a set threshold, the point is considered a strong feature point.
The present embodiment first performs corner detection P_i (i = 1, …, N) on the first frame of the video. Because the detected corners are used for further tracking computation, more accurate corner positions are needed, so a sub-pixel corner refinement method is used here to obtain corner positions at sub-pixel accuracy.
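The smaller-eigenvalue test can be illustrated directly on a 2x2 matrix H; the sample matrices and threshold below are made-up values, not from the patent:

```python
import numpy as np

def is_corner(H, threshold):
    """Shi-Tomasi test: a point is a corner when the smaller eigenvalue
    of its 2x2 second-derivative matrix H exceeds the threshold."""
    return float(np.linalg.eigvalsh(H).min()) > threshold

# At a corner the intensity curves strongly in two directions (both eigenvalues
# large); along an edge it curves in only one direction (one eigenvalue small).
corner_like = np.array([[10.0, 1.0], [1.0, 9.0]])
edge_like = np.array([[10.0, 0.0], [0.0, 0.1]])
```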
2. Predicting the motion trend
The present embodiment uses the traditional pyramid Kanade-Lucas-Tomasi (KLT) tracking algorithm on the first K frames of the image sequence to obtain the coordinates P_i(x_i, y_i), i = 1, …, K, of the target object in the image. The displacement offsets Δu_i(Δx_i, Δy_i), i = 1, …, K, of the target object between consecutive frames are calculated from the obtained coordinate sequence, and the displacement offsets of the first K frames are fed into the trained neural network to predict the (K+1)-th displacement offset Δu'(Δx'_{K+1}, Δy'_{K+1}). The estimated position of the target object is calculated before tracking frame K+1:
P'_{K+1}(x'_{K+1}, y'_{K+1}) = (x_K + Δx'_{K+1}, y_K + Δy'_{K+1}),
and the obtained estimated position P'_{K+1}(x'_{K+1}, y'_{K+1}) is taken as the initial tracking position.
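The seeding step reduces to one addition per coordinate; the position and predicted offset below are hypothetical numbers:

```python
def predicted_init(p_k, delta_pred):
    """P'_{K+1} = (x_K + dx'_{K+1}, y_K + dy'_{K+1}): the last tracked position
    plus the network-predicted offset gives the initial search position."""
    (x_k, y_k), (dx, dy) = p_k, delta_pred
    return (x_k + dx, y_k + dy)

# Hypothetical frame-K position and predicted offset.
init = predicted_init((120.0, 80.0), (3.5, -1.2))
```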
3. Prediction-based tracking
The pyramid KLT algorithm builds an image pyramid for each frame by low-pass filtering and downsampling, and searches for the target object layer by layer in the pyramid model with Newton iteration, from the top layer (lowest resolution) to the bottom layer (the original image). Specifically, the optical flow value found at the top layer serves as the initial value for the search at the next layer, and the search iterates layer by layer until the bottom of the pyramid, the original image layer, yields the position of the target object.
The present embodiment instead uses the neural network to predict the displacement of the target (i.e. the optical flow initial value), so there is no need to build an image pyramid and search layer by layer: the predicted position is used directly as the initial value, and the target object is tracked on the original image with the traditional KLT algorithm. Because the position estimated by the neural network is closer to the true position of the target object, fewer iterations are needed in the tracking computation and the position of the target object is found faster, improving both the real-time performance and the accuracy of tracking.
As can be seen from the above embodiment, the present invention can be applied to planar tracking in unknown scenes; by predicting the motion trend of the target object with a neural network, it reduces the search range and the number of iterations in the computation, thereby shortening the tracking computation time and improving tracking efficiency.
It is obvious to those skilled in the art that the invention is not limited to the details of the above exemplary embodiments, and that the present invention can be realized in other specific forms without departing from its spirit or essential characteristics. The embodiments should therefore be regarded in all respects as exemplary and non-restrictive; the scope of the present invention is defined by the appended claims rather than by the above description, and all changes falling within the meaning and range of equivalency of the claims are intended to be embraced therein. No reference sign in a claim should be construed as limiting the claim concerned.
Furthermore, it should be understood that although this specification is described in terms of embodiments, not every embodiment contains only one independent technical solution. This manner of narration is adopted only for clarity; those skilled in the art should take the specification as a whole, and the technical solutions in the embodiments may also be suitably combined to form other embodiments understandable to those skilled in the art.

Claims (9)

1. An augmented reality tracking method based on a neural network, characterized in that the method comprises:
S1, establishing a neural network structure;
S2, training the neural network using motion data of a target object in an image sequence as training data, and adjusting the weights of each layer in the neural network;
S3, extracting corner features from the first frame of a video;
S4, obtaining the motion trend of the target object through neural network prediction;
S5, tracking the target object in the subsequent image sequence along the direction of its motion trend.
2. The method according to claim 1, characterized in that the neural network structure comprises one input layer, one hidden layer, and one output layer; the input layer has K neurons, the hidden layer has M neurons, and the output layer has 1 neuron; the weight between a neuron in one layer and each neuron in the next layer is w_ij, the initial weights are random constants w_ij ∈ [-0.5, 0.5], and the sigmoid function is chosen as the activation function.
3. The method according to claim 2, characterized in that step S2 specifically comprises:
S21, preparing training data;
S22, normalizing the training data;
S23, training the neural network with the normalized training data and adjusting the weights of each layer, thereby training an optimal neural network.
4. The method according to claim 3, characterized in that step S21 specifically comprises:
obtaining the coordinate value P_i(x_i, y_i), i = 1, …, N, of the target object in every frame, N being the number of video frames;
calculating the displacement change Δu_i(Δx_i, Δy_i), i = 1, …, N, of the target object between adjacent frames, where Δx_i is the change of the target object along the X axis, Δy_i is the change along the Y axis, the upper-left corner of the image is the coordinate origin, and the change for the first frame is Δu_1(0, 0).
5. The method according to claim 3, characterized in that in step S22 the normalization is min-max normalization: v_i' = (v_i − min) / (max − min), where v_i is the raw value, v_i' is the normalized value, min is the minimum of the raw data, and max is the maximum of the raw data.
6. The method according to claim 3, characterized in that step S23 specifically comprises:
inputting the normalized training data Δx_i (i = 1, …, K) into the input layer of the neural network, and obtaining the predicted value of the (K+1)-th displacement change at the output layer, i.e. the displacement change Δu'(Δx'_{K+1}, Δy'_{K+1}) of the target object between frame K+1 and frame K;
comparing the difference between the actual value Δu_{K+1}(Δx_{K+1}, Δy_{K+1}) and the predicted value Δu'(Δx'_{K+1}, Δy'_{K+1}), adjusting the weights between the hidden layer and the output layer, then propagating the error backwards and adjusting the weights between the input layer and the hidden layer, and repeating the training until an optimal neural network is obtained.
7. The method according to claim 1, characterized in that the corner feature in step S3 is determined as follows:
let H = [∂²I/∂x², ∂²I/∂x∂y; ∂²I/∂x∂y, ∂²I/∂y²] be the second-derivative matrix of the image intensity; if the smaller of the two eigenvalues of H is greater than a set threshold, the point is considered a corner feature.
8. The method according to claim 1, characterized in that step S4 specifically comprises:
obtaining the coordinates P_i(x_i, y_i), i = 1, …, K, of the target object in the image;
calculating the displacement offsets Δu_i(Δx_i, Δy_i), i = 1, …, K, of the target object between consecutive frames from the obtained coordinate sequence;
inputting the displacement offsets of the first K frames into the trained neural network to predict the (K+1)-th displacement offset Δu'(Δx'_{K+1}, Δy'_{K+1});
calculating the estimated position of the target object before tracking frame K+1:
P'_{K+1}(x'_{K+1}, y'_{K+1}) = (x_K + Δx'_{K+1}, y_K + Δy'_{K+1});
taking the obtained estimated position P'_{K+1}(x'_{K+1}, y'_{K+1}) as the initial tracking position.
9. The method according to claim 8, characterized in that step S5 specifically comprises:
tracking the target object on the original image with the KLT algorithm, using the estimated position P'_{K+1}(x'_{K+1}, y'_{K+1}) as the initial position.
CN201410539449.5A 2014-10-13 2014-10-13 Augmented reality tracking method based on neural network Active CN104299245B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410539449.5A CN104299245B (en) 2014-10-13 2014-10-13 Augmented reality tracking method based on neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201410539449.5A CN104299245B (en) 2014-10-13 2014-10-13 Augmented reality tracking method based on neural network

Publications (2)

Publication Number Publication Date
CN104299245A true CN104299245A (en) 2015-01-21
CN104299245B CN104299245B (en) 2017-12-26

Family

ID=52318967

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410539449.5A Active CN104299245B (en) 2014-10-13 2014-10-13 Augmented reality tracking method based on neural network

Country Status (1)

Country Link
CN (1) CN104299245B (en)

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105976400A (en) * 2016-05-10 2016-09-28 北京旷视科技有限公司 Object tracking method and device based on neural network model
CN106444753A (en) * 2016-09-20 2017-02-22 智易行科技(武汉)有限公司 Intelligent following method for human posture judgment based on artificial neural network
CN108254741A (en) * 2018-01-16 2018-07-06 中国人民解放军海军航空大学 Targetpath Forecasting Methodology based on Recognition with Recurrent Neural Network
CN108352072A (en) * 2016-08-08 2018-07-31 松下知识产权经营株式会社 Object tracking methods, object tracking apparatus and program
CN108399628A (en) * 2015-09-30 2018-08-14 快图有限公司 Method and system for tracking object
CN108460829A (en) * 2018-04-16 2018-08-28 广州智能装备研究院有限公司 A kind of 3-D view register method for AR systems
US10198655B2 (en) 2017-01-24 2019-02-05 Ford Global Technologies, Llc Object detection using recurrent neural network and concatenated feature map
CN109903315A (en) * 2019-03-08 2019-06-18 腾讯科技(深圳)有限公司 Method, apparatus, equipment and readable storage medium storing program for executing for light stream prediction
CN111033524A (en) * 2017-09-20 2020-04-17 奇跃公司 Personalized neural network for eye tracking
CN112345251A (en) * 2020-11-04 2021-02-09 山东科技大学 Mechanical intelligent fault diagnosis method based on signal resolution enhancement
CN113065638A (en) * 2021-02-27 2021-07-02 华为技术有限公司 Neural network compression method and related equipment thereof
CN114152189A (en) * 2021-11-09 2022-03-08 武汉大学 Four-quadrant detector light spot positioning method based on feedforward neural network

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1738426A (en) * 2005-09-09 2006-02-22 南京大学 Video motion goal division and track method
CN103077538A (en) * 2013-01-15 2013-05-01 西安电子科技大学 Adaptive tracking method of biomimetic-pattern recognized targets
US20140003720A1 (en) * 2012-06-29 2014-01-02 Behavioral Recognition Systems, Inc. Adaptive illuminance filter in a video analysis system
US20140169631A1 (en) * 2011-08-05 2014-06-19 Megachips Corporation Image recognition apparatus

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1738426A (en) * 2005-09-09 2006-02-22 南京大学 Video motion goal division and track method
US20140169631A1 (en) * 2011-08-05 2014-06-19 Megachips Corporation Image recognition apparatus
US20140003720A1 (en) * 2012-06-29 2014-01-02 Behavioral Recognition Systems, Inc. Adaptive illuminance filter in a video analysis system
CN103077538A (en) * 2013-01-15 2013-05-01 西安电子科技大学 Adaptive tracking method of biomimetic-pattern recognized targets

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
曲仕茹 et al.: "Multi-object detection and tracking in video sequences using a Kalman-BP neural network", Infrared and Laser Engineering *
王擘 et al.: "Research on the application of GA-BP neural networks in radar target tracking", Fire Control Radar Technology *

Cited By (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11948350B2 (en) 2015-09-30 2024-04-02 Fotonation Limited Method and system for tracking an object
CN108399628A (en) * 2015-09-30 2018-08-14 快图有限公司 Method and system for tracking object
CN108399628B (en) * 2015-09-30 2023-09-05 快图有限公司 Method and system for tracking objects
CN105976400A (en) * 2016-05-10 2016-09-28 北京旷视科技有限公司 Object tracking method and device based on neural network model
CN105976400B (en) * 2016-05-10 2017-06-30 北京旷视科技有限公司 Method for tracking target and device based on neural network model
CN108352072A (en) * 2016-08-08 2018-07-31 松下知识产权经营株式会社 Object tracking methods, object tracking apparatus and program
CN108352072B (en) * 2016-08-08 2023-11-03 松下知识产权经营株式会社 Object tracking method, object tracking device, and recording medium
CN106444753A (en) * 2016-09-20 2017-02-22 智易行科技(武汉)有限公司 Intelligent following method for human posture judgment based on artificial neural network
US10452946B2 (en) 2017-01-24 2019-10-22 Ford Global Technologies, Llc Object detection using recurrent neural network and concatenated feature map
US11062167B2 (en) 2017-01-24 2021-07-13 Ford Global Technologies, Llc Object detection using recurrent neural network and concatenated feature map
US10198655B2 (en) 2017-01-24 2019-02-05 Ford Global Technologies, Llc Object detection using recurrent neural network and concatenated feature map
CN111033524A (en) * 2017-09-20 2020-04-17 奇跃公司 Personalized neural network for eye tracking
CN108254741B (en) * 2018-01-16 2021-02-09 中国人民解放军海军航空大学 Target track prediction method based on cyclic neural network
CN108254741A (en) * 2018-01-16 2018-07-06 中国人民解放军海军航空大学 Target track prediction method based on recurrent neural network
CN108460829B (en) * 2018-04-16 2019-05-24 广州智能装备研究院有限公司 Three-dimensional image registration method for AR systems
CN108460829A (en) * 2018-04-16 2018-08-28 广州智能装备研究院有限公司 Three-dimensional image registration method for AR systems
CN109903315A (en) * 2019-03-08 2019-06-18 腾讯科技(深圳)有限公司 Method, apparatus, device and readable storage medium for optical flow prediction
CN109903315B (en) * 2019-03-08 2023-08-25 腾讯科技(深圳)有限公司 Method, apparatus, device and readable storage medium for optical flow prediction
CN112345251A (en) * 2020-11-04 2021-02-09 山东科技大学 Mechanical intelligent fault diagnosis method based on signal resolution enhancement
CN112345251B (en) * 2020-11-04 2022-03-04 山东科技大学 Mechanical intelligent fault diagnosis method based on signal resolution enhancement
CN113065638A (en) * 2021-02-27 2021-07-02 华为技术有限公司 Neural network compression method and related equipment thereof
CN114152189A (en) * 2021-11-09 2022-03-08 武汉大学 Four-quadrant detector light spot positioning method based on feedforward neural network

Also Published As

Publication number Publication date
CN104299245B (en) 2017-12-26

Similar Documents

Publication Publication Date Title
CN104299245A (en) Augmented reality tracking method based on neural network
CN114782691B (en) Robot target identification and motion detection method based on deep learning, storage medium and equipment
CN108369643B (en) Method and system for 3D hand skeleton tracking
CN106650687B (en) Posture correction method based on depth information and skeleton information
JP7178396B2 (en) Method and computer system for generating data for estimating 3D pose of object included in input image
CN104915970B (en) Multi-object tracking method based on track association
CN102867311B (en) Target tracking method and target tracking device
CN102722697B (en) Visual target tracking method for autonomous navigation and landing of unmanned aerial vehicles
Barranco et al. Contour motion estimation for asynchronous event-driven cameras
CN103514441B (en) Facial feature point locating tracking method based on mobile platform
CN108803617A (en) Trajectory prediction method and device
CN106875425A (en) Multi-target tracking system and implementation method based on deep learning
CN113706699B (en) Data processing method and device, electronic equipment and computer readable storage medium
CN106462976A (en) Method of tracking shape in a scene observed by an asynchronous light sensor
CN106169188A (en) Object tracking method based on Monte Carlo tree search
CN104598915A (en) Gesture recognition method and gesture recognition device
CN106599994A (en) Gaze estimation method based on a deep regression network
CN108681700A (en) Complex behavior recognition method
Zou et al. Reducing footskate in human motion reconstruction with ground contact constraints
WO2022095514A1 (en) Image detection method and apparatus, electronic device, and storage medium
CN102682452A (en) Human motion tracking method combining generative and discriminative models
CN104035557A (en) Kinect action recognition method based on joint activity
CN111178170B (en) Gesture recognition method and electronic equipment
CN110827320B (en) Target tracking method and device based on time sequence prediction
CN106780560A (en) Visual tracking method for a bionic robotic fish based on feature fusion and particle filtering

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant