CN110188753A - Target tracking algorithm based on a densely connected convolutional neural network - Google Patents

Target tracking algorithm based on a densely connected convolutional neural network

Info

Publication number
CN110188753A
Authority
CN
China
Prior art keywords
feature map
network
neural network
convolutional neural network
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910425297.9A
Other languages
Chinese (zh)
Inventor
武传营
李凡平
石柱国
李得洋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Qingdao Isa Data Technology Co Ltd
Beijing Yisa Technology Co Ltd
Qingdao Yisa Data Technology Co Ltd
Original Assignee
Qingdao Isa Data Technology Co Ltd
Beijing Yisa Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Qingdao Isa Data Technology Co Ltd, Beijing Yisa Technology Co Ltd filed Critical Qingdao Isa Data Technology Co Ltd
Priority to CN201910425297.9A priority Critical patent/CN110188753A/en
Publication of CN110188753A publication Critical patent/CN110188753A/en
Pending legal-status Critical Current

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/04: Architecture, e.g. interconnection topology
    • G06N3/045: Combinations of networks
    • G06N3/08: Learning methods
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00: Geometric image transformations in the plane of the image
    • G06T3/40: Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4007: Scaling of whole images or parts thereof based on interpolation, e.g. bilinear interpolation
    • G06T5/00: Image enhancement or restoration
    • G06T5/70: Denoising; Smoothing
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/20: Image preprocessing
    • G06V10/25: Determination of region of interest [ROI] or a volume of interest [VOI]

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a target tracking algorithm based on a densely connected convolutional neural network, comprising the following steps: S101, extracting input image features: a densely connected convolutional neural network is used in advance as the feature extraction network to obtain the output image features; S103, processing the output feature maps: bilinear interpolation is performed on the extracted output feature maps to obtain bilinearly interpolated feature maps; S105, computing the acquired feature maps. Beneficial effects: the present invention uses a densely connected convolutional neural network in place of the feature extraction network of the conventional Siamese convolutional network, giving the network stronger feature extraction ability; bilinear interpolation of the feature maps output by the feature extraction network raises the feature map resolution and improves the localization accuracy of the tracking algorithm; in addition, an RPN layer is added on top of the conventional Siamese convolutional network, enhancing the tracking algorithm's ability to distinguish the target from the background.

Description

Target tracking algorithm based on a densely connected convolutional neural network
Technical field
The present invention relates to the technical fields of deep learning and computer vision, and in particular to a target tracking algorithm based on a densely connected convolutional neural network.
Background technique
Target tracking is an important branch of computer vision and is widely used in fields such as autonomous driving, video surveillance and robotics. Conventional tracking algorithms rely on hand-crafted features and correlation filtering (e.g. KCF, TLD); they achieve high frame rates, but their accuracy and robustness are low, making it difficult to meet application requirements. In recent years, with the rise of artificial intelligence and deep learning, convolutional neural network algorithms have entered the field of target tracking and achieved remarkable performance. Among them, algorithm frameworks based on Siamese convolutional networks have attracted great attention at top international computer vision conferences and tracking challenges owing to their good performance and concise network structure.
The RPN (region proposal network) layer was first proposed in the object detection algorithm Faster R-CNN. It replaces traditional region proposal generation algorithms (such as sliding windows) and is used to generate the candidate regions required by the detection algorithm. Its main advantage is high computational efficiency with no redundant computation; it can generate high-quality candidate regions and greatly improves the performance of related two-stage detection algorithms.
The densely connected convolutional network (DenseNet) is a novel convolutional neural network structure proposed in the best paper of CVPR 2017. It draws on ResNet and makes innovative improvements; its network structure is simple yet highly efficient. DenseNet uses dense connections between convolutional layers to strengthen the reuse and propagation of features in the convolutional network, addressing many pain points of deep networks such as vanishing gradients and large parameter counts. However, the feature maps obtained by a feature extraction network usually carry high-level semantic information while their resolution is small, so the original location information of the image is severely lost, which affects target localization accuracy.
No effective solution has yet been proposed for the above problems in the related art.
Summary of the invention
In view of the problems in the related art, the present invention proposes a target tracking algorithm based on a densely connected convolutional neural network. On the basis of the conventional Siamese convolutional network algorithm, DenseNet replaces the traditional AlexNet as the feature extraction network and an RPN layer is added, further improving the tracking algorithm's ability to distinguish the target from the background; at the same time, the traditional tracking task is converted into a one-shot detection task, improving the operating efficiency of the inference stage of the algorithm, thereby overcoming the above technical problems in the related art.
The technical scheme of the present invention is realized as follows:
A target tracking algorithm based on a densely connected convolutional neural network, comprising the following steps:
S101, extracting input image features: a densely connected convolutional neural network is used in advance as the feature extraction network to obtain the output image features;
S103, processing the output feature maps: bilinear interpolation is performed on the extracted output feature maps to obtain bilinearly interpolated feature maps;
S105, computing the acquired feature maps: the bilinearly interpolated feature maps are fed into the classification branch and the regression branch of the RPN layer respectively, the outputs of the two branches are filtered to distinguish the tracking target image from the tracking region image, and the algorithm outputs the coordinate of the point with the highest target response value on the feature map as the final result.
Further, in step S101 the image features are extracted using the following formula:
X_e = H_e([X_0, X_1, ..., X_{e-1}]), where
H_e is the feature extraction operation of the network, [X_0, X_1, ..., X_{e-1}] denotes concatenating the feature maps of layers 0 through e-1 along the channel dimension, and X_e is the output of the feature extraction network.
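As context for this formula, the following is a minimal PyTorch-style sketch of a dense block in which every layer H_e consumes the channel-wise concatenation of all earlier feature maps; the layer count, growth rate and BN-ReLU-Conv composition are illustrative assumptions borrowed from the DenseNet design, not parameters fixed by the invention.

```python
import torch
import torch.nn as nn

class DenseBlock(nn.Module):
    """Minimal dense block: each layer consumes the concatenation of all previous feature maps."""
    def __init__(self, in_channels=64, growth_rate=32, num_layers=4):
        super().__init__()
        self.layers = nn.ModuleList()
        for e in range(num_layers):
            # H_e: BN -> ReLU -> 3x3 conv, applied to [X_0, ..., X_{e-1}]
            self.layers.append(nn.Sequential(
                nn.BatchNorm2d(in_channels + e * growth_rate),
                nn.ReLU(inplace=True),
                nn.Conv2d(in_channels + e * growth_rate, growth_rate,
                          kernel_size=3, padding=1, bias=False),
            ))

    def forward(self, x):
        features = [x]                                      # X_0
        for layer in self.layers:
            x_e = layer(torch.cat(features, dim=1))         # X_e = H_e([X_0, ..., X_{e-1}])
            features.append(x_e)
        return torch.cat(features, dim=1)                   # dense-block output


# Example: one dense block applied to a 31x31 feature map with 64 input channels
block = DenseBlock(in_channels=64, growth_rate=32, num_layers=4)
out = block(torch.randn(1, 64, 31, 31))
print(out.shape)  # torch.Size([1, 192, 31, 31])
```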
Further, the output image features of step S101 can be expressed as φ(z) and φ(x), where
z is the target image, x is the tracking region image, and φ(·) is the feature extraction operation of the densely connected convolutional neural network.
Further, φ(z) and φ(x) are fed into the classification branch and the regression branch of the RPN layer respectively and the outputs of the two branches are filtered, which is expressed by the following formulas:
A^cls_(w×h×2k) = [φ(x)]_cls * [φ(z)]_cls,  A^reg_(w×h×4k) = [φ(x)]_reg * [φ(z)]_reg
where A is the feature map output by the RPN, w and h denote the width and height of the output feature map, k is the number of bounding boxes of different scales and aspect ratios to which each pixel of the feature map is mapped in the original image, and * denotes the correlation filtering operation.
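The correlation-filter form of this RPN head can be sketched as follows, treating the template features [φ(z)] as convolution kernels that are slid over the search-region features [φ(x)], which is the usual Siamese-RPN construction; the channel counts, kernel sizes and the plain (non-depthwise) cross-correlation used here are assumptions for illustration rather than the exact configuration of the invention.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SiameseRPNHead(nn.Module):
    """RPN head in which template features become correlation kernels for the search-region features."""
    def __init__(self, feat_channels=256, k=5):
        super().__init__()
        self.k = k
        # adjust layers producing [phi(z)]_cls, [phi(z)]_reg (kernels) and [phi(x)]_cls, [phi(x)]_reg
        self.template_cls = nn.Conv2d(feat_channels, feat_channels * 2 * k, kernel_size=3)
        self.template_reg = nn.Conv2d(feat_channels, feat_channels * 4 * k, kernel_size=3)
        self.search_cls = nn.Conv2d(feat_channels, feat_channels, kernel_size=3)
        self.search_reg = nn.Conv2d(feat_channels, feat_channels, kernel_size=3)

    def forward(self, z_feat, x_feat):
        # batch size 1 is assumed to keep the kernel reshaping simple
        c = z_feat.size(1)
        kernel_cls = self.template_cls(z_feat)
        kernel_reg = self.template_reg(z_feat)
        kernel_cls = kernel_cls.view(2 * self.k, c, kernel_cls.size(2), kernel_cls.size(3))
        kernel_reg = kernel_reg.view(4 * self.k, c, kernel_reg.size(2), kernel_reg.size(3))
        x_cls = self.search_cls(x_feat)
        x_reg = self.search_reg(x_feat)
        # A^cls_(w x h x 2k) = [phi(x)]_cls * [phi(z)]_cls, correlation implemented as convolution
        cls_map = F.conv2d(x_cls, kernel_cls)   # (1, 2k, w, h)
        reg_map = F.conv2d(x_reg, kernel_reg)   # (1, 4k, w, h)
        return cls_map, reg_map


# Example shapes: a 6x6 template feature map correlated over a 22x22 search-region feature map
head = SiameseRPNHead(feat_channels=256, k=5)
cls_map, reg_map = head(torch.randn(1, 256, 6, 6), torch.randn(1, 256, 22, 22))
print(cls_map.shape, reg_map.shape)   # torch.Size([1, 10, 17, 17]) torch.Size([1, 20, 17, 17])
```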
Beneficial effects of the present invention:
1. The present invention improves the feature extraction ability of the network by introducing DenseNet, using DenseNet in place of AlexNet as the feature extraction network of the Siamese network. DenseNet is a convolutional neural network with dense connections; its main idea is to connect all layers directly while ensuring maximum information flow between the layers of the network: the input of each layer comes from the outputs of all preceding layers, and the output of the current layer in turn serves as an input to every later convolutional layer. The dense connections between layers not only reduce the computation of each layer and enable feature reuse, but also strengthen feature propagation, alleviate the vanishing-gradient problem of deep networks, and further improve network performance.
2. The present invention raises the resolution of the feature maps through an interpolation algorithm and restores the location information lost during feature extraction, thereby improving the localization accuracy of the algorithm.
3. The present invention adds an RPN layer on top of the conventional Siamese network tracking algorithm to further classify and regress the tracking target and background. The introduced RPN layer consists of a supervised learning part and a correlation filtering part: the supervised learning part consists of a regression branch and a classification branch, which perform foreground/background classification and target box regression, while the correlation filtering part performs the correlation filtering operation on the feature maps of the target and the tracking region to obtain the output response map. At the same time, the present invention converts the traditional tracking task into a one-shot detection task, improving the efficiency of the algorithm.
In conclusion the present invention replaces the feature of the twin convolutional network of tradition to mention using the convolutional neural networks of dense connection Network is taken to make network obtain stronger ability in feature extraction, is made by the characteristic pattern exported to feature extraction network double Linear interpolation improves characteristic pattern resolution ratio, improves tracing algorithm positioning accuracy, in addition on the basis of traditional twin convolutional network RPN layers are added, enhances track algorithm to the separating capacity of target and background.
Detailed description of the invention
In order to explain the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings required in the embodiments are briefly described below. Obviously, the drawings in the following description are only some embodiments of the present invention; for those of ordinary skill in the art, other drawings can be obtained from these drawings without creative effort.
Fig. 1 is a flow diagram of a target tracking algorithm based on a densely connected convolutional neural network according to an embodiment of the present invention;
Fig. 2 is an RPN application diagram of the target tracking algorithm based on a densely connected convolutional neural network according to an embodiment of the present invention;
Fig. 3 is a DenseNet application diagram of the target tracking algorithm based on a densely connected convolutional neural network according to an embodiment of the present invention;
Fig. 4 is a dense block application diagram of the target tracking algorithm based on a densely connected convolutional neural network according to an embodiment of the present invention.
Specific embodiment
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the drawings in the embodiments of the present invention. Obviously, the described embodiments are only a part of the embodiments of the present invention, not all of them. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative effort fall within the scope of protection of the present invention.
According to an embodiment of the present invention, a target tracking algorithm based on a densely connected convolutional neural network is provided.
As shown in Figs. 1-4, the target tracking algorithm based on a densely connected convolutional neural network according to an embodiment of the present invention comprises the following steps:
S101, extracting input image features: a densely connected convolutional neural network is used in advance as the feature extraction network to obtain the output image features;
S103, processing the output feature maps: bilinear interpolation is performed on the extracted output feature maps to obtain bilinearly interpolated feature maps;
S105, computing the acquired feature maps: the bilinearly interpolated feature maps are fed into the classification branch and the regression branch of the RPN layer respectively, the outputs of the two branches are filtered to distinguish the tracking target image from the tracking region image, and the algorithm outputs the coordinate of the point with the highest target response value on the feature map as the final result (a code sketch of this forward pass follows these steps).
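The following is a minimal code sketch of this forward pass (shared feature extraction, bilinear upsampling of the feature maps, RPN correlation, and selection of the peak response). The backbone and rpn_head modules, the upsampling factor and the way the peak coordinate is read out are illustrative assumptions rather than the exact configuration of the embodiment; any densely connected feature extractor (for example a stack of the dense blocks sketched earlier) and the RPN head sketched earlier can be plugged in.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DenseTracker(nn.Module):
    """Toy forward pass: shared backbone -> bilinear upsampling -> RPN correlation -> peak location."""
    def __init__(self, backbone, rpn_head, upsample_factor=2):
        super().__init__()
        self.backbone = backbone            # S101: densely connected feature extractor (shared weights)
        self.rpn_head = rpn_head            # S105: classification + regression branches
        self.upsample_factor = upsample_factor

    def forward(self, template, search):
        # S101: extract features of the target image z and the tracking-region image x
        z_feat = self.backbone(template)
        x_feat = self.backbone(search)
        # S103: bilinear interpolation raises the feature-map resolution
        z_feat = F.interpolate(z_feat, scale_factor=self.upsample_factor,
                               mode='bilinear', align_corners=False)
        x_feat = F.interpolate(x_feat, scale_factor=self.upsample_factor,
                               mode='bilinear', align_corners=False)
        # S105: RPN classification/regression maps; the peak of the response map gives the location
        cls_map, reg_map = self.rpn_head(z_feat, x_feat)
        score = cls_map.max(dim=1).values                     # best anchor score per location
        flat_idx = score.flatten(1).argmax(dim=1)
        peak_y = torch.div(flat_idx, score.size(2), rounding_mode='floor')
        peak_x = flat_idx % score.size(2)
        return cls_map, reg_map, torch.stack((peak_y, peak_x), dim=1)
```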
By means of the above technical solution, the feature extraction ability of the network is improved by introducing DenseNet, which replaces AlexNet as the feature extraction network of the Siamese network. DenseNet is a convolutional neural network with dense connections; its main idea is to connect all layers directly while ensuring maximum information flow between the layers of the network: the input of each layer comes from the outputs of all preceding layers, and the output of the current layer in turn serves as an input to every later convolutional layer. The dense connections between layers not only reduce the computation of each layer and enable feature reuse, but also strengthen feature propagation, alleviate the vanishing-gradient problem of deep networks, and further improve network performance. The interpolation algorithm raises the resolution of the feature maps and restores the location information lost during feature extraction, thereby improving the localization accuracy of the algorithm. In addition, an RPN layer is added on top of the conventional Siamese network tracking algorithm to further classify and regress the tracking target and background; the introduced RPN layer consists of a supervised learning part and a correlation filtering part, where the supervised learning part consists of a regression branch and a classification branch performing foreground/background classification and target box regression, and the correlation filtering part performs the correlation filtering operation on the feature maps of the target and the tracking region to obtain the output response map. At the same time, the present invention converts the traditional tracking task into a one-shot detection task, improving the efficiency of the algorithm.
In addition, in one embodiment, the image features in step S101 are extracted using the following formula:
X_e = H_e([X_0, X_1, ..., X_{e-1}]), where
H_e is the feature extraction operation of the network, [X_0, X_1, ..., X_{e-1}] denotes concatenating the feature maps of layers 0 through e-1 along the channel dimension, and X_e is the output of the feature extraction network.
In addition, in one embodiment, the output image features of step S101 can be expressed as φ(z) and φ(x), where
z is the target image, x is the tracking region image, and φ(·) is the feature extraction operation of the densely connected convolutional neural network.
In addition, in one embodiment, φ(z) and φ(x) are fed into the classification branch and the regression branch of the RPN layer respectively and the outputs of the two branches are filtered, which is expressed by the following formulas:
A^cls_(w×h×2k) = [φ(x)]_cls * [φ(z)]_cls,  A^reg_(w×h×4k) = [φ(x)]_reg * [φ(z)]_reg
where A is the feature map output by the RPN, w and h denote the width and height of the output feature map, k is the number of bounding boxes of different scales and aspect ratios to which each pixel of the feature map is mapped in the original image, and * denotes the correlation filtering operation.
In addition, in one embodiment, with regard to the above bilinear interpolation: its core idea is to perform linear interpolation once in each of two directions. For example, to obtain the value of an unknown function f at a point P = (x, y), assume that the values of f are known at the four points Q11 = (x1, y1), Q12 = (x1, y2), Q21 = (x2, y1) and Q22 = (x2, y2).
Linear interpolation is first performed in the x direction, giving:
f(x, y1) ≈ ((x2 - x)/(x2 - x1)) f(Q11) + ((x - x1)/(x2 - x1)) f(Q21)
f(x, y2) ≈ ((x2 - x)/(x2 - x1)) f(Q12) + ((x - x1)/(x2 - x1)) f(Q22)
Linear interpolation is then performed in the y direction, giving:
f(x, y) ≈ ((y2 - y)/(y2 - y1)) f(x, y1) + ((y - y1)/(y2 - y1)) f(x, y2)
which yields the approximation of f at the point (x, y).
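A minimal standalone sketch of this two-step interpolation is given below; in practice the feature-map upsampling of step S103 would more likely be done with a library routine such as torch.nn.functional.interpolate(..., mode='bilinear'), and the function here only illustrates the arithmetic.

```python
def bilinear_interpolate(f11, f21, f12, f22, x1, x2, y1, y2, x, y):
    """Bilinear interpolation of f at (x, y) from its values at the four corners
    Q11=(x1,y1), Q21=(x2,y1), Q12=(x1,y2), Q22=(x2,y2)."""
    # linear interpolation in the x direction
    f_xy1 = (x2 - x) / (x2 - x1) * f11 + (x - x1) / (x2 - x1) * f21
    f_xy2 = (x2 - x) / (x2 - x1) * f12 + (x - x1) / (x2 - x1) * f22
    # linear interpolation in the y direction
    return (y2 - y) / (y2 - y1) * f_xy1 + (y - y1) / (y2 - y1) * f_xy2


# Example: interpolate at the centre of a unit cell with corner values 1, 2, 3, 4
print(bilinear_interpolate(1.0, 2.0, 3.0, 4.0, 0.0, 1.0, 0.0, 1.0, 0.5, 0.5))  # 2.5
```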
In addition, in one embodiment, with regard to the above regression branch (reg) and classification branch (cls): because an RPN layer is added, a multi-task loss function is used as the loss function of this algorithm, in which the classification branch uses the cross-entropy loss and the regression branch uses the smooth L1 loss. The training process of the algorithm can be divided into the following steps:
Step S1: let Ax, Ay, Aw, Ah denote the centre coordinates and the width and height of an anchor, and Tx, Ty, Tw, Th denote the centre coordinates and the width and height of the ground truth. The normalized distances can then be expressed as:
δ[0] = (Tx - Ax)/Aw,  δ[1] = (Ty - Ay)/Ah,  δ[2] = ln(Tw/Aw),  δ[3] = ln(Th/Ah)
Step S2: the mathematical expression of the smooth L1 loss is:
smooth_L1(x, σ) = 0.5 σ² x² if |x| < 1/σ², and |x| - 1/(2σ²) otherwise.
From step S1, the loss function of the regression branch is then:
Lreg = Σ_{i=0..3} smooth_L1(δ[i], σ)
Step S3: from the above steps S1 and S2 and the definition of the multi-task loss function, the final loss function of the present invention is obtained as:
loss = Lcls + λ Lreg
where Lcls is the cross-entropy loss of the classification branch, Lreg is the smooth L1 loss obtained in step S2, and the hyperparameter λ balances the weights of the classification and regression losses.
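A minimal PyTorch-style sketch of this multi-task loss is given below, assuming the anchor encoding of step S1; the σ parameter of the smooth L1 term, the value of λ and the restriction of the regression loss to positive anchors are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def smooth_l1(x, sigma=3.0):
    """smooth_L1(x, sigma) = 0.5*sigma^2*x^2 if |x| < 1/sigma^2, else |x| - 1/(2*sigma^2)."""
    cond = x.abs() < 1.0 / sigma ** 2
    return torch.where(cond, 0.5 * (sigma * x) ** 2, x.abs() - 0.5 / sigma ** 2)

def rpn_loss(cls_logits, cls_labels, reg_pred, anchors, gt_box, lam=1.0):
    """cls_logits: (N, 2) foreground/background scores per anchor; cls_labels: (N,) long in {0, 1};
    reg_pred: (N, 4) predicted normalized offsets; anchors: (N, 4) and gt_box: (4,) as (cx, cy, w, h)."""
    # step S1: normalized distances between the ground truth and the anchors
    ax, ay, aw, ah = anchors.unbind(dim=1)
    tx, ty, tw, th = gt_box
    delta = torch.stack(((tx - ax) / aw, (ty - ay) / ah,
                         torch.log(tw / aw), torch.log(th / ah)), dim=1)
    # classification branch: cross entropy over foreground/background
    l_cls = F.cross_entropy(cls_logits, cls_labels)
    # regression branch (step S2): smooth L1 on positive anchors only
    pos = cls_labels == 1
    if pos.any():
        l_reg = smooth_l1(reg_pred[pos] - delta[pos]).sum(dim=1).mean()
    else:
        l_reg = reg_pred.sum() * 0.0
    # step S3: multi-task loss, loss = Lcls + lambda * Lreg
    return l_cls + lam * l_reg
```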
In addition, the traditional tracking task can be converted into a one-shot detection task by the following derivation.
Step S1: assume the algorithm uses the average loss L as the loss function of the network; the main goal of training is therefore to find the weight matrix W that minimizes the following expression:
(1/n) Σ_{i=1..n} L(ψ(x_i; W), l_i)
where x_i is the i-th input sample, ψ(x_i; W) is the prediction function, and l_i is the label of the i-th sample.
Step S2: assume the function ω is a feed-forward function that maps (z_i; W') to the weights W; the expression in step S1 can then be written as:
(1/n) Σ_{i=1..n} L(ψ(x_i; ω(z_i; W')), l_i)
where z_i is the i-th template sample, i.e. the target-region image.
Step S3: assume φ is the feature extraction operation of the Siamese network and ζ is the RPN-layer operation; the expression in step S2 can then be converted into:
(1/n) Σ_{i=1..n} L(ζ(φ(x_i; W) * φ(z_i; W)), l_i)
In addition, in one embodiment, with regard to the RPN (region proposal network): in the process of generating candidate regions, the RPN layer uses several strategies to select candidate regions, such as discarding candidate regions too far from the image centre, re-ranking candidate regions with a cosine-window penalty term, and non-maximum suppression (NMS). In the forward inference stage of the algorithm, the target region in the first frame is fed into the template branch to generate the cls and reg convolution kernels; the template branch is then removed, and each remaining frame passes through the detection branch and is convolved with the cls and reg kernels to produce the final feature maps.
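The candidate-selection step at inference time can be sketched as follows; the cosine-window weight, the score-map layout and the use of torchvision's NMS routine are assumptions made for illustration rather than values fixed by the embodiment.

```python
import torch
import torchvision

def select_candidate(cls_map, boxes, window_influence=0.4, iou_thresh=0.6):
    """cls_map: (k, H, W) foreground scores; boxes: (k*H*W, 4) decoded candidate boxes (x1, y1, x2, y2),
    in the same flattening order as cls_map. Decoding reg_map against the anchors is omitted here."""
    k, h, w = cls_map.shape
    # cosine-window penalty: re-rank candidates in favour of those near the search-region centre
    hann = torch.outer(torch.hann_window(h, periodic=False),
                       torch.hann_window(w, periodic=False))
    scores = cls_map.reshape(k, -1) * (1.0 - window_influence) + hann.reshape(1, -1) * window_influence
    scores = scores.reshape(-1)
    # non-maximum suppression over the surviving candidates; indices come back sorted by score
    keep = torchvision.ops.nms(boxes, scores, iou_thresh)
    best = keep[0]
    return boxes[best], scores[best]
```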
In addition, in one embodiment, this algorithm was implemented on the VOT2015 dataset and its performance indicators were compared with those of other state-of-the-art tracking algorithms (including the conventional Siamese-network-based tracking algorithm); the results are shown in the table below:
In addition, in one embodiment, this algorithm was implemented on the VOT2016 dataset and its performance indicators were compared with those of other state-of-the-art tracking algorithms (including the conventional Siamese-network-based tracking algorithm); the results are shown in the table below:
In conclusion by means of above-mentioned technical proposal of the invention, it can be achieved that following effect:
1. The present invention improves the feature extraction ability of the network by introducing DenseNet, using DenseNet in place of AlexNet as the feature extraction network of the Siamese network. DenseNet is a convolutional neural network with dense connections; its main idea is to connect all layers directly while ensuring maximum information flow between the layers of the network: the input of each layer comes from the outputs of all preceding layers, and the output of the current layer in turn serves as an input to every later convolutional layer. The dense connections between layers not only reduce the computation of each layer and enable feature reuse, but also strengthen feature propagation, alleviate the vanishing-gradient problem of deep networks, and further improve network performance.
2. The present invention raises the resolution of the feature maps through an interpolation algorithm and restores the location information lost during feature extraction, thereby improving the localization accuracy of the algorithm.
3. The present invention adds an RPN layer on top of the conventional Siamese network tracking algorithm to further classify and regress the tracking target and background. The introduced RPN layer consists of a supervised learning part and a correlation filtering part: the supervised learning part consists of a regression branch and a classification branch, which perform foreground/background classification and target box regression, while the correlation filtering part performs the correlation filtering operation on the feature maps of the target and the tracking region to obtain the output response map. At the same time, the present invention converts the traditional tracking task into a one-shot detection task, improving the efficiency of the algorithm.
In conclusion the present invention uses densenet to replace alexnet as the feature extraction network of twin convolutional network, mention The high performance of network, while bilinear interpolation has been carried out to the characteristic pattern of output, the resolution ratio of characteristic pattern is improved, sufficiently The information in characteristic pattern is utilized.And RPN layers are increased on the basis of twin convolutional network structure, is increasing track algorithm While to the separating capacity of target and background, traditional tracing task is converted into single sample (one-shot) Detection task, Improving the operational efficiency of algorithm, accuracy and speed has great promotion compared to traditional track algorithm.
The above is only a preferred embodiment of the present invention and is not intended to limit the present invention. Any modification, equivalent replacement, improvement, etc. made within the spirit and principles of the present invention shall be included in the scope of protection of the present invention.

Claims (4)

1. A target tracking algorithm based on a densely connected convolutional neural network, characterized by comprising the following steps:
S101, extracting input image features: a densely connected convolutional neural network is used in advance as the feature extraction network to obtain the output image features;
S103, processing the output feature maps: bilinear interpolation is performed on the extracted output feature maps to obtain bilinearly interpolated feature maps;
S105, computing the acquired feature maps: the bilinearly interpolated feature maps are fed into the classification branch and the regression branch of the RPN layer respectively, the outputs of the two branches are filtered to distinguish the tracking target image from the tracking region image, and the algorithm outputs the coordinate of the point with the highest target response value on the feature map as the final result.
2. The target tracking algorithm based on a densely connected convolutional neural network according to claim 1, characterized in that in step S101 the image features are extracted using the following formula:
X_e = H_e([X_0, X_1, ..., X_{e-1}]), where
H_e is the feature extraction operation of the network, [X_0, X_1, ..., X_{e-1}] denotes concatenating the feature maps of layers 0 through e-1 along the channel dimension, and X_e is the output of the feature extraction network.
3. The target tracking algorithm based on a densely connected convolutional neural network according to claim 1, characterized in that the output image features of step S101 can be expressed as φ(z) and φ(x), where
z is the target image, x is the tracking region image, and φ(·) is the feature extraction operation of the densely connected convolutional neural network.
4. The target tracking algorithm based on a densely connected convolutional neural network according to claim 3, characterized in that φ(z) and φ(x) are fed into the classification branch and the regression branch of the RPN layer respectively and the outputs of the two branches are filtered, using the following formulas:
A^cls_(w×h×2k) = [φ(x)]_cls * [φ(z)]_cls,  A^reg_(w×h×4k) = [φ(x)]_reg * [φ(z)]_reg
where A is the feature map output by the RPN, w and h denote the width and height of the output feature map, k is the number of bounding boxes of different scales and aspect ratios to which each pixel of the feature map is mapped in the original image, and * denotes the correlation filtering operation.
CN201910425297.9A 2019-05-21 2019-05-21 Target tracking algorithm based on a densely connected convolutional neural network Pending CN110188753A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910425297.9A CN110188753A (en) 2019-05-21 2019-05-21 Target tracking algorithm based on a densely connected convolutional neural network

Publications (1)

Publication Number Publication Date
CN110188753A true CN110188753A (en) 2019-08-30

Family

ID=67717107

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910425297.9A Pending CN110188753A (en) 2019-05-21 2019-05-21 Target tracking algorithm based on a densely connected convolutional neural network

Country Status (1)

Country Link
CN (1) CN110188753A (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108182388A (en) * 2017-12-14 2018-06-19 哈尔滨工业大学(威海) Image-based moving target tracking method
CN109035558A (en) * 2018-06-12 2018-12-18 武汉市哈哈便利科技有限公司 Online-learning commodity recognition system for self-service cabinets
CN109410251A (en) * 2018-11-19 2019-03-01 南京邮电大学 Target tracking method based on densely connected convolutional network
CN109726657A (en) * 2018-12-21 2019-05-07 万达信息股份有限公司 Deep-learning-based scene text sequence recognition method

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
BO LI et al.: "High Performance Visual Tracking with Siamese Region Proposal Network", 2018 IEEE Conference on Computer Vision and Pattern Recognition *
BO LI et al.: "SiamRPN++: Evolution of Siamese Visual Tracking with Very Deep Networks", arXiv *
GAO HUANG et al.: "Densely Connected Convolutional Networks", arXiv *
MOHAMED H. ABDELPAKEY et al.: "DensSiam: End-to-End Densely-Siamese Network with Self-Attention Model for Object Tracking", arXiv *
郭智 et al.: "Aircraft target detection method for remote sensing images based on deep convolutional neural networks", Journal of Electronics & Information Technology *

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110689556A (en) * 2019-09-09 2020-01-14 苏州臻迪智能科技有限公司 Tracking method and device and intelligent equipment
CN110689559A (en) * 2019-09-30 2020-01-14 长安大学 Visual target tracking method based on dense convolutional network characteristics
CN110689559B (en) * 2019-09-30 2022-08-12 长安大学 Visual target tracking method based on dense convolutional network characteristics
CN112749602A (en) * 2019-10-31 2021-05-04 北京市商汤科技开发有限公司 Target query method, device, equipment and storage medium
CN111091585A (en) * 2020-03-19 2020-05-01 腾讯科技(深圳)有限公司 Target tracking method, device and storage medium
CN111091585B (en) * 2020-03-19 2020-07-17 腾讯科技(深圳)有限公司 Target tracking method, device and storage medium
CN111415373A (en) * 2020-03-20 2020-07-14 北京以萨技术股份有限公司 Target tracking and segmenting method, system and medium based on twin convolutional network
CN111724409A (en) * 2020-05-18 2020-09-29 浙江工业大学 Target tracking method based on densely connected twin neural network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20190830