CN109461162A - Method for segmenting a target in an image - Google Patents

Method for segmenting a target in an image

Info

Publication number
CN109461162A
CN109461162A CN201811478643.1A CN201811478643A CN 109461162 A
Authority
CN
China
Prior art keywords
segmentation
target
network
shape parameter
result
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201811478643.1A
Other languages
Chinese (zh)
Other versions
CN109461162B (en)
Inventor
Zhang Yongdong (张勇东)
Min Shaobo (闵少波)
Xie Hongtao (谢洪涛)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Science and Technology of China USTC
Original Assignee
University of Science and Technology of China USTC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Science and Technology of China USTC filed Critical University of Science and Technology of China USTC
Priority to CN201811478643.1A priority Critical patent/CN109461162B/en
Publication of CN109461162A publication Critical patent/CN109461162A/en
Application granted granted Critical
Publication of CN109461162B publication Critical patent/CN109461162B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/12Edge-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10056Microscopic image
    • G06T2207/10061Microscopic image from scanning electron microscope
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a method for segmenting a target in an image, comprising: processing an input image with a trained multi-task fully convolutional network to obtain a segmentation result and a shape-parameter prediction result; refining the shape-parameter prediction result with a split max pooling operation; and, based on a piecewise fusion strategy, using the refined shape-parameter prediction to optimize the segmentation result and thereby segment the target. By realizing target segmentation under a shape constraint, the method smooths segmentation edges and resolves adhesion (touching-object) problems; as verified on different biological datasets, its segmentation quality is clearly superior to traditional schemes.

Description

Method for segmenting a target in an image
Technical field
The present invention relates to the technical field of image processing, and in particular to a method for segmenting a target in an image.
Background technique
Target segmentation algorithms have attracted extensive attention in recent years. The task is to separate a region of interest from an image and assign it a label distinct from the background. Since target segmentation is one of the foundations of scene understanding, the task has broad application scenarios in fields such as autonomous driving and medical image analysis.
Among the many target segmentation methods, convolutional neural networks are widely applied to extract image semantic information. By simulating the structure of human visual perception, a convolutional neural network can autonomously learn the feature representation best suited to the task requirements, thereby achieving a better segmentation result. However, current methods still cannot solve the problems of rough edges and adhesion (touching targets) in target segmentation.
Summary of the invention
The object of the present invention is to provide a method for segmenting a target in an image that can smooth segmentation edges and solve the adhesion problem.
The object of the present invention is achieved through the following technical solution:
A method for segmenting a target in an image, characterized by comprising:
processing an input image with a trained multi-task fully convolutional network to obtain a segmentation result and a shape-parameter prediction result;
refining the shape-parameter prediction result with a split max pooling operation;
based on a piecewise fusion strategy, optimizing the segmentation result with the refined shape-parameter prediction, thereby segmenting the target.
As can be seen from the technical solution provided above, by realizing target segmentation under a shape constraint, as verified on different biological datasets, the method smooths segmentation edges and solves adhesion problems, with segmentation quality clearly superior to traditional schemes.
Detailed description of the invention
To illustrate the technical solutions of the embodiments of the present invention more clearly, the accompanying drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the invention; those of ordinary skill in the art can derive other drawings from them without creative effort.
Fig. 1 is a flowchart of the method for segmenting a target in an image provided by an embodiment of the present invention;
Fig. 2 is a schematic diagram of the split max pooling operation provided by an embodiment of the present invention;
Fig. 3 is a schematic diagram of the piecewise fusion strategy provided by an embodiment of the present invention.
Specific embodiment
The technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only a part of the embodiments of the invention, not all of them. All other embodiments obtained by those of ordinary skill in the art based on these embodiments without creative effort fall within the protection scope of the present invention.
An embodiment of the present invention is a method for segmenting a target in an image that explicitly incorporates shape prior knowledge of the target into the network structure. It mainly comprises three parts: a multi-task fully convolutional network, split max pooling, and a piecewise fusion strategy. The multi-task network uses a fully convolutional network (FCN) model, generally with VGG-16-based feature extraction. To express the shape constraint explicitly in the network, the multi-task FCN simultaneously segments the image and predicts one group of shape parameters for each target object in it. By assigning the shape parameters different definitions, such as angle, length, and width, they can describe a standard shape, for example an ellipse. The resulting segmentation and shape parameters complement and optimize each other, ultimately smoothing edges and separating adhered targets. However, shape parameters are in practice difficult to predict accurately, so the split max pooling operation is used to improve their prediction accuracy. By analyzing the correlation between the segmentation result and the parameter prediction, split max pooling discards unreliable shape predictions and retains those more likely to be accurate, so that they can better optimize the segmentation result. Finally, the piecewise fusion strategy uses the predicted target shape parameters to optimize the segmentation result. Typically, lesions in biological data cause target shapes to deviate considerably from the shape prior; in such cases, the shape obtained from the segmentation result is more reliable, because the shape parameters describe an over-standardized target shape. For most normal data, conversely, the shapes obtained from the shape parameters are often highly informative. Based on these considerations, the proposed piecewise fusion strategy adaptively retains the variability of the segmentation result and the regularity of the shape parameters, optimizing the final segmentation result as far as possible. The scheme of the embodiment mainly contributes the following three points:
1) A target segmentation algorithm that effectively introduces a shape constraint into the network.
2) A split max pooling operation that optimizes both the segmentation and the parameter-prediction branches of the multi-task network.
3) A piecewise fusion strategy that flexibly optimizes the segmentation result with the predicted shape constraint.
As shown in Fig. 1, the method for segmenting a target in an image provided by an embodiment of the present invention mainly includes the following steps:
1. Process the input image with the trained multi-task fully convolutional network to obtain a segmentation result and a shape-parameter prediction result.
In the embodiment, the multi-task fully convolutional network (Multi-task FCN) comprises seven groups of convolutional layers; each group contains multiple convolutional layers and ReLU activation functions, and a max pooling layer is inserted between groups. Within each of the first five convolutional groups the number of convolution kernels is the same, the groups are connected in series, and as the network deepens the kernel count of successive groups increases. Let the first five groups be reduced to the ConvNet in Fig. 1 and define the feature map they produce as Xi; the segmentation result P and the shape-parameter prediction {T} are then predicted from Xi by the remaining two groups of convolutional layers.
Illustratively, this can be realized with a VGG-16 structure, where the output channel counts of the five convolutional groups are, in order: 64, 128, 256, 512, 512.
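The group layout described above can be sketched numerically. The helper below is purely illustrative (it is not part of the patent); it only assumes that the convolutions inside a group preserve the spatial size and that each inter-group 2 × 2 max pool halves it:

```python
def backbone_shapes(h, w):
    """Return (channels, height, width) after each of the five
    VGG-16-style convolutional groups (64/128/256/512/512 channels),
    assuming convs keep H x W and each 2x2 pool between groups halves it."""
    channels = [64, 128, 256, 512, 512]
    shapes = []
    for c in channels:
        shapes.append((c, h, w))   # conv layers preserve the spatial size
        h, w = h // 2, w // 2      # 2x2 max pool between groups
    return shapes

# e.g. for a 1024 x 1024 input image
print(backbone_shapes(1024, 1024))
```

The two prediction branches (for P and {T}) would then operate on the final feature map.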
In the embodiment, each element of the segmentation result is a value in [0, 1]: greater than 0.5 indicates that the pixel belongs to the target region, and less than 0.5 indicates that the pixel belongs to the background region.
In the embodiment, during the training stage of the multi-task fully convolutional network, an elliptical shape is assumed as prior knowledge. The predicted shape parameters of the i-th pixel are denoted Ti; each training step yields a shape-parameter prediction {θ, μc, νc, a, b}, where θ denotes the tilt angle of the ellipse, (μc, νc) denote the coordinates of its center, and a and b denote the lengths of its major and minor axes. Each pixel ultimately has these five shape parameters, expressed respectively as:
where {μ, ν} are the spatial coordinates of the pixel, and H and W are the height and width of the image.
In the embodiment, the target loss function of the multi-task fully convolutional network is expressed as:
where N is the number of pixels, Pi is the segmentation prediction for the i-th pixel, Pi ∈ P; Tk,i denotes the k-th shape parameter in Ti; Pi* and Tk,i* denote the true values of Pi and Tk,i respectively; λ is a balance parameter; Lcls is the softmax classification loss, Lcls = −Σi Pi* ln Pi; and Lreg is the smooth-L1 constraint error common in object detection:
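The exact loss formula appears only as an image in the original; the NumPy sketch below therefore only assumes the standard combination the text names: a per-pixel classification loss plus λ times a smooth-L1 error over the five shape parameters. All function names are illustrative.

```python
import numpy as np

def smooth_l1(x):
    """Smooth-L1 (the L1 smoothness constraint common in object detection):
    0.5*x^2 for |x| < 1, and |x| - 0.5 otherwise."""
    ax = np.abs(x)
    return np.where(ax < 1.0, 0.5 * x * x, ax - 0.5)

def multitask_loss(P, P_star, T, T_star, lam=1.0):
    """Combined per-pixel loss: cross-entropy on the segmentation scores
    plus lam times smooth-L1 over the shape parameters.
    P, P_star: (N,) predicted / ground-truth foreground probabilities.
    T, T_star: (N, 5) predicted / ground-truth shape parameters."""
    eps = 1e-7  # numerical guard for log
    l_cls = -(P_star * np.log(P + eps) + (1.0 - P_star) * np.log(1.0 - P + eps))
    l_reg = smooth_l1(T - T_star).sum(axis=1)
    return float(np.mean(l_cls + lam * l_reg))
```

With perfect predictions the loss approaches zero, and the λ knob trades the segmentation branch off against the shape-regression branch, matching the role of the balance parameter in the text.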
In addition, during the training stage of the multi-task fully convolutional network, the data in the dataset undergo data augmentation operations such as flipping, scaling, and random cropping; the data are then shuffled, batched (e.g., batch size = 8), and resized to a fixed dimension to constitute the training set.
During training, the network parameters are trained with stochastic gradient descent as the optimizer. Illustratively, the learning-rate decay policy is exponential decay with an initial learning rate of 0.01. In addition, the dropout ratio is 0.5, and the coefficient of the L2 penalty term in the regularization is 0.0005.
The initial values of all hyperparameters in the network use the MSRA initialization method, whose principle is to initialize each layer's weight parameters to follow a zero-mean normal distribution with variance 2/n, where n is the number of weight parameters of that layer. The L2 penalty of the network's regularization punishes the parameters based on a Gaussian prior assumption, so in end-to-end training this initialization method improves training efficiency and network performance.
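A minimal sketch of the MSRA (He) initialization just described, assuming n is taken as the fan-in of the layer (the patent's own formula is an image and is not reproduced here; the helper name is illustrative):

```python
import numpy as np

def msra_init(shape, rng=None):
    """MSRA / He initialization: draw weights from N(0, 2/n),
    where n is taken here as the fan-in (product of all dims but the first)."""
    rng = rng if rng is not None else np.random.default_rng(0)
    n = int(np.prod(shape[1:])) if len(shape) > 1 else shape[0]
    return rng.normal(0.0, np.sqrt(2.0 / n), size=shape)

# e.g. a layer with 64 output units and 3*3*32 = 288 inputs
W = msra_init((64, 288))
```

The resulting sample standard deviation is close to sqrt(2/288) ≈ 0.083, which keeps activation variance roughly constant through ReLU layers.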
2. Refine the shape-parameter prediction result with the split max pooling (Split Max Pooling) operation. Once the segmentation result P and the shape-parameter prediction {T} are obtained, they are refined with split max pooling.
The pooling formula of the split max pooling operation is:
where Ni is the neighborhood of the i-th pixel; illustratively, Ni is the region of 3 × 3 pixels around pixel i. The split max pooling operation propagates the largest Pi in Ni, together with its corresponding Ti, to the next network layer (the piecewise fusion layer in Fig. 1), thereby refining the shape parameters.
An example of the split max pooling operation is shown in Fig. 2: the inputs T and P are each traversed with a 3 × 3 sliding window; within each window, the T value (12) at the position of the largest P value (0.7) is taken as the output, while P itself is retained directly.
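The Fig. 2 behavior can be sketched as follows; this is an illustrative NumPy reading of the operation (stride 1, same output size, scalar shape values), not the patent's implementation:

```python
import numpy as np

def split_max_pool(P, T, k=1):
    """For each pixel, find the neighbor in the (2k+1)x(2k+1) window with
    the largest segmentation score P and propagate both that score and
    its shape parameter T, as in the split max pooling of Fig. 2."""
    H, W = P.shape
    P_out = np.empty_like(P)
    T_out = np.empty_like(T)
    for i in range(H):
        for j in range(W):
            y0, y1 = max(0, i - k), min(H, i + k + 1)
            x0, x1 = max(0, j - k), min(W, j + k + 1)
            win = P[y0:y1, x0:x1]
            dy, dx = np.unravel_index(np.argmax(win), win.shape)
            P_out[i, j] = win[dy, dx]              # max segmentation score
            T_out[i, j] = T[y0 + dy, x0 + dx]      # its shape parameter
    return P_out, T_out
```

Reliable shape predictions (those co-located with high segmentation confidence) thus replace the unreliable ones in their neighborhood before fusion.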
The split max pooling operation also participates in the training of the multi-task fully convolutional network, in which:
in the back-propagation process, the expression for Ti is:
where L denotes the aforementioned target loss function of the multi-task fully convolutional network, and M is the number of pixels inside the window Ni;
in the back-propagation process, the expression for Pi is:
where α denotes a hyperparameter whose optimal value is determined experimentally. Note that the gradient of Pi consists of two parts: one conducted through the direct output P′ in Fig. 2, and one conducted through the output T in Fig. 2. The first term of the formula is therefore the gradient conducted through the output P′, and the second term is the gradient conducted back through T.
As shown in Fig. 2, P′ is pixel-wise identical to P (the segmentation prediction P′i equals Pi); therefore the gradient with respect to P′ is identical in content to the gradient with respect to P.
Those skilled in the art will understand that during forward propagation, after data is input, two outputs are produced: one is the data itself, and the other is data influenced by the input. Specifically in the present invention, the input gradient produces two outputs: one is itself (written differently only to distinguish it), and the other is the gradient of the shape-parameter prediction influenced by the input gradient.
The formulas above are back-propagation expressions, in which input and output are exchanged: the left-hand side of the equality is the output and the right-hand side is the input, opposite to the forward-propagation process.
3. Based on the piecewise fusion (Piecewise Fusion) strategy, optimize the segmentation result with the refined shape-parameter prediction, thereby segmenting the target.
The embodiment does not optimize all of the segmentation result P, since doing so would make every segmented shape tend toward the standard shape; the shape parameters are therefore used to optimize only part of the Pi. Two thresholds τ1 and τ2 are set, and only the pixels lying between the two thresholds are optimized:
where:
dμ = cos(θ)(μ − μc) + sin(θ)(ν − νc)
dν = −sin(θ)(μ − μc) + cos(θ)(ν − νc).
The piecewise fusion strategy is shown in Fig. 3; the values of the thresholds τ1 and τ2 given there are only examples.
To verify the effect of the above scheme of the embodiment, experiments were conducted on two standard biological datasets.
1) Synaptic vesicle dataset: this dataset contains 100 high-resolution (1019 × 1053) electron-microscope images of nerve synapses, with expert-annotated labels as supervision. After data cropping, 7322 training samples and 1465 test samples were produced. The target objects are the vesicle structures in the nerve synapses; most vesicle structures present a fairly regular elliptical shape.
2) Gland Segmentation Challenge Contest: this dataset contains images of human glands, including both lesioned and normal cases, with 85 images for training and 80 for testing. Normal human glands are approximately elliptical, whereas the shapes of lesioned glands are less regular. The task is to segment all gland regions in the target image.
After 240 epochs of training, the network achieved the current best results on both standard biological datasets. On the two biomedical datasets (the cryo-electron-microscopy data and the gland cell data), the segmentation IoU (intersection over union) was 83.77% and 85.60% respectively, clearly superior to traditional schemes.
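For reference, the IoU metric behind those figures is simply the ratio of intersection area to union area of the predicted and ground-truth masks; a minimal NumPy version (illustrative, not from the patent):

```python
import numpy as np

def iou(pred, gt):
    """Intersection over union between two binary masks: the area where
    both are foreground divided by the area where either is foreground."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return float(inter / union) if union else 1.0  # two empty masks agree
```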
Through the above description of the embodiments, those skilled in the art can clearly understand that the above embodiments can be implemented by software, or by software plus a necessary general hardware platform. Based on this understanding, the technical solution of the above embodiments can be embodied in the form of a software product, which can be stored in a non-volatile storage medium (such as a CD-ROM, USB flash drive, or removable hard disk) and includes instructions that cause a computing device (a personal computer, server, network device, etc.) to execute the methods described in the embodiments of the present invention.
The foregoing is only a preferred embodiment of the present invention, but the protection scope of the present invention is not limited thereto. Any changes or substitutions that can be easily conceived by anyone skilled in the art within the technical scope of the present disclosure shall be covered by the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (9)

1. A method for segmenting a target in an image, characterized by comprising:
processing an input image with a trained multi-task fully convolutional network to obtain a segmentation result and a shape-parameter prediction result;
refining the shape-parameter prediction result with a split max pooling operation;
based on a piecewise fusion strategy, optimizing the segmentation result with the refined shape-parameter prediction result, thereby segmenting the target.
2. the method for Target Segmentation in a kind of image according to claim 1, which is characterized in that the full convolution of multitask Network includes seven groups of convolutional layer structures, includes multiple convolutional layers and ReLU activation primitive in every group of structure, is inserted between group and group One maximum pond layer;Convolution nuclear volume inside first five set convolution block layer is the same, they are sequentially connected in series, and with network Deepen, the convolution nuclear volume of difference group can be incremented by successively;
Characteristic pattern X is obtained by first five set convolution block layeri, by remaining two groups of convolution block layers respectively according to characteristic pattern XiIt measures in advance To segmentation result P and form parameter prediction { T }.
3. the method for Target Segmentation in a kind of image according to claim 1 or 2, which is characterized in that in segmentation result Each element is the numerical value of one [0,1], indicates that the pixel belongs to target area if it is greater than 0.5, if it is less than 0.5 Then indicate that the pixel value is background area.
4. the method for Target Segmentation in a kind of image according to claim 1 or 2, which is characterized in that rolled up entirely in multitask The training stage of product network, it is assumed that using elliptical shape as priori knowledge, it is predicted that the form parameter of ith pixel point be denoted as Ti, the form parameter prediction result obtained when training every time is { θ, μc,vc,a,b};Wherein θ indicates elliptical tilt angle;μc, νcIndicate elliptical centre coordinate;A, b indicate elliptical length shaft length;Final each pixel, which has, understands this 5 shape ginsengs Number, respectively indicates are as follows:
Wherein, { μ, ν } is the space coordinate of pixel, and H and W are the length and width of image.
5. the method for Target Segmentation in a kind of image according to claim 4, which is characterized in that the full convolutional network of multitask Target loss function representation are as follows:
Wherein, N is pixel number, PiFor the segmentation predicted value of ith pixel point, Pi∈P;Tk.iIndicate TiIn kth shape Shape parameter;WithCorresponding expression PiWith Tk.iTrue value;λ is balance parameters, LclsIt is softmax Classification Loss, Lreg It is L general in target detection1Smoothness constraint error.
6. the method for Target Segmentation in a kind of image according to claim 4, which is characterized in that in the full convolution net of multitask The training stage of network, by the data in data set carried out comprising turn down, stretch and/or the data augmentation of random cropping operate, Data are upset again, in batches and fixed dimension, thus composing training collection;
When training, network parameter is trained as optimizer using stochastic gradient descent method;For hyper parameters all in network Initial value uses MSRA initial method.
7. the method for Target Segmentation in a kind of image according to claim 2, which is characterized in that separate maximum pondization operation When pond formula are as follows:
Wherein, NiFor the region that ith pixel point closes on, operated by separating maximum pondization by NiMiddle maximum PiAnd its corresponding Ti It propagates downwards and executes segmentation convergence strategy.
8. the method for Target Segmentation in a kind of image according to claim 7, which is characterized in that
In back-propagation process, for TiExpression formula are as follows:
Wherein, L indicates the target loss function of the full convolutional network of multitask;M is NiNumber of pixels inside window;
In back-propagation process, for PiExpression formula are as follows:
Wherein, α indicates a hyper parameter, P 'iContent and PiIt is identical.
9. the method for Target Segmentation in a kind of image according to claim 2, which is characterized in that described based on segmentation fusion Strategy, carrying out Optimized Segmentation result using the form parameter prediction result after optimization includes:
Two threshold taus are set1And τ2, the pixel being among the two threshold values is optimized:
Wherein:
D μ=cos (θ) (μ-μc)+sin(θ)(ν-νc)
Dv=-sin (θ) (μ-μc)+cos(θ)(ν-vc)。
CN201811478643.1A 2018-12-03 2018-12-03 Method for segmenting target in image Active CN109461162B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811478643.1A CN109461162B (en) 2018-12-03 2018-12-03 Method for segmenting target in image


Publications (2)

Publication Number Publication Date
CN109461162A (en) 2019-03-12
CN109461162B (en) 2020-05-12

Family

ID=65612421

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811478643.1A Active CN109461162B (en) 2018-12-03 2018-12-03 Method for segmenting target in image

Country Status (1)

Country Link
CN (1) CN109461162B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110378916A (en) * 2019-07-03 2019-10-25 浙江大学 A kind of TBM image based on multitask deep learning is slagged tap dividing method
CN112749801A (en) * 2021-01-22 2021-05-04 上海商汤智能科技有限公司 Neural network training and image processing method and device

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103413347A (en) * 2013-07-05 2013-11-27 南京邮电大学 Extraction method of monocular image depth map based on foreground and background fusion
CN103839244A (en) * 2014-02-26 2014-06-04 南京第五十五所技术开发有限公司 Real-time image fusion method and device
US20160098833A1 (en) * 2014-10-06 2016-04-07 Technion Research & Development Foundation Limited System and Method for Measurement of Myocardial Mechanical Function
CN105931226A (en) * 2016-04-14 2016-09-07 南京信息工程大学 Automatic cell detection and segmentation method based on deep learning and using adaptive ellipse fitting
CN107492121A (en) * 2017-07-03 2017-12-19 广州新节奏智能科技股份有限公司 A kind of two-dimension human body bone independent positioning method of monocular depth video
CN107506761A (en) * 2017-08-30 2017-12-22 山东大学 Brain image dividing method and system based on notable inquiry learning convolutional neural networks
CN108335306A (en) * 2018-02-28 2018-07-27 北京市商汤科技开发有限公司 Image processing method and device, electronic equipment and storage medium
US20180240235A1 (en) * 2017-02-23 2018-08-23 Zebra Medical Vision Ltd. Convolutional neural network for segmentation of medical anatomical images
CN108664971A (en) * 2018-05-22 2018-10-16 中国科学技术大学 Pulmonary nodule detection method based on 2D convolutional neural networks


Also Published As

Publication number Publication date
CN109461162B (en) 2020-05-12


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant