CN110942095A - Method and system for detecting salient object area - Google Patents

Method and system for detecting salient object area Download PDF

Info

Publication number
CN110942095A
CN110942095A (Application CN201911178510.7A)
Authority
CN
China
Prior art keywords
image
map
detected
primary
carrying
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911178510.7A
Other languages
Chinese (zh)
Inventor
罗永康
王鹏
黎万义
孙佳
席铉洋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Institute of Automation of Chinese Academy of Science
Original Assignee
Institute of Automation of Chinese Academy of Science
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Institute of Automation of Chinese Academy of Science filed Critical Institute of Automation of Chinese Academy of Science
Priority to CN201911178510.7A priority Critical patent/CN110942095A/en
Publication of CN110942095A publication Critical patent/CN110942095A/en
Pending legal-status Critical Current

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/46 Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462 Salient features, e.g. scale invariant feature transforms [SIFT]

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to a method and a system for detecting salient object regions. The detection method comprises the following steps: extracting features from an image to be detected with a multi-layer convolutional network to obtain a multi-level feature map; performing saliency-value nonlinear regression on the multi-level feature map to obtain a primary saliency map; and performing three successive bilinear interpolation operations on the primary saliency map to obtain a final saliency map that matches the image to be detected. By extracting the multi-level feature map with the multi-layer convolutional network, obtaining the primary saliency map through saliency-value nonlinear regression, and obtaining the final saliency map matching the image to be detected through the three bilinear interpolation operations, the method realizes end-to-end feature extraction and improves the efficiency of extracting salient object regions from an image.

Description

Method and system for detecting salient object area
Technical Field
The invention relates to the technical field of image processing, in particular to a method and a system for detecting a salient object area.
Background
In practical applications such as unmanned vehicles, unmanned combat and reconnaissance equipment, and robots, the on-board computer system is often required to respond quickly to rapidly changing scenes. This demands an efficient visual perception capability. The human visual system owes much of its high perceptual efficiency to its attention mechanism. Inspired by the human visual attention mechanism, researchers have therefore proposed saliency detection models. Saliency detection serves as a preprocessing step for visual tasks and is widely applied to high-level complex visual processing tasks in computer vision, image processing, robotics and related fields, such as object detection and recognition, visual target tracking, hyperspectral image classification, image compression, and robot perception.
In recent years, many salient object region detection methods based on deep networks have been proposed and have achieved good detection results. To obtain satisfactory results, existing methods generally rely on complicated pre-processing and post-processing. For example, region extraction by superpixel segmentation or candidate object extraction is time-consuming, and the resulting regions may be under-segmented or over-segmented. Moreover, introducing these region extraction steps complicates the detection pipeline and makes end-to-end saliency detection difficult to achieve. In addition, to obtain an accurate saliency map, such models often include multi-level recursive network structures, so the network structure becomes very complex and the model becomes large, which places high demands on the storage resources of the processing machine; the large number of model parameters also leads to long training and testing times and requires a huge number of labeled training samples.
Disclosure of Invention
In order to solve the above problems in the prior art, that is, to improve the efficiency of extracting the salient object region of the image, the present invention provides a salient object region detection method and system.
In order to solve the technical problems, the invention provides the following scheme:
a salient object region detection method, the detection method comprising:
extracting features from an image to be detected with a multi-layer convolutional network to obtain a multi-level feature map;
performing saliency-value nonlinear regression on the multi-level feature map to obtain a primary saliency map;
and performing three successive bilinear interpolation operations on the primary saliency map to obtain a final saliency map that matches the image to be detected.
Optionally, the multi-layer convolutional network is a Visual Geometry Group network (VGG-16) or a residual network (RES-101).
Optionally, the performing saliency-value nonlinear regression on the multi-level feature map to obtain a primary saliency map specifically includes:
performing feature-map-to-pixel-level saliency-value nonlinear regression on the multi-level feature map using three cascaded fully convolutional layers to obtain the primary saliency map.
Optionally, a rectified linear unit and a dropout layer are connected between the fully convolutional layers of the three-layer cascade.
Optionally, the primary saliency map is obtained according to the following formula:
L(X, Y, θ, β) = Lg(X, Y, θ) + β·Ls(X, Y, θ)
Lg(X, Y, θ) = (1/N) Σ_{i=1..N} Ψ(F(x_i, θ) − y_i)
Ls(X, Y, θ) = (1/N+) Σ_{i: y_i=1} Ψ(F(x_i, θ) − y_i)
wherein L(X, Y, θ, β) is the loss function of the saliency-value nonlinear regression network; N is the number of pixels; X is the image to be detected, X = {x_i | i = 1, 2, ..., N}, where x_i is the detection region corresponding to the i-th pixel of the image to be detected; Y is the ground-truth annotated saliency map, Y = {y_i | i = 1, 2, ..., N}, y_i ∈ {0, 1}, where y_i is the ground-truth salient label corresponding to the i-th pixel; θ is the network parameter; β is a hyper-parameter; Lg(X, Y, θ) is the basic global loss; Ls(X, Y, θ) is the salient-region loss; N+ is the number of pixels in the salient region of the image and N− is the number of pixels in the non-salient region; F(·) denotes the processing of the whole network, and F(x_i, θ) is the regressed saliency value of each pixel; Ψ(·) is the Smooth-L1 function.
Optionally, the size of the primary saliency map is one eighth of the size of the image to be detected.
Optionally, the performing three successive bilinear interpolation operations on the primary saliency map to obtain a final saliency map that matches the image to be detected specifically includes:
determining size information of the image to be detected, wherein the size information comprises the width and the height of the image to be detected;
and scaling up the primary saliency map, by three successive bilinear interpolation operations, until it is exactly consistent with the size information of the image to be detected, to obtain the final saliency map.
In order to solve the technical problems, the invention also provides the following scheme:
a salient object region detection system, the detection system comprising:
the feature extraction unit is used for extracting features from an image to be detected with a multi-layer convolutional network to obtain a multi-level feature map;
the regression processing unit is used for performing saliency-value nonlinear regression on the multi-level feature map to obtain a primary saliency map;
and the interpolation processing unit is used for performing three successive bilinear interpolation operations on the primary saliency map to obtain a final saliency map that matches the image to be detected.
In order to solve the technical problems, the invention also provides the following scheme:
a salient object region detection system, comprising:
a processor; and
a memory arranged to store computer executable instructions that, when executed, cause the processor to:
extract features from an image to be detected with a multi-layer convolutional network to obtain a multi-level feature map;
perform saliency-value nonlinear regression on the multi-level feature map to obtain a primary saliency map;
and perform three successive bilinear interpolation operations on the primary saliency map to obtain a final saliency map that matches the image to be detected.
In order to solve the technical problems, the invention also provides the following scheme:
a computer-readable storage medium storing one or more programs that, when executed by an electronic device including a plurality of application programs, cause the electronic device to:
extract features from an image to be detected with a multi-layer convolutional network to obtain a multi-level feature map;
perform saliency-value nonlinear regression on the multi-level feature map to obtain a primary saliency map;
and perform three successive bilinear interpolation operations on the primary saliency map to obtain a final saliency map that matches the image to be detected.
According to the embodiments of the invention, the invention provides the following technical effects:
according to the method, the multi-level feature map is obtained by extracting the feature map through the multi-layer convolution network, the primary significant map is obtained by carrying out significant value nonlinear regression processing, and the final significant map matched with the image to be detected is obtained by carrying out three times of bilinear interpolation processing, so that end-to-end feature extraction is realized, and the efficiency of extracting the significant object region of the image can be improved.
Drawings
FIG. 1 is a flow chart of a salient object region detection method of the present invention;
fig. 2 is a schematic block diagram of the salient object region detection system of the present invention.
Description of the symbols:
the system comprises a characteristic extraction unit-1, a regression processing unit-2 and a difference value processing unit-3.
Detailed Description
Preferred embodiments of the present invention are described below with reference to the accompanying drawings. It should be understood by those skilled in the art that these embodiments are only for explaining the technical principle of the present invention, and are not intended to limit the scope of the present invention.
The object of the present invention is to provide a salient object region detection method in which a multi-layer convolutional network extracts a multi-level feature map, saliency-value nonlinear regression then yields a primary saliency map, and three successive bilinear interpolation operations produce a final saliency map that matches the image to be detected; end-to-end feature extraction is thereby realized, and the efficiency of extracting salient object regions from an image can be improved.
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in further detail below.
As shown in fig. 1, the salient object region detection method of the present invention includes:
Step 100: extract features from the image to be detected with a multi-layer convolutional network to obtain a multi-level feature map.
Step 200: perform saliency-value nonlinear regression on the multi-level feature map to obtain a primary saliency map.
Step 300: perform three successive bilinear interpolation operations on the primary saliency map to obtain a final saliency map that matches the image to be detected.
In step 100, the multi-layer convolutional network may be a multi-layer network pre-trained on a large-scale dataset such as ImageNet or COCO, for example the Visual Geometry Group network VGG-16 or the residual network RES-101.
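For illustration only, step 100 can be sketched in PyTorch with a pretrained VGG-16 backbone; the stage cut points, input size and use of torchvision are assumptions of this sketch and are not specified by the patent.

    import torch
    from torchvision import models

    # Hypothetical sketch of step 100: tap multi-level feature maps from a
    # VGG-16 backbone pre-trained on ImageNet. The stage boundaries (conv2_2,
    # conv3_3, conv4_3) are illustrative choices, not taken from the patent.
    # (Older torchvision versions use pretrained=True instead of weights=...)
    vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1).features
    stages = [vgg[:9], vgg[9:16], vgg[16:23]]   # output strides 2, 4, 8

    image = torch.randn(1, 3, 320, 320)          # dummy "image to be detected"
    feature_maps, x = [], image
    for stage in stages:
        x = stage(x)
        feature_maps.append(x)

    for f in feature_maps:
        print(tuple(f.shape))   # (1, 128, 160, 160), (1, 256, 80, 80), (1, 512, 40, 40)

The deepest map has 1/8 of the input resolution, which matches the size of the primary saliency map described below.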
In step 200, the performing saliency-value nonlinear regression on the multi-level feature map to obtain a primary saliency map specifically includes:
performing feature-map-to-pixel-level saliency-value nonlinear regression on the multi-level feature map using three cascaded fully convolutional layers to obtain the primary saliency map. In this embodiment, the size of the primary saliency map is one eighth of the size of the image to be detected.
Preferably, in order to reduce over-fitting during training, a rectified linear unit (ReLU) and a dropout layer are connected between the cascaded fully convolutional layers.
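A minimal sketch of such a regression head, continuing the PyTorch sketch above; the kernel sizes, channel widths and dropout rate are illustrative assumptions the patent does not specify.

    import torch.nn as nn

    # Hypothetical sketch of the saliency-value regression head: three cascaded
    # fully convolutional layers with ReLU and dropout between them.
    regression_head = nn.Sequential(
        nn.Conv2d(512, 256, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
        nn.Dropout2d(p=0.5),
        nn.Conv2d(256, 128, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
        nn.Dropout2d(p=0.5),
        nn.Conv2d(128, 1, kernel_size=1),   # one regressed saliency value per pixel
    )

    # primary = regression_head(feature_maps[-1])  # 1/8-resolution primary saliency map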
In order to make full use of the pixels of the salient region of the image during training, the invention adopts the loss function L(X, Y, θ, β) for the saliency-value nonlinear regression network; the network model can be trained by stochastic gradient descent with the objective of minimizing L(X, Y, θ, β).
Specifically, the primary saliency map is obtained according to the following formula:
L(X, Y, θ, β) = Lg(X, Y, θ) + β·Ls(X, Y, θ)
Lg(X, Y, θ) = (1/N) Σ_{i=1..N} Ψ(F(x_i, θ) − y_i)
Ls(X, Y, θ) = (1/N+) Σ_{i: y_i=1} Ψ(F(x_i, θ) − y_i)
wherein L(X, Y, θ, β) is the loss function of the saliency-value nonlinear regression network; N is the number of pixels; X is the image to be detected, X = {x_i | i = 1, 2, ..., N}, where x_i is the detection region corresponding to the i-th pixel of the image to be detected; Y is the ground-truth annotated saliency map, Y = {y_i | i = 1, 2, ..., N}, y_i ∈ {0, 1}, where y_i is the ground-truth salient label corresponding to the i-th pixel; θ is the network parameter; β is a hyper-parameter; Lg(X, Y, θ) is the basic global loss; Ls(X, Y, θ) is the salient-region loss; N+ is the number of pixels in the salient region of the image and N− is the number of pixels in the non-salient region; F(·) denotes the processing of the whole network, and F(x_i, θ) is the regressed saliency value of each pixel; Ψ(·) is the Smooth-L1 function.
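A hedged sketch of this loss and of one stochastic-gradient-descent step follows; the decomposition into a global Smooth-L1 term and a β-weighted salient-region Smooth-L1 term is an assumption inferred from the variable definitions above, not a verbatim transcription of the patent's formulas.

    import torch
    import torch.nn.functional as F

    def saliency_regression_loss(pred, target, beta=1.0):
        """Assumed form of L(X, Y, theta, beta) = Lg + beta * Ls.

        pred, target: tensors of shape (B, 1, H, W); target entries are 0 or 1.
        Lg is a Smooth-L1 loss averaged over all N pixels; Ls is a Smooth-L1
        loss averaged over the N+ pixels of the salient region.
        """
        l_g = F.smooth_l1_loss(pred, target, reduction="mean")   # basic global loss
        salient = target > 0.5                                   # salient-region mask
        if salient.any():
            l_s = F.smooth_l1_loss(pred[salient], target[salient], reduction="mean")
        else:
            l_s = pred.new_zeros(())
        return l_g + beta * l_s

    # One stochastic gradient descent step (model = backbone + regression head):
    # optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
    # loss = saliency_regression_loss(model(images), labels)
    # optimizer.zero_grad(); loss.backward(); optimizer.step()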
In step 300, the performing three successive bilinear interpolation operations on the primary saliency map to obtain a final saliency map that matches the image to be detected specifically includes:
Step 310: determining size information of the image to be detected, wherein the size information comprises the width and the height of the image to be detected;
Step 320: scaling up the primary saliency map, by three successive bilinear interpolation operations, until it is exactly consistent with the size information of the image to be detected, to obtain the final saliency map.
Specifically, the scaling up of the primary saliency map by three successive bilinear interpolation operations until it is exactly consistent with the size information of the image to be detected, to obtain the final saliency map, includes the following steps (an up-scaling sketch in code follows this list):
Step 321: according to the topological background confidence of the primary saliency map, computing the contrast between the primary saliency map and the image to be detected in terms of image color and spatial-position features;
Step 322: performing three successive bilinear interpolation operations on the primary saliency map according to the contrast and the size information, thereby enlarging the primary saliency map into a final saliency map exactly consistent with the size information of the image to be detected.
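As a minimal sketch of the up-scaling in step 322 (the contrast computation of step 321 is omitted), three successive bilinear interpolations enlarge the 1/8-size primary map back to the image size, since 1/8 × 2 × 2 × 2 = 1; the use of torch.nn.functional.interpolate and the exact scale factors are assumptions of this sketch.

    import torch.nn.functional as F

    def upsample_to_image(primary_saliency, image_height, image_width):
        """Three successive bilinear interpolations (assumed 2x, 2x, then exact fit)."""
        x = F.interpolate(primary_saliency, scale_factor=2, mode="bilinear",
                          align_corners=False)                       # 1/8 -> 1/4
        x = F.interpolate(x, scale_factor=2, mode="bilinear",
                          align_corners=False)                       # 1/4 -> 1/2
        x = F.interpolate(x, size=(image_height, image_width), mode="bilinear",
                          align_corners=False)                       # 1/2 -> full size
        return x                                                     # final saliency map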
The invention adopts a simple end-to-end procedure for both training and testing, achieving fast and efficient extraction of salient object regions.
In addition, the invention also provides a salient object region detection system which can improve the efficiency of extracting salient object regions from an image. As shown in fig. 2, the salient object region detection system of the present invention comprises a feature extraction unit 1, a regression processing unit 2, and an interpolation processing unit 3.
Specifically, the feature extraction unit 1 is configured to extract features from the image to be detected with a multi-layer convolutional network to obtain a multi-level feature map; the regression processing unit 2 is configured to perform saliency-value nonlinear regression on the multi-level feature map to obtain a primary saliency map; and the interpolation processing unit 3 is configured to perform three successive bilinear interpolation operations on the primary saliency map to obtain a final saliency map that matches the image to be detected.
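To make the unit decomposition concrete, the following hedged sketch composes the three units into one module; class and variable names are illustrative, not the patent's.

    import torch.nn as nn
    import torch.nn.functional as F

    class SalientObjectRegionDetector(nn.Module):
        """Illustrative composition of the feature extraction unit (backbone),
        the regression processing unit (head), and the interpolation processing unit."""

        def __init__(self, backbone, regression_head):
            super().__init__()
            self.backbone = backbone                 # feature extraction unit 1
            self.regression_head = regression_head   # regression processing unit 2

        def forward(self, image):
            h, w = image.shape[-2:]
            features = self.backbone(image)             # multi-level feature map (1/8 size)
            primary = self.regression_head(features)    # primary saliency map
            # Interpolation processing unit 3: three successive bilinear interpolations.
            x = F.interpolate(primary, scale_factor=2, mode="bilinear", align_corners=False)
            x = F.interpolate(x, scale_factor=2, mode="bilinear", align_corners=False)
            return F.interpolate(x, size=(h, w), mode="bilinear", align_corners=False)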
Further, the present invention also provides a salient object region detection system, including:
a processor; and
a memory arranged to store computer executable instructions that, when executed, cause the processor to:
extract features from an image to be detected with a multi-layer convolutional network to obtain a multi-level feature map;
perform saliency-value nonlinear regression on the multi-level feature map to obtain a primary saliency map;
and perform three successive bilinear interpolation operations on the primary saliency map to obtain a final saliency map that matches the image to be detected.
The present invention also provides a computer-readable storage medium storing one or more programs that, when executed by an electronic device including a plurality of application programs, cause the electronic device to perform operations comprising:
extract features from an image to be detected with a multi-layer convolutional network to obtain a multi-level feature map;
perform saliency-value nonlinear regression on the multi-level feature map to obtain a primary saliency map;
and perform three successive bilinear interpolation operations on the primary saliency map to obtain a final saliency map that matches the image to be detected.
Compared with the prior art, the salient object region detection system and the computer-readable storage medium of the present invention provide the same advantages as the salient object region detection method described above, which are not repeated herein.
The technical solutions of the present invention have thus been described in connection with the preferred embodiments shown in the drawings, but it is easily understood by those skilled in the art that the scope of the present invention is not limited to these specific embodiments. Those skilled in the art can make equivalent changes or substitutions of the related technical features without departing from the principle of the invention, and the technical solutions after such changes or substitutions will fall within the protection scope of the invention.

Claims (10)

1. A salient object region detection method, comprising:
extracting features from an image to be detected with a multi-layer convolutional network to obtain a multi-level feature map;
performing saliency-value nonlinear regression on the multi-level feature map to obtain a primary saliency map;
and performing three successive bilinear interpolation operations on the primary saliency map to obtain a final saliency map that matches the image to be detected.
2. The salient object region detection method according to claim 1, wherein the multi-layer convolutional network is a Visual Geometry Group network (VGG-16) or a residual network (RES-101).
3. The salient object region detection method according to claim 1, wherein the performing saliency-value nonlinear regression on the multi-level feature map to obtain a primary saliency map specifically comprises:
performing feature-map-to-pixel-level saliency-value nonlinear regression on the multi-level feature map using three cascaded fully convolutional layers to obtain the primary saliency map.
4. The salient object region detection method according to claim 3, wherein a rectified linear unit and a dropout layer are connected between the fully convolutional layers of the three-layer cascade.
5. The salient object region detection method according to claim 3, wherein the primary saliency map is obtained according to the following formula:
L(X, Y, θ, β) = Lg(X, Y, θ) + β·Ls(X, Y, θ)
Lg(X, Y, θ) = (1/N) Σ_{i=1..N} Ψ(F(x_i, θ) − y_i)
Ls(X, Y, θ) = (1/N+) Σ_{i: y_i=1} Ψ(F(x_i, θ) − y_i)
wherein L(X, Y, θ, β) is the loss function of the saliency-value nonlinear regression network; N is the number of pixels; X is the image to be detected, X = {x_i | i = 1, 2, ..., N}, where x_i is the detection region corresponding to the i-th pixel of the image to be detected; Y is the ground-truth annotated saliency map, Y = {y_i | i = 1, 2, ..., N}, y_i ∈ {0, 1}, where y_i is the ground-truth salient label corresponding to the i-th pixel; θ is the network parameter; β is a hyper-parameter; Lg(X, Y, θ) is the basic global loss; Ls(X, Y, θ) is the salient-region loss; N+ is the number of pixels in the salient region of the image and N− is the number of pixels in the non-salient region; F(·) denotes the processing of the whole network, and F(x_i, θ) is the regressed saliency value of each pixel; Ψ(·) is the Smooth-L1 function.
6. The salient object region detection method according to any one of claims 1 to 5, wherein the size of the primary saliency map is one eighth of the size of the image to be detected.
7. The method according to claim 1, wherein the performing three successive bilinear interpolation operations on the primary saliency map to obtain a final saliency map that matches the image to be detected specifically comprises:
determining size information of the image to be detected, wherein the size information comprises the width and the height of the image to be detected;
and scaling up the primary saliency map, by three successive bilinear interpolation operations, until it is exactly consistent with the size information of the image to be detected, to obtain the final saliency map.
8. A salient object region detection system, comprising:
the feature extraction unit is used for extracting features from an image to be detected with a multi-layer convolutional network to obtain a multi-level feature map;
the regression processing unit is used for performing saliency-value nonlinear regression on the multi-level feature map to obtain a primary saliency map;
and the interpolation processing unit is used for performing three successive bilinear interpolation operations on the primary saliency map to obtain a final saliency map that matches the image to be detected.
9. A salient object region detection system, comprising:
a processor; and
a memory arranged to store computer executable instructions that, when executed, cause the processor to:
extract features from an image to be detected with a multi-layer convolutional network to obtain a multi-level feature map;
perform saliency-value nonlinear regression on the multi-level feature map to obtain a primary saliency map;
and perform three successive bilinear interpolation operations on the primary saliency map to obtain a final saliency map that matches the image to be detected.
10. A computer-readable storage medium storing one or more programs that, when executed by an electronic device including a plurality of application programs, cause the electronic device to:
extract features from an image to be detected with a multi-layer convolutional network to obtain a multi-level feature map;
perform saliency-value nonlinear regression on the multi-level feature map to obtain a primary saliency map;
and perform three successive bilinear interpolation operations on the primary saliency map to obtain a final saliency map that matches the image to be detected.
CN201911178510.7A 2019-11-27 2019-11-27 Method and system for detecting salient object area Pending CN110942095A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911178510.7A CN110942095A (en) 2019-11-27 2019-11-27 Method and system for detecting salient object area

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911178510.7A CN110942095A (en) 2019-11-27 2019-11-27 Method and system for detecting salient object area

Publications (1)

Publication Number Publication Date
CN110942095A true CN110942095A (en) 2020-03-31

Family

ID=69908185

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911178510.7A Pending CN110942095A (en) 2019-11-27 2019-11-27 Method and system for detecting salient object area

Country Status (1)

Country Link
CN (1) CN110942095A (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106447658A (en) * 2016-09-26 2017-02-22 西北工业大学 Significant target detection method based on FCN (fully convolutional network) and CNN (convolutional neural network)
CN106570498A (en) * 2016-10-12 2017-04-19 中国科学院自动化研究所 Salient region detection method and system
CN107346436A * 2017-06-29 2017-11-14 北京以萨技术股份有限公司 Visual saliency detection method fusing image classification
WO2018019202A1 (en) * 2016-07-25 2018-02-01 武汉大学 Method and device for detecting change of structure of image
CN107886533A * 2017-10-26 2018-04-06 深圳大学 Visual saliency detection method, apparatus, device and storage medium for stereoscopic images

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018019202A1 (en) * 2016-07-25 2018-02-01 武汉大学 Method and device for detecting change of structure of image
CN106447658A (en) * 2016-09-26 2017-02-22 西北工业大学 Significant target detection method based on FCN (fully convolutional network) and CNN (convolutional neural network)
CN106570498A (en) * 2016-10-12 2017-04-19 中国科学院自动化研究所 Salient region detection method and system
CN107346436A * 2017-06-29 2017-11-14 北京以萨技术股份有限公司 Visual saliency detection method fusing image classification
CN107886533A * 2017-10-26 2018-04-06 深圳大学 Visual saliency detection method, apparatus, device and storage medium for stereoscopic images

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
XUANYANG XI et al.: "A Fast and Compact Saliency Score Regression Network Based on Fully Convolutional Network", arXiv *
XUANYANG XI et al.: "Salient object detection based on an efficient End-to-End Saliency Regression Network" *
XUANYANG XI et al.: "Salient object detection based on an efficient End-to-End Saliency Regression Network", Neurocomputing *
方正 et al.: "Saliency detection fusing deep models and traditional models", 中国图象图形学报 (Journal of Image and Graphics) *

Similar Documents

Publication Publication Date Title
CN109784333B (en) Three-dimensional target detection method and system based on point cloud weighted channel characteristics
CN108961235B (en) Defective insulator identification method based on YOLOv3 network and particle filter algorithm
CN108388896B (en) License plate identification method based on dynamic time sequence convolution neural network
CN108256394B (en) Target tracking method based on contour gradient
US20210224609A1 (en) Method, system and device for multi-label object detection based on an object detection network
CN111310631B (en) Target tracking method and system for rotor operation flying robot
CN113591968A (en) Infrared weak and small target detection method based on asymmetric attention feature fusion
CN111260688A (en) Twin double-path target tracking method
CN110796048A (en) Ship target real-time detection method based on deep neural network
CN105160686B Low-altitude multi-view remote sensing image matching method based on an improved SIFT operator
CN107220643A Traffic sign recognition system based on a neural-network deep learning model
CN109708658B (en) Visual odometer method based on convolutional neural network
CN110110618B (en) SAR target detection method based on PCA and global contrast
CN115797350B (en) Bridge disease detection method, device, computer equipment and storage medium
Ding et al. Improved object detection algorithm for drone-captured dataset based on yolov5
CN110827312A (en) Learning method based on cooperative visual attention neural network
CN111539957A (en) Image sample generation method, system and detection method for target detection
CN111353396A (en) Concrete crack segmentation method based on SCSEOCUnet
CN104050674B (en) Salient region detection method and device
CN111914596B (en) Lane line detection method, device, system and storage medium
CN116740135A (en) Infrared dim target tracking method and device, electronic equipment and storage medium
CN108509826A Road recognition method and system for remote sensing images
CN117351078A Target size and 6D pose estimation method based on shape prior
Rho et al. Automated construction progress management using computer vision-based CNN model and BIM
CN108241869A Image steganalysis method based on a fast deformable model and machine learning

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
RJ01 Rejection of invention patent application after publication

Application publication date: 20200331