CN112270330A - Intelligent detection method for a target of interest based on a Mask R-CNN neural network - Google Patents

Intelligent detection method for a target of interest based on a Mask R-CNN neural network

Info

Publication number
CN112270330A
CN112270330A (application number CN202011220944.1A)
Authority
CN
China
Prior art keywords
target
mask
intelligent detection
target object
neural network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011220944.1A
Other languages
Chinese (zh)
Inventor
白万荣
张驯
朱小琴
王蓉
刘吉祥
孙阳
孙启娟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Electric Power Research Institute of State Grid Gansu Electric Power Co Ltd
Original Assignee
Electric Power Research Institute of State Grid Gansu Electric Power Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Electric Power Research Institute of State Grid Gansu Electric Power Co Ltd filed Critical Electric Power Research Institute of State Grid Gansu Electric Power Co Ltd
Priority to CN202011220944.1A
Publication of CN112270330A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features
    • G06V 10/46 Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V 10/462 Salient features, e.g. scale invariant feature transforms [SIFT]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features
    • G06V 10/42 Global feature extraction by analysis of the whole pattern, e.g. using frequency domain transformations or autocorrelation
    • G06V 10/435 Computation of moments
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V 10/75 Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V 10/752 Contour matching
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V 2201/07 Target detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Software Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • General Engineering & Computer Science (AREA)
  • Molecular Biology (AREA)
  • Mathematical Physics (AREA)
  • Data Mining & Analysis (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an intelligent detection method for a target of interest based on a Mask R-CNN neural network, which comprises the following steps: establishing an image data set and training a Mask R-CNN network using ImageNet; collecting standard comparison sample images of the target classes to be detected and establishing a Hu moment matching database; detecting target objects on the test image with the trained Mask R-CNN network and outputting a processed contour binary image for each target object; performing contour feature matching between each target object and the standard comparison contours, judging whether the target object is occluded, and outputting the objects judged to be unoccluded; and selecting the unoccluded target occupying the largest image area as the final output, completing the intelligent detection of the target of interest. The invention realizes image-based intelligent detection of the target of interest; it has low complexity and high running speed; and it has a wide application range, can be applied to different fields, and transfers easily to new scenarios.

Description

Intelligent detection method for a target of interest based on a Mask R-CNN neural network
Technical Field
The invention relates to the technical field of computer vision, and in particular to an intelligent detection method for a target of interest based on a Mask R-CNN neural network.
Background
Image-based target detection is a fundamental problem in computer vision research, whose main aim is to identify and locate multiple objects in an image. At present, object detection is widely applied in engineering fields such as security and defense and intelligent transportation. Object detection methods fall mainly into two categories: traditional machine learning methods and deep learning methods. Although traditional machine learning methods can achieve a good detection effect, they have obvious shortcomings such as high computational complexity and poor robustness. In recent years, deep learning methods have made continuous breakthroughs in the field of computer vision, and many representative object detection algorithms have emerged that achieve higher accuracy than traditional methods. Deep learning object detection methods divide into end-to-end methods, which directly regress the categories and bounding boxes of multiple objects from a given image, and region selection methods, which can detect not only a target object in an image but also its contour; for this reason the Mask R-CNN algorithm, a representative region selection method, is widely applied in the detection field.
Current target detection algorithms can directly detect specified classes of targets in various states in an image, but they cannot distinguish the significance of those targets, that is, they cannot identify the actual target of interest in the image. In some applications, however, the detected targets must be further classified and discriminated so that the target to be observed is identified automatically at capture time. For example, during substation robot inspection, the images captured by the robot often contain several devices of the same type, but definition and detection accuracy are limited, and fault judgment or intelligent meter reading needs to be performed only on the target currently under observation. In general, the target of interest is the unoccluded object that occupies the largest image area; the invention therefore focuses on detecting the target of interest in an image.
Disclosure of Invention
This section summarizes some aspects of embodiments of the invention and briefly introduces some preferred embodiments. Simplifications or omissions may be made in this section, in the abstract and in the title of the application to avoid obscuring their purpose; such simplifications or omissions are not intended to limit the scope of the invention.
The present invention has been made in view of the above-mentioned conventional problems.
Therefore, the technical problem solved by the invention is as follows: traditional schemes have high computational complexity, poor robustness and low accuracy.
In order to solve the above technical problems, the invention provides the following technical scheme: establishing an image data set and training a Mask R-CNN network using ImageNet; collecting standard comparison sample images of the target classes to be detected and establishing a Hu moment matching database; detecting target objects on the test image with the trained Mask R-CNN network and outputting a processed contour binary image for each target object; performing contour feature matching between each target object and the standard comparison contours, judging whether the target object is occluded, and outputting the objects judged to be unoccluded; and selecting the unoccluded target occupying the largest image area as the final output, completing the intelligent detection of the target of interest.
In a preferred scheme of the intelligent detection method for a target of interest based on the Mask R-CNN neural network: the contour matching is performed according to the Hu moment features of the two objects, with the feature matching calculated as

$$I(A,B)=\sum_{i=1}^{7}\left|\frac{1}{m_i^A}-\frac{1}{m_i^B}\right|$$

where

$$m_i^A=\operatorname{sign}(h_i^A)\cdot\log\lvert h_i^A\rvert,\qquad m_i^B=\operatorname{sign}(h_i^B)\cdot\log\lvert h_i^B\rvert$$

and $h_i^A$, $h_i^B$ are respectively the Hu moments of A and B.
In a preferred scheme of the intelligent detection method for a target of interest based on the Mask R-CNN neural network, the seven Hu moments are

$$h_1=\eta_{20}+\eta_{02}$$
$$h_2=(\eta_{20}-\eta_{02})^2+4\eta_{11}^2$$
$$h_3=(\eta_{30}-3\eta_{12})^2+(3\eta_{21}-\eta_{03})^2$$
$$h_4=(\eta_{30}+\eta_{12})^2+(\eta_{21}+\eta_{03})^2$$
$$h_5=(\eta_{30}-3\eta_{12})(\eta_{30}+\eta_{12})\left[(\eta_{30}+\eta_{12})^2-3(\eta_{21}+\eta_{03})^2\right]+(3\eta_{21}-\eta_{03})(\eta_{21}+\eta_{03})\left[3(\eta_{30}+\eta_{12})^2-(\eta_{21}+\eta_{03})^2\right]$$
$$h_6=(\eta_{20}-\eta_{02})\left[(\eta_{30}+\eta_{12})^2-(\eta_{21}+\eta_{03})^2\right]+4\eta_{11}(\eta_{30}+\eta_{12})(\eta_{21}+\eta_{03})$$
$$h_7=(3\eta_{21}-\eta_{03})(\eta_{30}+\eta_{12})\left[(\eta_{30}+\eta_{12})^2-3(\eta_{21}+\eta_{03})^2\right]-(\eta_{30}-3\eta_{12})(\eta_{21}+\eta_{03})\left[3(\eta_{30}+\eta_{12})^2-(\eta_{21}+\eta_{03})^2\right]$$
as a preferred scheme of the intelligent detection method for the concerned target based on the Mask R-CNN neural network, the method comprises the following steps: the standard comparison sample map includes a comparison of,
a front view, a back view, a left side view, a right side view, a front 45 ° oblique view, and a back 45 ° oblique view of the object.
In a preferred scheme of the intelligent detection method for a target of interest based on the Mask R-CNN neural network: the Mask R-CNN network separates the foreground from the background of the data sample image, the foreground object is set to black and the background to white, and the result is stored as a standard comparison contour map.
In a preferred scheme of the intelligent detection method for a target of interest based on the Mask R-CNN neural network: the processed target information includes the detection frame position and the contour information.
In a preferred scheme of the intelligent detection method for a target of interest based on the Mask R-CNN neural network: the criterion for judging whether the target object is occluded is the matching degree between objects.
In a preferred scheme of the intelligent detection method for a target of interest based on the Mask R-CNN neural network, the criterion for judging whether the target object is occluded further comprises: when the matching degree between the target object contour and any one of the six standard contours is greater than a set threshold, the target object is closest to the contour of that orientation and has high completeness, so the object is judged unoccluded and its label is output; otherwise, the target object is judged occluded.
In a preferred scheme of the intelligent detection method for a target of interest based on the Mask R-CNN neural network: the criterion for judging the largest target is the number of pixels contained in each target.
In a preferred scheme of the intelligent detection method for a target of interest based on the Mask R-CNN neural network: the Hu moments, which are combinations of the normalized central moments, are invariant to rotation, scaling and translation.
The invention has the following beneficial effects: it realizes image-based intelligent detection of the target of interest; it has low complexity and high running speed; and it has a wide application range, can be applied to different fields, and transfers easily to new scenarios.
Drawings
In order to illustrate the technical solutions of the embodiments of the present invention more clearly, the drawings needed in the description of the embodiments are briefly introduced below. The drawings described below are only some embodiments of the present invention, and those skilled in the art can obtain other drawings from them without inventive effort. In the drawings:
fig. 1 is a basic flow diagram of the intelligent detection method for a target of interest based on a Mask R-CNN neural network according to an embodiment of the present invention;
fig. 2 is a standard comparison image acquired by a camera in the intelligent detection method for a target of interest based on a Mask R-CNN neural network according to an embodiment of the present invention;
fig. 3 is a processed standard contour binary image in the intelligent detection method for a target of interest based on a Mask R-CNN neural network according to an embodiment of the present invention;
fig. 4 is a visualization of a Mask R-CNN detection result in the intelligent detection method for a target of interest based on a Mask R-CNN neural network according to an embodiment of the present invention;
fig. 5 is an extracted object contour map in the intelligent detection method for a target of interest based on a Mask R-CNN neural network according to an embodiment of the present invention.
Detailed Description
In order to make the aforementioned objects, features and advantages of the present invention more comprehensible, specific embodiments are described in detail below with reference to the accompanying figures. The described embodiments are only a part of the embodiments of the present invention, not all of them; all other embodiments obtained by a person skilled in the art without creative effort, based on the embodiments of the present invention, fall within the protection scope of the present invention.
In the following description, numerous specific details are set forth to provide a thorough understanding of the present invention, but the present invention may be practiced in ways other than those specifically described, as will be readily apparent to those of ordinary skill in the art; therefore, the present invention is not limited to the specific embodiments disclosed below.
Furthermore, reference herein to "one embodiment" or "an embodiment" means that a particular feature, structure or characteristic described in connection with the embodiment is included in at least one implementation of the invention. The appearances of the phrase "in one embodiment" in various places in the specification do not necessarily all refer to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments.
The present invention is described in detail below with reference to the drawings. For convenience of illustration, cross-sectional views illustrating a device structure are not enlarged to a common scale; the drawings are only examples and should not limit the scope of the present invention. In addition, the actual dimensions of length, width and depth should be used in actual fabrication.
Meanwhile, in the description of the present invention, terms such as "upper, lower, inner and outer" indicate orientations or positional relationships based on the drawings; they are used only for convenience and simplicity of description and do not indicate or imply that the referenced device or element must have a specific orientation or be constructed and operated in a specific orientation, and thus cannot be construed as limiting the present invention. Furthermore, the terms first, second and third are used for descriptive purposes only and are not to be construed as indicating or implying relative importance.
Unless otherwise explicitly specified or limited, the terms "mounted" and "connected" are to be understood broadly: connections may be fixed, detachable or integral; mechanical or electrical; direct, or indirect through intervening media, or internal between two elements. The specific meanings of the above terms in the present invention can be understood by those skilled in the art according to the specific situation.
Example 1
The invention provides a method for detecting a target of interest based on Mask R-CNN and a contour matching algorithm. It can detect the most salient target object in an image, meets the need for further discrimination after objects have been detected, and provides a distinctive capability in the detection field.
Referring to fig. 1 to 5, an embodiment of the present invention provides an intelligent detection method for a target of interest based on a Mask R-CNN neural network, including:
s1: an image dataset was created and a Mask R-CNN network was trained using ImageNet.
Specifically, a camera is used for collecting images, a proper data set is established, a Mask R-CNN network pre-trained by ImageNet is trained by the training data set, optimal network parameters aiming at corresponding data types (for example, vehicles need to be detected, namely a vehicle database) are obtained, and a network model is established.
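For illustration, the following is a minimal sketch of this training step, assuming the torchvision implementation of Mask R-CNN (whose ResNet-50 backbone is ImageNet-pretrained); the data loader and class count are placeholders, not part of the patent:

```python
# Sketch only: fine-tuning torchvision's Mask R-CNN on a custom data set.
# "train_loader" and "num_classes" are assumptions for illustration.
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
from torchvision.models.detection.mask_rcnn import MaskRCNNPredictor

def build_model(num_classes):
    # Backbone is ImageNet-pretrained; detection heads are COCO-pretrained.
    model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")
    # Swap the box and mask heads so they predict the custom classes.
    in_box = model.roi_heads.box_predictor.cls_score.in_features
    model.roi_heads.box_predictor = FastRCNNPredictor(in_box, num_classes)
    in_mask = model.roi_heads.mask_predictor.conv5_mask.in_channels
    model.roi_heads.mask_predictor = MaskRCNNPredictor(in_mask, 256, num_classes)
    return model

def train(model, train_loader, epochs=10, lr=0.005):
    device = "cuda" if torch.cuda.is_available() else "cpu"
    model.to(device).train()
    opt = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    for _ in range(epochs):
        for images, targets in train_loader:
            images = [img.to(device) for img in images]
            targets = [{k: v.to(device) for k, v in t.items()} for t in targets]
            loss = sum(model(images, targets).values())  # dict of losses
            opt.zero_grad()
            loss.backward()
            opt.step()
```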
S2: collecting standard comparison sample images of the target classes to be detected and establishing a Hu moment matching database;
It should be noted that the standard comparison sample images include:
a front view, a back view, a left side view, a right side view, a front 45° oblique view and a back 45° oblique view of the object.
Specifically, the Hu moment matching database is established as follows: a camera collects standard comparison sample images of each target class to be detected, and for each sample six unoccluded images in different orientations are collected, namely a front view, a back view, a left side view, a right side view, a front 45° oblique view and a back 45° oblique view of the target, as shown in fig. 2 and fig. 3. The Mask R-CNN network then separates the foreground from the background of each data sample, as shown in fig. 4; the foreground object is set to black and the background to white, and the result is stored as a standard comparison contour map, as shown in fig. 5 (taking the car in the figure as an example).
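A minimal sketch of this database-building step follows; the directory layout and the helper segment_foreground (any routine that yields a binary foreground mask, e.g. the trained Mask R-CNN) are assumptions:

```python
# Sketch: save six black-on-white standard contour maps per target class.
import cv2
import numpy as np

VIEWS = ["front", "back", "left", "right", "front45", "back45"]

def build_standard_contour_db(sample_dir, out_dir, segment_foreground):
    for view in VIEWS:
        img = cv2.imread(f"{sample_dir}/{view}.jpg")
        mask = segment_foreground(img)  # uint8, nonzero = foreground object
        # Foreground black on a white background, as the method specifies.
        binary = np.where(mask > 0, 0, 255).astype(np.uint8)
        cv2.imwrite(f"{out_dir}/{view}.png", binary)
```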
S3: detecting target objects on the test image with the trained Mask R-CNN network and outputting a processed contour binary image for each target object;
It should be noted that the Mask R-CNN network separates the foreground from the background of the data sample image as follows:
the foreground object is set to black and the background to white, and the result is stored as a standard comparison contour map.
The processed target information comprises the position and the outline information of the detection frame.
Specifically, a test image is input into the Mask R-CNN network for target detection, and the detection frame position and contour information of each target object are output. For each target object a binary image containing only the object contour is drawn, in which the contour and interior of the target object are set to black and the remaining background is white; the detected target is then rescaled to the size of the standard contour map of its category, which facilitates the subsequent contour matching step.
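A sketch of this post-processing, under the assumption that each instance mask comes out of Mask R-CNN as a uint8 array; the rescaling target is the standard contour map of the detected class:

```python
# Sketch: detected mask -> black-on-white binary image at the standard size.
import cv2
import numpy as np

def mask_to_contour_binary(instance_mask, std_h, std_w):
    # Contour and interior black (0), background white (255).
    binary = np.where(instance_mask > 0, 0, 255).astype(np.uint8)
    # Rescale to the standard contour map's size for later matching.
    return cv2.resize(binary, (std_w, std_h), interpolation=cv2.INTER_NEAREST)
```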
S4: performing contour feature matching between each target object and the standard comparison contours, judging whether the target object is occluded, and outputting the objects judged to be unoccluded;
It should be noted that the contour matching is performed according to the Hu moment features of the two objects, with the feature matching calculated as

$$I(A,B)=\sum_{i=1}^{7}\left|\frac{1}{m_i^A}-\frac{1}{m_i^B}\right|$$

where

$$m_i^A=\operatorname{sign}(h_i^A)\cdot\log\lvert h_i^A\rvert,\qquad m_i^B=\operatorname{sign}(h_i^B)\cdot\log\lvert h_i^B\rvert$$

and $h_i^A$, $h_i^B$ are respectively the Hu moments of A and B.
The seven Hu moments are

$$h_1=\eta_{20}+\eta_{02}$$
$$h_2=(\eta_{20}-\eta_{02})^2+4\eta_{11}^2$$
$$h_3=(\eta_{30}-3\eta_{12})^2+(3\eta_{21}-\eta_{03})^2$$
$$h_4=(\eta_{30}+\eta_{12})^2+(\eta_{21}+\eta_{03})^2$$
$$h_5=(\eta_{30}-3\eta_{12})(\eta_{30}+\eta_{12})\left[(\eta_{30}+\eta_{12})^2-3(\eta_{21}+\eta_{03})^2\right]+(3\eta_{21}-\eta_{03})(\eta_{21}+\eta_{03})\left[3(\eta_{30}+\eta_{12})^2-(\eta_{21}+\eta_{03})^2\right]$$
$$h_6=(\eta_{20}-\eta_{02})\left[(\eta_{30}+\eta_{12})^2-(\eta_{21}+\eta_{03})^2\right]+4\eta_{11}(\eta_{30}+\eta_{12})(\eta_{21}+\eta_{03})$$
$$h_7=(3\eta_{21}-\eta_{03})(\eta_{30}+\eta_{12})\left[(\eta_{30}+\eta_{12})^2-3(\eta_{21}+\eta_{03})^2\right]-(\eta_{30}-3\eta_{12})(\eta_{21}+\eta_{03})\left[3(\eta_{30}+\eta_{12})^2-(\eta_{21}+\eta_{03})^2\right]$$
Further, the Hu moments, which are combinations of the normalized central moments, are invariant to rotation, scaling and translation.
The criterion for judging whether the target object is occluded is the matching degree between objects. It further comprises the following rule:
when the matching degree between the target object contour and any one of the six standard contours is greater than a set threshold, the target object is closest to the contour of that orientation and has high completeness, so the object is judged unoccluded and its label is output; otherwise, the target object is judged occluded.
Specifically, the binary contour map of each target object is matched, based on the Hu moments, against the six corresponding standard contour maps. The calculated result is the matching degree of the two objects: the higher the matching degree, the better the matching effect and the closer the two objects. When the matching degree between the target object contour and any one of the six standard contours is greater than a set threshold, the target object is closest to the contour of that orientation and has high completeness, so the object is judged unoccluded and its label is output; otherwise, the target object is judged occluded.
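A sketch of this matching step using OpenCV's Hu-moment shape comparison. One assumption to flag: cv2.matchShapes returns a dissimilarity (0 means identical), so the "matching degree greater than a threshold" above corresponds here to the distance falling below a threshold; the threshold value itself is illustrative.

```python
# Sketch: occlusion test by Hu-moment matching against six standard views.
import cv2

def is_unoccluded(target_binary, standard_binaries, dist_thresh=0.1):
    # Inputs are black-object-on-white images; invert so the object is the
    # bright region whose image moments cv2.matchShapes actually compares.
    tgt = 255 - target_binary
    for std in standard_binaries:
        d = cv2.matchShapes(tgt, 255 - std, cv2.CONTOURS_MATCH_I1, 0.0)
        if d < dist_thresh:
            return True  # close enough to one unoccluded standard view
    return False
```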
S5: selecting the unoccluded target occupying the largest image area as the final output, completing the intelligent detection of the target of interest.
The criterion for judging the largest target is the number of pixels contained in each target.
Specifically, among the targets whose Hu moments satisfy the matching condition, the target containing the largest number of pixels is selected as the final result according to the pixel count of each target.
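A sketch of this final selection, assuming each unoccluded target is represented by its binary mask:

```python
# Sketch: pick the unoccluded target covering the most pixels.
import numpy as np

def select_largest(unoccluded_masks):
    # Pixel count of each target decides the target of interest.
    areas = [int(np.count_nonzero(m)) for m in unoccluded_masks]
    return int(np.argmax(areas))  # index of the largest, final output
```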
Example 2
In order to verify and explain the technical effects of the method, this embodiment compares the traditional technical scheme with the method of the invention to validate its real effect.
The traditional technical scheme realizes target detection by traditional image feature matching, here the mean-shift algorithm, and then detects unoccluded targets with contour moments or normalized moments. The contour moment is obtained by summing over all points on the contour and serves as a rough feature of the contour:

$$m_{pq}=\sum_{(x,y)\in C} x^{\,p}\, y^{\,q}$$

where $C$ is the set of contour points.
However, when two objects with the same contour but different sizes are matched, the simple contour moment cannot output an accurate matching value. The normalized moment ignores the size of the object and considers only its contour:

$$\eta_{pq}=\frac{\mu_{pq}}{m_{00}^{\,(p+q)/2+1}},\qquad \mu_{pq}=\sum_{(x,y)\in C}(x-x_{avg})^p\,(y-y_{avg})^q$$

where $x_{avg}=m_{10}/m_{00}$ and $y_{avg}=m_{01}/m_{00}$. But the normalized moment depends on the selected coordinate system, so rotation of the object and similar transformations affect the accuracy of the matching result.
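For reference, the spatial, central and normalized central moments used above can be read directly from OpenCV; a sketch, where the contour argument is any point array as returned by cv2.findContours:

```python
# Sketch: centroid and scale-normalized central moments from cv2.moments.
import cv2

def moment_features(contour):
    m = cv2.moments(contour)
    x_avg = m["m10"] / m["m00"]  # centroid, as in the formulas above
    y_avg = m["m01"] / m["m00"]
    # Keys "nu20" ... "nu03" hold the normalized central moments eta_pq.
    etas = {k: v for k, v in m.items() if k.startswith("nu")}
    return (x_avg, y_avg), etas
```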
The method of the invention realizes target detection with a deep learning Mask R-CNN network and then detects unoccluded targets by extracting target features with Hu moments and matching them. The experimental data comparing the method of the invention with the traditional scheme are shown in the following table:
Scheme                                        Detection accuracy   Unoccluded-target accuracy
mean-shift + contour moment (traditional)     83%                  54%
Mask R-CNN + Hu moment (the invention)        95%                  87%
Compared with the traditional method, target detection with the Mask R-CNN network reaches a correct detection rate of 95%, far higher than the 83% achieved by the mean-shift algorithm. On this basis, feature matching of the Mask R-CNN detection results with Hu moments reaches a final correct detection rate of 87% for unoccluded targets, whereas the traditional mean-shift plus contour moment algorithm reaches a correct recognition rate of only 54%.
It should be noted that the above embodiments are intended only to illustrate, not to limit, the technical solutions of the present invention. Although the present invention has been described in detail with reference to the preferred embodiments, those skilled in the art will understand that modifications or equivalent substitutions may be made to the technical solutions of the present invention without departing from their spirit and scope, all of which should be covered by the claims of the present invention.

Claims (10)

1. An intelligent detection method for a target of interest based on a Mask R-CNN neural network, characterized by comprising the following steps:
establishing an image data set and training a Mask R-CNN network by using ImageNet;
collecting standard comparison sample images of the target classes to be detected and establishing a Hu moment matching database;
detecting a target object on the test image by using the trained Mask R-CNN network, and outputting a processed contour binary image of the target object;
performing contour feature matching between the target object and the standard comparison contours, judging whether the target object is occluded, and outputting the objects judged to be unoccluded;
and selecting the unoccluded target occupying the largest image area as the final output, completing the intelligent detection of the target of interest.
2. The intelligent detection method for a target of interest based on the Mask R-CNN neural network according to claim 1, characterized in that: the contour matching is performed according to the Hu moment features of the two objects, with the feature matching calculated as

$$I(A,B)=\sum_{i=1}^{7}\left|\frac{1}{m_i^A}-\frac{1}{m_i^B}\right|$$

where

$$m_i^A=\operatorname{sign}(h_i^A)\cdot\log\lvert h_i^A\rvert,\qquad m_i^B=\operatorname{sign}(h_i^B)\cdot\log\lvert h_i^B\rvert$$

and $h_i^A$, $h_i^B$ are respectively the Hu moments of A and B.
3. The intelligent detection method for a target of interest based on the Mask R-CNN neural network according to claim 1 or 2, characterized in that the seven Hu moments are

$$h_1=\eta_{20}+\eta_{02}$$
$$h_2=(\eta_{20}-\eta_{02})^2+4\eta_{11}^2$$
$$h_3=(\eta_{30}-3\eta_{12})^2+(3\eta_{21}-\eta_{03})^2$$
$$h_4=(\eta_{30}+\eta_{12})^2+(\eta_{21}+\eta_{03})^2$$
$$h_5=(\eta_{30}-3\eta_{12})(\eta_{30}+\eta_{12})\left[(\eta_{30}+\eta_{12})^2-3(\eta_{21}+\eta_{03})^2\right]+(3\eta_{21}-\eta_{03})(\eta_{21}+\eta_{03})\left[3(\eta_{30}+\eta_{12})^2-(\eta_{21}+\eta_{03})^2\right]$$
$$h_6=(\eta_{20}-\eta_{02})\left[(\eta_{30}+\eta_{12})^2-(\eta_{21}+\eta_{03})^2\right]+4\eta_{11}(\eta_{30}+\eta_{12})(\eta_{21}+\eta_{03})$$
$$h_7=(3\eta_{21}-\eta_{03})(\eta_{30}+\eta_{12})\left[(\eta_{30}+\eta_{12})^2-3(\eta_{21}+\eta_{03})^2\right]-(\eta_{30}-3\eta_{12})(\eta_{21}+\eta_{03})\left[3(\eta_{30}+\eta_{12})^2-(\eta_{21}+\eta_{03})^2\right]$$
4. The intelligent detection method for a target of interest based on the Mask R-CNN neural network according to claim 1, characterized in that the standard comparison sample images comprise:
a front view, a back view, a left side view, a right side view, a front 45 ° oblique view, and a back 45 ° oblique view of the object.
5. The intelligent detection method for a target of interest based on the Mask R-CNN neural network according to claim 1 or 4, characterized in that the Mask R-CNN network separates the foreground from the background of the data sample image as follows:
the foreground object is set to black and the background to white, and the result is stored as a standard comparison contour map.
6. The intelligent detection method for a target of interest based on the Mask R-CNN neural network according to claim 5, characterized in that: the processed target information includes the detection frame position and the contour information.
7. The intelligent detection method for a target of interest based on the Mask R-CNN neural network according to claim 6, characterized in that: the criterion for judging whether the target object is occluded is the matching degree between objects.
8. The intelligent detection method for a target of interest based on the Mask R-CNN neural network according to claim 7, characterized in that the criterion for judging whether the target object is occluded further comprises:
when the matching degree between the target object contour and any one of the six standard contours is greater than a set threshold, the target object is closest to the contour of that orientation and has high completeness, so the object is judged unoccluded and its label is output; otherwise, the target object is judged occluded.
9. The intelligent detection method for a target of interest based on the Mask R-CNN neural network according to claim 1 or 7, characterized in that: the criterion for judging the largest target is the number of pixels contained in each target.
10. The intelligent detection method for a target of interest based on the Mask R-CNN neural network according to claim 3, characterized in that: the Hu moments, which are combinations of the normalized central moments, are invariant to rotation, scaling and translation.
CN202011220944.1A 2020-11-05 2020-11-05 Intelligent detection method for concerned target based on Mask R-CNN neural network Pending CN112270330A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011220944.1A CN112270330A (en) 2020-11-05 2020-11-05 Intelligent detection method for concerned target based on Mask R-CNN neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011220944.1A CN112270330A (en) 2020-11-05 2020-11-05 Intelligent detection method for concerned target based on Mask R-CNN neural network

Publications (1)

Publication Number Publication Date
CN112270330A true CN112270330A (en) 2021-01-26

Family

ID=74346117

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011220944.1A Pending CN112270330A (en) 2020-11-05 2020-11-05 Intelligent detection method for concerned target based on Mask R-CNN neural network

Country Status (1)

Country Link
CN (1) CN112270330A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112784771A (en) * 2021-01-27 2021-05-11 浙江芯昇电子技术有限公司 Human shape detection method, system and monitoring equipment
CN113283322A (en) * 2021-05-14 2021-08-20 柳城牧原农牧有限公司 Livestock trauma detection method, device, equipment and storage medium
CN115641455A (en) * 2022-09-16 2023-01-24 杭州视图智航科技有限公司 Image matching method based on multi-feature fusion

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103903282A (en) * 2014-04-08 2014-07-02 陕西科技大学 Target tracking method based on LabVIEW
CN107766855A (en) * 2017-10-25 2018-03-06 南京阿凡达机器人科技有限公司 Chess piece localization method, system, storage medium and robot based on machine vision
CN108898079A (en) * 2018-06-15 2018-11-27 上海小蚁科技有限公司 A kind of monitoring method and device, storage medium, camera terminal
CN109740665A (en) * 2018-12-29 2019-05-10 珠海大横琴科技发展有限公司 Shielded image ship object detection method and system based on expertise constraint
CN110084146A (en) * 2019-04-08 2019-08-02 清华大学 Based on the pedestrian detection method and device for blocking perception self-supervisory study
CN111754441A (en) * 2020-06-29 2020-10-09 国网甘肃省电力公司电力科学研究院 Passive detection method for image copy-paste forgery

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103903282A (en) * 2014-04-08 2014-07-02 陕西科技大学 Target tracking method based on LabVIEW
CN107766855A (en) * 2017-10-25 2018-03-06 南京阿凡达机器人科技有限公司 Chess piece localization method, system, storage medium and robot based on machine vision
CN108898079A (en) * 2018-06-15 2018-11-27 上海小蚁科技有限公司 A kind of monitoring method and device, storage medium, camera terminal
CN109740665A (en) * 2018-12-29 2019-05-10 珠海大横琴科技发展有限公司 Shielded image ship object detection method and system based on expertise constraint
CN110084146A (en) * 2019-04-08 2019-08-02 清华大学 Based on the pedestrian detection method and device for blocking perception self-supervisory study
CN111754441A (en) * 2020-06-29 2020-10-09 国网甘肃省电力公司电力科学研究院 Passive detection method for image copy-paste forgery

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
XIE JINLU: "Motion Control of a Sorting Robot Based on Image Vision", China Master's Theses Full-text Database, Information Science and Technology, pages 36-37 *
WEI ZHONGYU ET AL.: "Part Assembly Detection Based on Machine Vision and Deep Neural Networks", Modular Machine Tool & Automatic Manufacturing Technique, pages 74-77 *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112784771A (en) * 2021-01-27 2021-05-11 浙江芯昇电子技术有限公司 Human shape detection method, system and monitoring equipment
CN113283322A (en) * 2021-05-14 2021-08-20 柳城牧原农牧有限公司 Livestock trauma detection method, device, equipment and storage medium
CN115641455A (en) * 2022-09-16 2023-01-24 杭州视图智航科技有限公司 Image matching method based on multi-feature fusion
CN115641455B (en) * 2022-09-16 2024-01-09 杭州视图智航科技有限公司 Image matching method based on multi-feature fusion

Similar Documents

Publication Publication Date Title
CN108710865B (en) Driver abnormal behavior detection method based on neural network
CN112270330A (en) Intelligent detection method for concerned target based on Mask R-CNN neural network
CN111611905B (en) Visible light and infrared fused target identification method
CN108256456B (en) Finger vein identification method based on multi-feature threshold fusion
WO2019227954A1 (en) Method and apparatus for identifying traffic light signal, and readable medium and electronic device
WO2018145470A1 (en) Image detection method and device
CN111860274B (en) Traffic police command gesture recognition method based on head orientation and upper half skeleton characteristics
CN109918971B (en) Method and device for detecting number of people in monitoring video
CN107066933A (en) A kind of road sign recognition methods and system
CN111738314A (en) Deep learning method of multi-modal image visibility detection model based on shallow fusion
CN106682601A (en) Driver violation conversation detection method based on multidimensional information characteristic fusion
CN111445459A (en) Image defect detection method and system based on depth twin network
CN106570439B (en) Vehicle detection method and device
CN111241975A (en) Face recognition detection method and system based on mobile terminal edge calculation
CN106951869A (en) A kind of live body verification method and equipment
CN109460787A (en) IDS Framework method for building up, device and data processing equipment
CN110675442B (en) Local stereo matching method and system combined with target recognition technology
CN115346197A (en) Driver distraction behavior identification method based on bidirectional video stream
CN110503049B (en) Satellite video vehicle number estimation method based on generation countermeasure network
CN115661757A (en) Automatic detection method for pantograph arcing
CN111461076A (en) Smoke detection method and smoke detection system combining frame difference method and neural network
CN111160107A (en) Dynamic region detection method based on feature matching
CN114332655A (en) Vehicle self-adaptive fusion detection method and system
CN108932471B (en) Vehicle detection method
CN116704490B (en) License plate recognition method, license plate recognition device and computer equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination