CN114511776A - Method, device, medium and equipment for detecting remnant in visual area of camera - Google Patents

Method, device, medium and equipment for detecting remnant in visual area of camera

Info

Publication number
CN114511776A
Authority
CN
China
Prior art keywords
grid
sub
frame image
data
current frame
Prior art date
Legal status
Pending
Application number
CN202111673218.XA
Other languages
Chinese (zh)
Inventor
吴军
李家兴
韩朋朋
谭海燕
Current Assignee
Guangdong Zhongke Kaize Information Technology Co ltd
Original Assignee
Guangdong Zhongke Kaize Information Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Guangdong Zhongke Kaize Information Technology Co ltd filed Critical Guangdong Zhongke Kaize Information Technology Co ltd
Priority to CN202111673218.XA
Publication of CN114511776A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/25 Fusion techniques
    • G06F18/251 Fusion techniques of input or preprocessed data
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a method and a device for detecting a remnant (left-behind object) in the visual area of a camera. The method comprises the following steps: setting a region of interest; acquiring a current frame image and a background frame image of the region of interest; and detecting the region of interest by using a pre-trained remnant detection model based on the current frame image and the background frame image to obtain a remnant detection result. By setting the region of interest through a region-drawing method and using a background frame together with the current frame, the invention reduces the amount of input data and improves the efficiency of the remnant detection algorithm.

Description

Method, device, medium and equipment for detecting remnant in visual area of camera
Technical Field
The invention relates to the field of visual detection, in particular to a method, a device, a medium and equipment for detecting a remnant in a visual area of a camera.
Background
Area monitoring covers the inspection and early warning of people, objects, behaviors, events and the like, and is mainly used for security precaution and evidence tracing. The present invention addresses area remnant detection, which takes changes in the image of the area as its core.
The traditional remnant detection method generally adopts an image-data difference method: the image background frame is simply subtracted from the current frame, the resulting change value is used as the judgment basis, and the method outputs whether a remnant or an intruding target has entered the visual area of the camera. This approach has poor resistance to interference, is easily affected by weather and lighting, and has low accuracy.
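For context only, a minimal sketch of such an image-data difference method follows (this is the prior-art baseline, not the claimed invention); OpenCV-style frames of equal size are assumed, and both thresholds are illustrative values.

```python
import cv2

def frame_difference_detect(background_bgr, current_bgr, pixel_thresh=30, area_thresh=500):
    """Prior-art style check: subtract the background frame from the current frame and threshold the change."""
    bg_gray = cv2.cvtColor(background_bgr, cv2.COLOR_BGR2GRAY)
    cur_gray = cv2.cvtColor(current_bgr, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(cur_gray, bg_gray)                        # simple per-pixel subtraction
    _, mask = cv2.threshold(diff, pixel_thresh, 255, cv2.THRESH_BINARY)
    changed_pixels = cv2.countNonZero(mask)
    return changed_pixels > area_thresh                          # True: remnant or intrusion suspected
```

Because the decision rests directly on raw pixel differences, a change in lighting or weather shifts the difference image globally and easily crosses the threshold, which is the weakness the following disclosure addresses.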
Disclosure of Invention
In view of the above-mentioned shortcomings of the prior art, it is an object of the present invention to provide a method, apparatus, medium and device for detecting a carry-over in the visual area of a camera, which are used to solve at least one of the shortcomings in the prior art.
To achieve the above and other related objects, the present invention provides a method for detecting a carry-over in a visual area of a camera, comprising:
setting a region of interest;
acquiring a current frame image and a background frame image of the region of interest;
and detecting the region of interest by utilizing a pre-trained remnant detection model based on the current frame image and the background frame image to obtain a remnant detection result.
Optionally, the method further comprises:
gridding the current frame image and the background frame image to obtain a first sub-grid of the current frame image and a second sub-grid of the background frame image; the first sub-grid and the second sub-grid form a grid group; a first sub-grid and a second sub-grid belonging to the same grid cell form a grid block; and the grid data of the first sub-grid and the grid data of the second sub-grid form the grid block data.
Optionally, before detecting the grid by using the carryover detection model, the method further includes:
fusing the grid block data to obtain fused data;
the carryover detection model takes the fused data as input.
Optionally, in the process of detecting the region of interest by using the carryover detection model, the grid group is traversed in a loop to obtain a carryover detection result based on each grid block data.
Optionally, the region of interest is detected by using the pre-trained remnant detection model based on each grid block data, and each grid block data corresponds to one detection result.
Optionally, the method further comprises:
determining a sub-grid with carry-over;
the sub-grids with carry-over are combined to determine the size and location of the carry-over.
To achieve the above and other related objects, the present invention provides a device for detecting a carry-over in a visual area of a camera, comprising:
the setting module is used for setting the region of interest;
the image acquisition module is used for acquiring a current frame image and a background frame image of the region of interest;
and the image detection module is used for detecting the region of interest by utilizing a pre-trained remnant detection model based on the current frame image and the background frame image to obtain a remnant detection result.
Optionally, the device further comprises:
the gridding module is used for gridding the current frame image and the background frame image to obtain a first sub-grid of the current frame image and a second sub-grid of the background frame image; the first sub-grid and the second sub-grid form a grid group; a first sub-grid and a second sub-grid belonging to the same grid cell form a grid block; and the grid data of the first sub-grid and the grid data of the second sub-grid form the grid block data.
To achieve the above and other related objects, the present invention provides a computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor implements the steps of the detection method described above when executing the computer program.
To achieve the above and other related objects, the present invention provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the detection method described above.
As described above, the method, apparatus, medium, and device for detecting a carry-over in a visual area of a camera according to the present invention have the following advantages:
the invention discloses a method for detecting a remnant in a visual area of a camera, which comprises the following steps: setting a region of interest; acquiring a current frame image and a background frame image of an interested area; and detecting the region of interest by utilizing a pre-trained remnant detection model based on the current frame image and the background frame image to obtain a remnant detection result. The invention can reduce the input data amount and improve the efficiency of the residue detection algorithm by setting the background frame and the current frame by the region drawing method.
Drawings
FIG. 1 is a flow chart of a method for detecting a carry-over in a visual area of a camera according to one embodiment of the present invention;
FIG. 2 is a flow chart of a method for detecting carry-over in the visual area of a camera according to another embodiment of the present invention;
FIG. 3 is a schematic diagram of gridding a region of interest according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of a carryover detection model according to one embodiment of the present invention;
fig. 5 is a block diagram of an apparatus for detecting a carry-over in a visual area of a camera according to an embodiment of the present invention.
Detailed Description
The embodiments of the present invention are described below with reference to specific embodiments, and other advantages and effects of the present invention will be easily understood by those skilled in the art from the disclosure of the present specification. The invention is capable of other and different embodiments and of being practiced or of being carried out in various ways, and its several details are capable of modification in various respects, all without departing from the spirit and scope of the present invention. It is to be noted that the features in the following embodiments and examples may be combined with each other without conflict.
It should be noted that the drawings provided in the following embodiments are only for illustrating the basic idea of the present invention, and the components related to the present invention are only shown in the drawings rather than drawn according to the number, shape and size of the components in actual implementation, and the type, quantity and proportion of the components in actual implementation may be changed freely, and the layout of the components may be more complicated.
As shown in fig. 1, an embodiment of the present application provides a method for detecting a carry-over in a visual area of a camera, including:
s100, setting an interested area;
s101, acquiring a current frame image and a background frame image of an interested area;
s102, detecting the region of interest by using a pre-trained remnant detection model based on the current frame image and the background frame image to obtain a remnant detection result.
According to the invention, the region of interest is selected from the whole monitored area by a region-drawing method; restricting processing to the region of interest reduces the amount of input data and improves the efficiency of the remnant detection algorithm.
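As a rough illustration (a rectangular region and the helper name crop_roi are assumptions of this sketch; the region-drawing method may equally define an arbitrary polygon), the region of interest can be cut out of both frames before any further processing:

```python
import numpy as np

def crop_roi(frame: np.ndarray, x: int, y: int, w: int, h: int) -> np.ndarray:
    """Keep only the drawn region of interest; later steps never see the full frame."""
    return frame[y:y + h, x:x + w]

# Hypothetical usage: the same rectangle is applied to the background frame and the current frame.
# roi_bg  = crop_roi(background_frame, 100, 80, 480, 360)
# roi_cur = crop_roi(current_frame,    100, 80, 480, 360)
```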
In one embodiment, the method further comprises:
gridding the current frame image and the background frame image to obtain a first sub-grid of the current frame image and a second sub-grid of the background frame image; the first sub-grid and the second sub-grid form a grid group; a first sub-grid and a second sub-grid belonging to the same grid cell form a grid block; and the grid data of the first sub-grid and the grid data of the second sub-grid form the grid block data. As shown in fig. 3, the present application grids the region of interest and divides it into a plurality of small squares. During remnant detection, the data of each small square is used as the input of the algorithm model, which improves the effectiveness of the input data and the accuracy of the algorithm.
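A minimal sketch of the gridding step follows; the 24 × 24 cell size comes from the description, while the iterator name is illustrative and border cells that do not fill a whole square are simply dropped in this sketch.

```python
import numpy as np

def iter_grid_blocks(background_roi: np.ndarray, current_roi: np.ndarray, cell: int = 24):
    """Yield grid blocks: the background (second) and current-frame (first) sub-grids of each cell."""
    h, w = current_roi.shape[:2]
    for i in range(0, h - h % cell, cell):
        for j in range(0, w - w % cell, cell):
            b_sub = background_roi[i:i + cell, j:j + cell]   # second sub-grid (background frame)
            c_sub = current_roi[i:i + cell, j:j + cell]      # first sub-grid (current frame)
            yield (i // cell, j // cell), b_sub, c_sub       # one grid block per cell position
```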
Specifically, during carry-over detection the region of interest is detected with the pre-trained carry-over detection model based on each grid block data, and each grid block data corresponds to one detection result.
Fig. 4 is a block diagram of a carryover detection model, which is a convolutional neural network model.
The carry-over detection model is used for determining whether carry-over exists in any region according to the current frame image and the background frame image of the region, and the carry-over detection model can be obtained through training.
In the training process, a plurality of sample images are obtained. Each sample comprises a current frame image and a background frame image, together with a label indicating whether a carry-over exists in the area when the current frame image is compared with the background frame image. Before training, each group of sample images may be divided into grids, for example into 24 × 24 small squares, so that each sample corresponds to the current frame image and the background frame image of the same square. If the current frame image is consistent with the background frame image, no object has been left behind; otherwise, an object has been left behind. Specifically, the presence or absence of a carry-over may be represented by 1 and 0, where 1 corresponds to a carry-over and 0 to no carry-over, and the labels of the sample images may be determined by manual annotation. Feature extraction is then performed on each sample image to obtain a feature vector that describes it; the feature vector of each sample is used as the input data, the label indicating whether the two images are the same is used as the output data, and the carry-over detection model is trained from the input data and output data of the plurality of samples. For example, an initial carry-over detection model is established and the sample images are traversed; in each pass the model is trained on the input data and output data of the currently traversed samples, the trained model is then used to detect the sample images, and the model can be corrected according to the difference between its predicted results and the labels the samples actually carry, giving a corrected carry-over detection model; after multiple such traversals, a carry-over detection model with high accuracy is obtained.
The detection result for each grid block is obtained by classification with deep learning based on a convolutional neural network: the model judges whether the 24 × 24 small squares at the same position in the current frame image and the background frame image are the same, i.e. whether an object has entered the current frame; if the two are not the same, an object has entered, otherwise there is no object. By arranging the grid and choosing an appropriate cell size, the present application effectively improves the accuracy of the algorithm.
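Fig. 4 shows the carry-over detection model only as a block diagram, so the concrete layer configuration below is an assumption rather than the network actually used: a small PyTorch convolutional classifier over the fused 6-channel 24 × 24 grid-block input, together with a single supervised training step on manually labelled blocks (1 = object present, 0 = unchanged).

```python
import torch
import torch.nn as nn

class CarryOverNet(nn.Module):
    """Binary classifier per grid block: 1 = object appeared, 0 = unchanged."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(6, 16, kernel_size=3, padding=1), nn.ReLU(),   # 6 channels = background (3) + current (3)
            nn.MaxPool2d(2),                                         # 24x24 -> 12x12
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                                         # 12x12 -> 6x6
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 6 * 6, 64), nn.ReLU(),
            nn.Linear(64, 1),                                        # logit for "carry-over present"
        )

    def forward(self, x):                                            # x: (N, 6, 24, 24)
        return self.classifier(self.features(x))

def train_step(model, batch, labels, optimizer, criterion=nn.BCEWithLogitsLoss()):
    """One supervised step on manually labelled grid blocks (labels: 1 = carry-over, 0 = none)."""
    optimizer.zero_grad()
    loss = criterion(model(batch).squeeze(1), labels.float())
    loss.backward()
    optimizer.step()
    return loss.item()
```

At inference time, the logit of each grid block would be passed through a sigmoid and thresholded (for example at 0.5) to give the per-block 1/0 decision described above.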
In an embodiment, before detecting the grid by using the carry-over detection model, the method further includes:
fusing the grid block data to obtain fused data;
the carryover detection model takes the fused data as input.
In this embodiment, since the grid block data includes the current frame image data and the background frame image data of the same grid, the grid block data is fused, that is, the current frame image data and the background frame image data of the same grid are fused to obtain the fused data.
A color RGB image usually has three channels and can be viewed as a three-channel data matrix. The three-channel data of the background frame and the three-channel data of the current frame are fused by concatenation (concat) to obtain a six-channel data matrix, which is used as the model input.
The background frame image b_img is set once and remains unchanged afterwards. Let the image to be tested be the current frame C_img. An area is defined, a grid matrix is set, and grid data consisting of the background frame and the current frame is formed. Each grid block data (each containing b_img_i_j and C_img_i_j) is then traversed:
Input=concat(b_img_i_j,C_img_i_j)
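The fusion step of the formula above can be sketched in NumPy as follows; the channels-last input layout and the final transpose to a channels-first tensor are assumptions of this sketch, not requirements of the disclosure.

```python
import numpy as np

def fuse_grid_block(b_img_i_j: np.ndarray, c_img_i_j: np.ndarray) -> np.ndarray:
    """Input = concat(b_img_i_j, C_img_i_j): two (24, 24, 3) sub-grids -> one 6-channel block."""
    fused = np.concatenate([b_img_i_j, c_img_i_j], axis=-1)   # (24, 24, 6)
    return fused.transpose(2, 0, 1)                           # (6, 24, 24), ready as CNN input
```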
In one embodiment, in the process of detecting the region of interest by using the carry-over detection model, the grid group is traversed in a loop to obtain a carry-over detection result based on each grid block data.
Further, after the carry-over detection result based on each grid block data is obtained, the sub-grids in which a carry-over exists are determined;
the sub-grids with carry-over are combined to determine the size and location of the carry-over.
Since the region of interest is gridded in this embodiment, the grid group is traversed in a loop during carry-over detection to obtain the detection result for each grid block data. When one, or a preset small number of, results equal '1' (this number sets the sensitivity of the carry-over check), the loop is exited directly without further traversal, and the output is that an object has been left behind (or that an intrusion event occurred). When the size or position of the carry-over is also required, the grid block data must be traversed completely, and the position and size of the carry-over are calculated by combining the outputs of all grid blocks: the number n of '1' outputs is counted, and given the total number N of grids, the object occupies n/N of the area of the whole visible region.
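The traversal just described can be sketched as follows, reusing the hypothetical grid-block iterator and per-block classifier from the earlier sketches; the early-exit count, the n/N size ratio and the merging of flagged sub-grids into a bounding box follow the text, while the exact bookkeeping and return values are assumptions.

```python
def detect_carry_over(blocks, predict_block, exit_count=1, need_size=False, cell=24):
    """Traverse grid blocks; predict_block(b_sub, c_sub) returns 1 (carry-over) or 0 (unchanged)."""
    flagged, total = [], 0
    for (gi, gj), b_sub, c_sub in blocks:
        total += 1
        if predict_block(b_sub, c_sub) == 1:
            flagged.append((gi, gj))
            if not need_size and len(flagged) >= exit_count:
                return True, None, None                  # alarm only: exit the loop directly
    if len(flagged) < exit_count:
        return False, None, None                         # too few changed blocks: no carry-over
    n = len(flagged)                                     # number of '1' outputs
    size_ratio = n / total                               # object occupies n/N of the visible region
    rows = [gi for gi, _ in flagged]
    cols = [gj for _, gj in flagged]
    bbox = (min(cols) * cell, min(rows) * cell,          # merged flagged sub-grids give the position
            (max(cols) + 1) * cell, (max(rows) + 1) * cell)   # and the extent of the carry-over
    return True, size_ratio, bbox
```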
As shown in fig. 5, an embodiment of the present application provides a device for detecting a carry-over in a visual area of a camera, including:
a setting module 400 for setting a region of interest;
an image obtaining module 401, configured to obtain a current frame image and a background frame image of the region of interest;
and the image detection module 402 is configured to detect the region of interest by using a pre-trained remnant detection model based on the current frame image and the background frame image, so as to obtain a remnant detection result.
In one embodiment, the device further comprises:
the gridding module is used for gridding the current frame image and the background frame image to obtain a first sub-grid of the current frame image and a second sub-grid of the background frame image; the first sub-grid and the second sub-grid form a grid group; a first sub-grid and a second sub-grid belonging to the same grid cell form a grid block; and the grid data of the first sub-grid and the grid data of the second sub-grid form the grid block data.
In one embodiment, the device further comprises: a data fusion module, configured to fuse the grid block data to obtain fused data before the grid is detected by using the carry-over detection model.
In one embodiment, in the process of detecting the region of interest by using the carry-over detection model, the grid group is traversed in a loop to obtain a carry-over detection result based on each grid block data.
In an embodiment, the region of interest is detected by using the pre-trained remnant detection model based on each grid block data, and each grid block data corresponds to one detection result.
In one embodiment, the method further comprises:
determining a sub-grid with carry-over;
the sub-grids with carry-over are combined to determine the size and location of the carry-over.
The above-mentioned device and the specific implementation of the detection method are substantially the same, and are not described herein again.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-mentioned functions. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working processes of the units and modules in the system may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
The embodiment of the present application provides a computer device, which includes a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor implements the steps of the detection method when executing the computer program.
The embodiment of the present application provides a computer-readable storage medium, on which a computer program is stored, wherein the computer program is implemented to implement the steps of the detection method when being executed by a processor.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
In the embodiments provided in the present invention, it should be understood that the disclosed apparatus/terminal device and method may be implemented in other ways. For example, the above-described embodiments of the apparatus/terminal device are merely illustrative, and for example, the division of the modules or units is only one logical division, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated module/unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, all or part of the flow of the method according to the embodiments of the present invention may also be implemented by a computer program, which may be stored in a computer-readable storage medium, and when the computer program is executed by a processor, the steps of the method embodiments may be implemented. The computer program comprises computer program code, which may be in the form of source code, object code, an executable file or some intermediate form, etc. The computer-readable medium may comprise any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, etc.
The foregoing embodiments are merely illustrative of the principles and utilities of the present invention and are not intended to limit the invention. Any person skilled in the art can modify or change the above-mentioned embodiments without departing from the spirit and scope of the present invention. Accordingly, it is intended that all equivalent modifications or changes which can be made by those skilled in the art without departing from the spirit and technical spirit of the present invention be covered by the claims of the present invention.

Claims (10)

1. A method of detecting carry-over in a field of view of a camera, comprising:
setting a region of interest;
acquiring a current frame image and a background frame image of the region of interest;
and detecting the region of interest by utilizing a pre-trained remnant detection model based on the current frame image and the background frame image to obtain a remnant detection result.
2. The method of claim 1, further comprising:
gridding the current frame image and the background frame image to obtain a first sub-grid of the current frame image and a second sub-grid of the background frame image; the first sub-grid and the second sub-grid form a grid group; a first sub-grid and a second sub-grid belonging to the same grid cell form a grid block; and the grid data of the first sub-grid and the grid data of the second sub-grid form the grid block data.
3. The method of claim 2, further comprising, before detecting the grid by using the remnant detection model:
fusing the grid block data to obtain fused data;
the carryover detection model takes the fused data as input.
4. The method of claim 2, wherein the grid group is traversed in a loop during the detection of the region of interest using the remnant detection model, so as to obtain a remnant detection result based on each grid block data.
5. The method of claim 2, wherein the region of interest is detected by using the pre-trained remnant detection model based on each grid block data, and each grid block data corresponds to one detection result.
6. The method of claim 2, further comprising:
determining a sub-grid with carry-over;
the sub-grids with carry-over are combined to determine the size and location of the carry-over.
7. A device for detecting carry-over in the field of view of a camera, comprising:
the setting module is used for setting the region of interest;
the image acquisition module is used for acquiring a current frame image and a background frame image of the region of interest;
and the image detection module is used for detecting the region of interest by utilizing a pre-trained remnant detection model based on the current frame image and the background frame image to obtain a remnant detection result.
8. The device of claim 7, further comprising:
the gridding module is used for gridding the current frame image and the background frame image to obtain a first sub-grid of the current frame image and a second sub-grid of the background frame image; the first sub-grid and the second sub-grid form a grid group; a first sub-grid and a second sub-grid belonging to the same grid cell form a grid block; and the grid data of the first sub-grid and the grid data of the second sub-grid form the grid block data.
9. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the steps of the detection method according to any of claims 1 to 6 are implemented when the computer program is executed by the processor.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the detection method according to any one of claims 1 to 6.
CN202111673218.XA, filed 2021-12-31 (priority date 2021-12-31): Method, device, medium and equipment for detecting remnant in visual area of camera. Status: Pending. Publication: CN114511776A (en).

Priority Applications (1)

Application Number: CN202111673218.XA; Priority Date: 2021-12-31; Filing Date: 2021-12-31; Title: Method, device, medium and equipment for detecting remnant in visual area of camera (CN114511776A)

Applications Claiming Priority (1)

Application Number: CN202111673218.XA; Priority Date: 2021-12-31; Filing Date: 2021-12-31; Title: Method, device, medium and equipment for detecting remnant in visual area of camera (CN114511776A)

Publications (1)

Publication Number: CN114511776A; Publication Date: 2022-05-17

Family

ID=81548220

Family Applications (1)

Application Number: CN202111673218.XA; Status: Pending; Priority Date: 2021-12-31; Filing Date: 2021-12-31; Title: Method, device, medium and equipment for detecting remnant in visual area of camera (CN114511776A)

Country Status (1)

Country: CN; Publication: CN114511776A (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102063614A (en) * 2010-12-28 2011-05-18 天津市亚安科技电子有限公司 Method and device for detecting lost articles in security monitoring
CN106327488A (en) * 2016-08-19 2017-01-11 云赛智联股份有限公司 Adaptive foreground detection method and detection device
CN106408554A (en) * 2015-07-31 2017-02-15 富士通株式会社 Remnant detection apparatus, method and system
CN111564015A (en) * 2020-05-20 2020-08-21 中铁二院工程集团有限责任公司 Method and device for monitoring perimeter intrusion of rail transit
CN111723773A (en) * 2020-06-30 2020-09-29 创新奇智(合肥)科技有限公司 Remnant detection method, device, electronic equipment and readable storage medium
CN113240611A (en) * 2021-05-28 2021-08-10 中建材信息技术股份有限公司 Foreign matter detection method based on picture sequence

Similar Documents

Publication Publication Date Title
CN108154105B (en) Underwater biological detection and identification method and device, server and terminal equipment
CN110751678A (en) Moving object detection method and device and electronic equipment
CN115294117B (en) Defect detection method and related device for LED lamp beads
CN111833340A (en) Image detection method, image detection device, electronic equipment and storage medium
CN110378254B (en) Method and system for identifying vehicle damage image modification trace, electronic device and storage medium
CN115372877B (en) Lightning arrester leakage ammeter inspection method of transformer substation based on unmanned aerial vehicle
CN111598913A (en) Image segmentation method and system based on robot vision
CN109740609A (en) A kind of gauge detection method and device
CN111339902A (en) Liquid crystal display number identification method and device of digital display instrument
CN104754327A (en) Method for detecting and eliminating defective pixels of high spectral image
CN114428110A (en) Method and system for detecting defects of fluorescent magnetic powder inspection image of bearing ring
CN110378952A (en) A kind of image processing method and device
CN110163140A (en) Crowd density picture capturing method and device
CN111931721B (en) Method and device for detecting color and number of annual inspection label and electronic equipment
CN111161789B (en) Analysis method and device for key areas of model prediction
CN110321808B (en) Method, apparatus and storage medium for detecting carry-over and stolen object
CN114511776A (en) Method, device, medium and equipment for detecting remnant in visual area of camera
CN114494999B (en) Double-branch combined target intensive prediction method and system
CN114037993A (en) Substation pointer instrument reading method and device, storage medium and electronic equipment
CN114882020A (en) Method, device and equipment for detecting defects of product and computer readable medium
CN114708230A (en) Vehicle frame quality detection method, device, equipment and medium based on image analysis
CN113034420B (en) Industrial product surface defect segmentation method and system based on frequency space domain characteristics
CN113538411A (en) Insulator defect detection method and device
CN113935360B (en) Method, device and equipment for identifying auxiliary line in video and readable storage medium
KR20200125131A (en) Methdo and system for measuring image thickness based on artificial intelligence

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination