CN111681208A - Neglected loading part detection method and device, computer equipment and storage medium - Google Patents

Neglected loading part detection method and device, computer equipment and storage medium Download PDF

Info

Publication number
CN111681208A
Authority
CN
China
Prior art keywords
target part
target
video
tracking
tracking track
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010390119.XA
Other languages
Chinese (zh)
Other versions
CN111681208B (en)
Inventor
白家男
陈庆
章合群
周祥明
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Dahua Technology Co Ltd
Original Assignee
Zhejiang Dahua Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Dahua Technology Co Ltd filed Critical Zhejiang Dahua Technology Co Ltd
Priority to CN202010390119.XA priority Critical patent/CN111681208B/en
Publication of CN111681208A publication Critical patent/CN111681208A/en
Application granted granted Critical
Publication of CN111681208B publication Critical patent/CN111681208B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0004Industrial image inspection
    • G06T7/0008Industrial image inspection checking presence/absence
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30108Industrial image inspection
    • G06T2207/30164Workpiece; Machine component
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02PCLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/30Computing systems specially adapted for manufacturing

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Quality & Reliability (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The application relates to a neglected loading part detection method, a device, computer equipment and a storage medium, wherein the neglected loading part detection method comprises the following steps: acquiring a video of a part packaging process; detecting a target part from a video frame of the video; tracking the track of the target part in the video frames to obtain tracking track information; judging, according to the tracking track information, whether the time length for which the target part appears in the video is greater than a preset threshold value; and determining that the target part has been neglected (missed) under the condition that the time length for which the target part appears in the video is not greater than the preset threshold value. The method and the device solve the problems in the related art that the accuracy of part neglected-loading detection is low and the types of detectable parts are limited, realize efficient real-time judgment of whether a part has been neglected, and provide high detection accuracy and wide applicability to part types.

Description

Neglected loading part detection method and device, computer equipment and storage medium
Technical Field
The application relates to the technical field of computer vision, in particular to a neglected loading part detection method, a neglected loading part detection device, computer equipment and a storage medium.
Background
In a television production factory, the last step of manufacturing a television is to put the matching parts into a packaging box for packing. It often happens that a part is missing from the package because of the carelessness of a production-line worker. If a television with missing parts is sold to customers, it can lead to complaints and even returns, seriously affecting the sales performance of the television manufacturer. Therefore, it is necessary to determine during the packing process whether any part is missing from the packing box. Common television parts include the front lining plate, side lining plates, specification (instruction manual) and bottom support. A missing front lining plate or side lining plate can cause abrasion of the television during transportation, and a missing specification or bottom support can make the television unusable.
The Chinese patent "Detection method and device for neglected loading of accessories in a packing box" (publication number: CN106324685A) discloses the following: obtaining position information of the installation position of an accessory in the packing box; placing a capacitive detection device at a position outside the packing box corresponding to the installation position of the accessory in the packing box; acquiring capacitance change information of the capacitive sensor in the capacitive detection device before and after the capacitive detection device is placed; and judging, according to the capacitance change information of the capacitive sensor, whether the accessory is missing from the packing box.
However, the capacitive detection device needs to be placed outside the packing box at a position corresponding to the installation position of the accessory in the packing box, only conductive parts can be detected, and the accuracy is difficult to guarantee.
At present, no effective solution has been proposed for the problems in the related art of requiring additional inspection steps after packaging, low packaging efficiency, low detection accuracy and limited detectable part types.
Disclosure of Invention
The embodiment of the application provides a neglected loading part detection method and device, computer equipment and a storage medium, and aims to at least solve the problems that in the related technology, the part neglected loading detection accuracy is low and the part type is limited.
In a first aspect, an embodiment of the present application provides a method for detecting a neglected loading part, including:
acquiring a video of a part packaging process;
detecting a target part from a video frame of the video;
tracking the track of the target part in the video frame of the video to obtain tracking track information;
judging whether the time length of the target part appearing in the video is greater than a preset threshold value or not according to the tracking track information;
and determining that the target part has been neglected (missed) under the condition that the time length for which the target part appears in the video is not greater than the preset threshold value.
In some of these embodiments, after obtaining the video of the part packaging process, the method further comprises:
and determining that the target part is neglected to be installed under the condition that the target part is not detected from the video frame of the video.
In some of these embodiments, detecting a target part from a video frame of the video comprises:
detecting suspected target parts and the types thereof through a target detection algorithm;
and verifying the category of the suspected target part through a classification algorithm, and determining the suspected target part as the target part under the condition that the verification is passed.
In some of these embodiments, detecting a target part from a video frame of the video comprises:
detecting suspected target parts and the categories thereof through a target detection algorithm, and acquiring a first confidence score generated by the target detection algorithm;
verifying the category of the suspected target part through a classification algorithm, and acquiring a second confidence score generated by the classification algorithm;
determining a joint confidence of the suspected target part according to the first confidence score and the second confidence score when the verification of the category of the suspected target part passes;
and determining the suspected target part as the target part under the condition that the joint confidence is greater than a preset confidence threshold.
In some embodiments, tracking the trajectory of the target part in the video frames of the video, and obtaining tracking trajectory information includes:
tracking the target part by using a multi-target tracking algorithm to obtain a tracking track; wherein the number of the target parts is multiple;
matching the tracking track with the existing tracking track; wherein the existing tracking track is the tracking track of the target part in the previous frame of video frame;
and under the condition that the tracking track is successfully matched with the existing tracking track, updating the tracking track into the tracking track information of the target part.
In some of these embodiments, the method further comprises:
and under the condition that the target part is not matched with the existing tracking track and/or the existing tracking track does not exist, judging that the target part is a new target part, and establishing the tracking track as the tracking track information of the target part.
In some embodiments, determining whether the time length of the target part appearing in the video is greater than a preset threshold according to the tracking track information includes:
acquiring the part type of the target part from the tracking track information;
judging whether the target part is a set target part or not according to the part type;
and under the condition that the target part is a set target part, accumulating the time lengths corresponding to all tracking tracks of the target part according to the part type.
In some embodiments, before determining whether the duration of the target part appearing in the video is greater than a preset threshold according to the tracking track information, the method further includes:
detecting a package from the video frame;
judging whether the packaging box is in a preset area in the video frame;
and under the condition that the packaging box is located in a preset area in the video frame, judging whether the time length of the target part appearing in the video is greater than the preset threshold value or not according to the tracking track information.
In some embodiments, after determining whether the packing box is located in the preset area in the video frame, the method further comprises: under the condition that the packing box is not in the preset area in the video frame, determining which target parts have not been neglected and/or reporting any neglected loading.
In a second aspect, an embodiment of the present application provides a neglected loading part detection apparatus, including:
the acquisition module is used for acquiring a video of a part packaging process;
the target detection module is used for detecting a target part from a video frame of the video;
the multi-target tracking module is used for tracking the track of the target part in the video frame of the video to obtain tracking track information;
the logic judgment module is used for judging whether the time length of the target part appearing in the video is greater than a preset threshold value or not according to the tracking track information;
and the processing module is used for determining that the target part has been neglected (missed) under the condition that the time length for which the target part appears in the video is not greater than the preset threshold value.
In some of these embodiments, the apparatus further comprises:
the first processing module is used for determining that the target part is not installed under the condition that the target part is not detected in the video frame of the video.
In some of these embodiments, the object detection module comprises:
the first detection unit is used for detecting suspected target parts and the types of the suspected target parts through a target detection algorithm;
the first verification unit is used for verifying the category of the suspected target part through a classification algorithm and determining the suspected target part as the target part under the condition that the verification is passed.
In some of these embodiments, the object detection module further comprises:
the second detection unit is used for detecting suspected target parts and the types thereof through a target detection algorithm and acquiring a first confidence score generated by the target detection algorithm;
the second verification unit is used for verifying the category of the suspected target part through a classification algorithm and acquiring a second confidence score generated by the classification algorithm;
the first calculation unit is used for determining the joint confidence of the suspected target part according to the first confidence score and the second confidence score when the verification of the category of the suspected target part is passed;
a first determining unit, configured to determine that the suspected target part is the target part when the joint confidence is greater than a preset confidence threshold.
In some of these embodiments, the multi-target tracking module comprises:
the first tracking unit is used for tracking the target part by utilizing a multi-target tracking algorithm to obtain a tracking track; wherein the number of the target parts is multiple;
the first matching unit is used for matching the tracking track with the existing tracking track; wherein the existing tracking track is the tracking track of the target part in the previous frame of video frame;
and the first updating unit is used for updating the tracking track into the tracking track information of the target part under the condition that the tracking track is successfully matched with the existing tracking track.
In some of these embodiments, the apparatus further comprises:
and the first creating unit is used for judging that the target part is a new target part and creating the tracking track as the tracking track information of the target part under the condition that the target part is not matched with the existing tracking track and/or the existing tracking track does not exist.
In some embodiments, the logic determining module comprises:
the first acquisition unit is used for acquiring the part type of the target part from the tracking track information;
the first judgment unit is used for judging whether the target part is a set target part or not according to the part type;
and the second calculating unit is used for accumulating the time lengths corresponding to all the tracking tracks of the target part according to the part type under the condition that the target part is a set target part.
In some embodiments, the logic determining module further comprises:
a third detection unit for detecting a packing box from the video frame;
and the second judging unit is used for judging whether the packing box is in a preset area in the video frame.
In a third aspect, an embodiment of the present application provides a computer device, which includes a memory, a processor, and a computer program stored on the memory and executable on the processor, and when the processor executes the computer program, the method for detecting a missing part according to the first aspect is implemented.
In a fourth aspect, the present application provides a computer-readable storage medium, on which a computer program is stored, and when the computer program is executed by a processor, the method for detecting a missing part is implemented as described in the first aspect.
Compared with the related art, the neglected loading part detection method and device, computer equipment and storage medium provided by the embodiments of the application first acquire a video of the part packaging process; then detect a target part from the video frames of the video; track the trajectory of the target part in the video frames to obtain tracking track information; and finally judge, according to the tracking track information, whether the time length for which the target part appears in the video is greater than a preset threshold value. When that time length is not greater than the preset threshold value, the target part is determined to have been neglected (missed). This solves the problems in the related art that the accuracy of part neglected-loading detection is low and the types of detectable parts are limited, realizes efficient real-time judgment of whether a part has been neglected, and provides high detection accuracy and wide applicability to part types.
The details of one or more embodiments of the application are set forth in the accompanying drawings and the description below to provide a more thorough understanding of the application.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the application and together with the description serve to explain the application and not to limit the application. In the drawings:
FIG. 1 is a flow chart of a missing part detection method according to an embodiment of the present application;
FIG. 2 is a flow diagram of detecting a target part from a video frame of a video according to an embodiment of the present application;
FIG. 3 is a flow chart of determining a part missing according to a tracking trajectory according to an embodiment of the present application;
FIG. 4 is an overall flow diagram of a missing load detection according to an embodiment of the present application;
FIG. 5 is a block diagram of a missing part detection apparatus according to an embodiment of the present application;
fig. 6 is an internal configuration diagram of the computer device of the present embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application will be described and illustrated below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments provided in the present application without any inventive step are within the scope of protection of the present application.
It is obvious that the drawings in the following description are only examples or embodiments of the present application, and that it is also possible for a person skilled in the art to apply the present application to other similar contexts on the basis of these drawings without inventive effort. Moreover, it should be appreciated that in the development of any such actual implementation, as in any engineering or design project, numerous implementation-specific decisions must be made to achieve the developers' specific goals, such as compliance with system-related and business-related constraints, which may vary from one implementation to another.
Reference in the specification to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the specification. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Those of ordinary skill in the art will explicitly and implicitly appreciate that the embodiments described herein may be combined with other embodiments without conflict.
Unless defined otherwise, technical or scientific terms referred to herein shall have the ordinary meaning as understood by those of ordinary skill in the art to which this application belongs. Reference to "a," "an," "the," and similar words throughout this application are not to be construed as limiting in number, and may refer to the singular or the plural. The present application is directed to the use of the terms "including," "comprising," "having," and any variations thereof, which are intended to cover non-exclusive inclusions; for example, a process, method, system, article, or apparatus that comprises a list of steps or modules (elements) is not limited to the listed steps or elements, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus. Reference to "connected," "coupled," and the like in this application is not intended to be limited to physical or mechanical connections, but may include electrical connections, whether direct or indirect. The term "plurality" as referred to herein means two or more. "and/or" describes an association relationship of associated objects, meaning that three relationships may exist, for example, "A and/or B" may mean: a exists alone, A and B exist simultaneously, and B exists alone. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship. Reference herein to the terms "first," "second," "third," and the like, are merely to distinguish similar objects and do not denote a particular ordering for the objects.
The various techniques described in this application may be used in various target detection, target classification, and visual target tracking systems and apparatuses.
The embodiment provides a neglected loading part detection method. FIG. 1 is a flow chart of a missing part detection method according to an embodiment of the present application, and FIG. 4 is an overall flowchart of the missing-part detection according to the embodiment of the present application. As shown in FIG. 1 and FIG. 4, the flow includes the following steps:
and step S101, acquiring a video of a part packaging process.
In this embodiment, the video is collected by an intelligent camera installed above the head of a worker performing the production-line packaging operation in the factory. The intelligent camera captures a real-time picture of the worker during the operation, and the captured real-time picture is decoded by a video encoding and decoding technology to obtain the video.
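As an illustration only, the following is a minimal sketch of pulling and decoding frames from such a camera, assuming an RTSP-capable camera and OpenCV for decoding; the stream URL and sampling interval are placeholders, not part of the patent.

```python
import cv2  # assumed decoding library, not specified by the patent

def frames_from_camera(stream_url: str, sample_every: int = 1):
    """Yield (frame_index, decoded BGR frame) pairs from the camera stream."""
    cap = cv2.VideoCapture(stream_url)
    index = 0
    try:
        while True:
            ok, frame = cap.read()
            if not ok:                      # stream ended or dropped
                break
            if index % sample_every == 0:   # optional frame sampling
                yield index, frame
            index += 1
    finally:
        cap.release()

# Example usage (hypothetical URL):
# for idx, frame in frames_from_camera("rtsp://<camera-ip>/stream"):
#     process(idx, frame)
```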
Step S102, detecting a target part from a video frame of a video.
In this embodiment, the target part is detected by a target detection algorithm, and then the detected target part is sent to a target classification network for judgment and verification, so as to ensure that the detected target part is a correct target.
And S103, tracking the track of the target part in the video frame of the video to obtain tracking track information.
In this embodiment, a multi-target tracking algorithm is adopted to track the track of the target part, so as to obtain tracking track information.
And step S104, judging whether the time length of the target part appearing in the video is greater than a preset threshold value or not according to the tracking track information.
And step S105, determining that the target part is not neglected to be installed under the condition that the time length of the target part appearing in the video is not greater than a preset threshold value.
Through steps S101 to S105, a video of the part packaging process is acquired; a target part is detected from the video frames of the video; the track of the target part is tracked in the video frames to obtain tracking track information; whether the time length for which the target part appears in the video is greater than a preset threshold value is judged according to the tracking track information; and the target part is determined to have been neglected (missed) when that time length is not greater than the preset threshold value. This solves the problem in the related art that part neglected-loading detection requires an additional inspection step after packaging, which reduces factory packaging efficiency, and also solves the problems that the accuracy of capacitance-based neglected-loading detection is difficult to guarantee and the types of detectable parts are limited. Whether a part has been neglected is judged efficiently in real time, and misjudgments caused by occasional false detections are avoided by accumulating the time length for which the target part appears.
It should be noted that, with reference to FIG. 4, the procedure of the neglected loading part detection method of this embodiment is as follows: first, monitoring video data of a factory packaging station is acquired; then the target detection module identifies the parts appearing in the video, the parts at least comprising a front lining plate, a side lining plate, a specification and a bottom support; the detection result of the target detection module is sent to the multi-target tracking module for target tracking, and the tracking track information of each target is determined; the tracking result (tracking track information) is then sent to the logic judgment module for neglected-loading judgment; and finally a result of whether any part has been neglected is output.
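Purely as an illustration of how these modules might be chained, the sketch below mirrors the detection → tracking → logic-judgment flow; the module interfaces, part-category names and threshold are assumptions, not the patent's actual implementation.

```python
# Illustrative pipeline sketch (assumed interfaces, not the patent's code).
REQUIRED_PARTS = {"front_lining_plate", "side_lining_plate", "specification", "bottom_support"}

def detect_missing_parts(frames, detector, tracker, min_frames: int):
    """Return the set of required part categories judged missing for one packing box."""
    seen_frames = {part: 0 for part in REQUIRED_PARTS}   # accumulated appearance per category
    for idx, frame in frames:
        detections = detector.detect(frame)              # target detection module (assumed API)
        tracks = tracker.update(detections, idx)         # multi-target tracking module (assumed API)
        present = {t.category for t in tracks}           # categories tracked in this frame
        for category in present & REQUIRED_PARTS:        # logic judgment: accumulate duration
            seen_frames[category] += 1
    # A category whose accumulated duration never exceeds the threshold is reported as missing.
    return {part for part, count in seen_frames.items() if count <= min_frames}
```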
The embodiments of the present application are described and illustrated below by means of preferred embodiments.
In some embodiments, after the video of the part packaging process is acquired in step S101, the following steps are further performed:
and step S106, determining that the target part is not installed under the condition that the target part is not detected in the video frame of the video.
In some embodiments, the detecting the target part from the video frame of the video in step S102 is implemented by:
s102-1, detecting suspected target parts and the types thereof through a target detection algorithm;
and S102-2, verifying the category of the suspected target part through a classification algorithm, and determining the suspected target part as the target part under the condition that the verification is passed.
Through the steps S102-1 to S102-2, a suspected target part and the type thereof are detected through a target detection algorithm; and verifying the category of the suspected target part through a classification algorithm, and determining the suspected target part as the target part under the condition that the verification is passed. The problems of false detection and missed detection caused by adopting a single detection algorithm are solved.
In some embodiments, the detecting the target part from the video frame of the video in step S102 can be further implemented by:
Step S102-3, detecting suspected target parts and their categories through a target detection algorithm, and acquiring a first confidence score generated by the target detection algorithm;
Step S102-4, verifying the category of the suspected target part through a classification algorithm, and acquiring a second confidence score generated by the classification algorithm;
Step S102-5, under the condition that the category of the suspected target part is verified, determining a joint confidence of the suspected target part according to the first confidence score and the second confidence score;
Step S102-6, under the condition that the joint confidence is greater than a preset confidence threshold, determining the suspected target part as the target part.
Through the steps S102-3 to S102-6, a suspected target part and the category thereof are detected through a target detection algorithm, and a first confidence score generated by the target detection algorithm is obtained; verifying the category of the suspected target part through a classification algorithm, and acquiring a second confidence score generated by the classification algorithm; under the condition that the class of the suspected target part is verified, determining a joint confidence coefficient of the suspected target part according to the first confidence coefficient score and the second confidence coefficient score; and under the condition that the joint confidence is greater than a preset confidence threshold, determining the suspected target part as the target part. The problems of false detection and missed detection caused by adopting a single detection algorithm are solved.
FIG. 2 is a flow diagram of detecting a target part from a video frame of a video according to an embodiment of the present application. The specific process of detecting the target part from the video frame of the video in this embodiment may refer to the flow shown in fig. 2, and the specific implementation process is as follows:
the algorithm used for detecting the target part in the embodiment mainly comprises a target detection algorithm and a target classification algorithm based on deep learning.
Before the target part is detected, video sequences of factory production-line workers performing packaging are collected, and images taken from the video frames are used as a training data set. The specific process is as follows: first, a training data set for target detection is made, in which the packing box and only the parts inside the packing box (a front lining plate, a side lining plate, a specification and a bottom support) are labeled as positive samples. In this way the trained target detection model only detects the packing box and the parts inside it, parts outside the packing box are not detected, the detected parts are more targeted, and the subsequent neglected-loading logic judgment is facilitated, improving the judgment accuracy.
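A minimal sketch of this labeling rule is shown below, assuming a simple list-of-dicts annotation format with axis-aligned boxes; the format and label names are illustrative assumptions.

```python
# Sketch (assumed annotation format): keep the packing box and only the parts
# whose box center lies inside a packing box as positive samples.
def keep_positive_samples(annotations):
    """annotations: list of {'label': str, 'box': (x1, y1, x2, y2)}."""
    box_regions = [a["box"] for a in annotations if a["label"] == "packing_box"]

    def center_inside(part_box, region):
        cx = (part_box[0] + part_box[2]) / 2
        cy = (part_box[1] + part_box[3]) / 2
        return region[0] <= cx <= region[2] and region[1] <= cy <= region[3]

    kept = []
    for a in annotations:
        if a["label"] == "packing_box":
            kept.append(a)
        elif any(center_inside(a["box"], r) for r in box_regions):
            kept.append(a)      # part inside a packing box: positive sample
    return kept
```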
And then, detecting the target part, wherein the specific process is as follows:
First, the packing box and the parts inside it (front lining board, side lining board, instruction sheet, bottom bracket) are detected by an object detection algorithm, and the coordinate frame of the object, its confidence score Conf_OD (i.e., the first confidence score described above) and its object type type_od are obtained.
Then, the coordinate frame of the detected part is sent to a deep-learning-based target classification algorithm. This is a five-class classification algorithm whose output type type_oc is one of front lining plate, side lining plate, specification, bottom bracket and background, and it also outputs the confidence score Conf_OC of that category (i.e., the second confidence score described above). It should be noted that these categories are only those of this specific example, and the classification is not limited to this set of categories.
And when the result obtained by the classification algorithm is the background category, deleting the target without sending the target to the multi-target tracking module.
Whether the target type type_oc obtained by the classification algorithm is consistent with the target type type_od obtained by the detection algorithm is then judged; only when type_oc and type_od are consistent is the joint confidence calculated, according to the following formula:
Conf = α·Conf_OD + (1 - α)·Conf_OC
Conf_OD and Conf_OC are the confidence scores produced by the detection and classification algorithms, respectively. In this specific design, α is set to 0.5, and when the final result Conf is greater than 0.7, the detected target part is judged to be correct.
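The following is a small sketch of this fusion rule, assuming the per-algorithm outputs are already available as plain values; the function name and background label are illustrative.

```python
# Sketch of the joint-confidence check: the detector and classifier must agree
# on the category and the fused score must exceed 0.7 (alpha = 0.5 per the text).
BACKGROUND = "background"   # assumed label for the classifier's background class

def confirm_target(type_od: str, conf_od: float,
                   type_oc: str, conf_oc: float,
                   alpha: float = 0.5, threshold: float = 0.7) -> bool:
    """Return True if the suspected target part is accepted as a target part."""
    if type_oc == BACKGROUND:    # classifier says background: discard the target
        return False
    if type_oc != type_od:       # detector and classifier disagree: reject
        return False
    conf = alpha * conf_od + (1 - alpha) * conf_oc
    return conf > threshold
```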
In some embodiments, the track of the target part is tracked in the video frame of the video in step S103, and the tracking track information is obtained by:
s103-1, tracking the target part by using a multi-target tracking algorithm to obtain a tracking track; wherein the number of the target parts is plural;
step S103-2, matching the tracking track with the existing tracking track; wherein the existing tracking track is the tracking track of the target part in the previous frame of video frame;
and step S103-3, under the condition that the tracking track is successfully matched with the existing tracking track, updating the tracking track into the tracking track information of the target part.
Tracking the target part by using a multi-target tracking algorithm through the steps S103-1 to S103-3 to obtain a tracking track; matching the tracking track with the existing tracking track; and under the condition that the tracking track is successfully matched with the existing tracking track, updating the tracking track into the tracking track information of the target part. The tracking of the target part track is realized.
In some embodiments, the track of the target part is tracked in the video frame of the video in step S103, and the following steps are further performed to obtain the tracking track information:
and S103-4, under the condition that the target part is not matched with the existing tracking track and/or the existing tracking track does not exist, judging that the target part is a new target part, and creating the tracking track as the tracking track information of the target part.
It should be noted that the multi-target tracking technology for tracking multiple target parts in the embodiment of the present application may be any multi-target tracking algorithm in the related art, for example, a multi-target tracking algorithm in a tracking-by-detection manner.
It should be noted that the multi-target tracking implementation process of the embodiment is further described as follows:
In this embodiment, a tracking-by-detection multi-target tracking algorithm is adopted: track tracking is performed on the target parts detected from each video frame image, the moving track of each target part is tracked, and the currently detected target part is matched with its existing tracking track by a matching algorithm to form the new tracking track of the target part. A target part has four states during tracking: Create, Update, Lost and Delete.
The track matching process calculates the IOU (intersection over union) between the detection coordinate frame of the current video frame and the tracking coordinate frame of the previous frame; when the IOU is greater than a threshold value, the matching is successful. When a detection coordinate frame can be matched with multiple tracking coordinate frames, the tracking coordinate frame with the largest IOU is taken for matching. The following cases are encountered during matching: when the tracking track of a target part can be matched with an existing tracking track, the tracking track is updated and the state of the target part is Update; if no existing tracking track matching the currently detected target part can be found, the target part is a new tracking target, a tracking track is created for it with state Create and a target number (ID) is assigned, and during logic judgment the accumulated track of the target is obtained by summing over the tracks of the different target numbers belonging to the same target part; if a certain tracking track has no corresponding detected target part in the current video frame, the tracked target part is lost in the video and its state is Lost; and when the state of a target part has remained Lost for more than 12 frames, its state is updated to Delete and its tracking track is deleted.
It should be noted that when the state of a target part is Update, its tracking track is submitted to the subsequent logic judgment, and whether the part is missing is determined from the track information and the ID.
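For illustration, the sketch below implements an IOU-based matching pass with the Create/Update/Lost/Delete states described above; the box format, the IOU threshold value and the greedy per-track matching order are assumptions, while the 12-frame Lost limit follows the text.

```python
# Sketch (assumptions noted above); boxes are (x1, y1, x2, y2) tuples.
def iou(a, b):
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / union if union > 0 else 0.0

def match_detections(tracks, detections, iou_thr=0.3, lost_limit=12):
    """tracks: list of {'box', 'state', 'lost'}; detections: list of boxes."""
    unmatched = list(range(len(detections)))
    for track in tracks:
        # choose the unmatched detection with the largest IOU against this track
        best, best_iou = None, iou_thr
        for j in unmatched:
            score = iou(track["box"], detections[j])
            if score >= best_iou:
                best, best_iou = j, score
        if best is not None:                          # matched: Update
            track.update(box=detections[best], state="Update", lost=0)
            unmatched.remove(best)
        else:                                         # unmatched track: Lost, then Delete
            track["lost"] += 1
            track["state"] = "Delete" if track["lost"] > lost_limit else "Lost"
    for j in unmatched:                               # unmatched detection: new track (Create)
        tracks.append({"box": detections[j], "state": "Create", "lost": 0})
    tracks[:] = [t for t in tracks if t["state"] != "Delete"]
    return tracks
```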
In some embodiments, the step S104 of determining whether the time length of the target part appearing in the video is greater than a preset threshold according to the tracking track information is implemented by:
Step S104-4, acquiring the part type of the target part from the tracking track information;
Step S104-5, judging whether the target part is a set target part or not according to the part type;
and Step S104-6, accumulating the time lengths corresponding to all the tracking tracks of the target part according to the part type under the condition that the target part is the set target part.
Through the steps S104-4 to S104-6, the part type of the target part is obtained from the tracking track information; whether the target part is a set target part is judged according to the part type; and, when the target part is a set target part, the time lengths corresponding to all tracking tracks of the target part are accumulated according to the part type. By means of this cumulative counting, the influence of false detections in individual frames on the judgment result of the detection module can be avoided.
It should be noted that accumulating the time lengths corresponding to all tracking tracks of the target part according to the part category covers the following two cases: first, when there is a single target part, the time lengths corresponding to all of its tracks are accumulated directly; second, when there are multiple target parts, the time lengths corresponding to all tracks of each target part are accumulated separately for each part.
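A minimal sketch of this per-category accumulation follows; the (category, frame_count) track representation and the frame-rate conversion are assumptions made for illustration.

```python
# Sketch: sum the frame counts of all tracks of the same part category, so that
# ID jumps caused by occlusion do not reset the accumulated duration.
from collections import defaultdict

def accumulate_durations(finished_tracks, fps: float):
    """finished_tracks: iterable of (category, frame_count); returns seconds per category."""
    frames_per_category = defaultdict(int)
    for category, frame_count in finished_tracks:
        frames_per_category[category] += frame_count
    return {cat: frames / fps for cat, frames in frames_per_category.items()}

# Example: two tracks of the same specification (ID jump) still count as one part.
# accumulate_durations([("specification", 10), ("specification", 25)], fps=25.0)
# -> {"specification": 1.4}
```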
In some embodiments, before determining whether the duration of the target part appearing in the video is greater than the preset threshold according to the tracking track information in step S104, the following steps are further performed:
step S104-1, detecting a packing box from the video frame;
s104-2, judging whether the packaging box is in a preset area in the video frame;
and S104-3, under the condition that the packaging box is located in the preset area in the video frame, judging whether the time length of the target part appearing in the video is greater than a preset threshold value or not according to the tracking track information in the step S104.
In some embodiments, after determining whether the packing box is located in the preset area in the video frame, the following steps are further performed:
And under the condition that the packing box is not in the preset area in the video frame, determining which target parts have not been neglected and/or reporting any neglected loading.
FIG. 3 is a flowchart of determining a missing part according to the tracking track according to an embodiment of the present application. In this embodiment, whether the time length for which the target part appears in the video is greater than a preset threshold is judged according to the tracking track information, and when that time length is not greater than the preset threshold the target part is determined to be missing; the specific process may refer to the flow shown in FIG. 3 and is described as follows:
(1) Obtain the position information of the current frame from the tracking track information, and then judge whether a packing box exists.
(2) Obtain the coordinates of the center point of the packing box from the position information of the current frame and judge whether the packing box is in the set packing area; if not, return; if so, continue to step (3).
(3) Judge whether the tracked target part belongs to a specified part type (such as a front lining plate, a side lining plate, a specification or a bottom bracket) according to the part type of the target part obtained from the tracking track information; if not, return to step (2); if so, continue to step (4).
(4) Cumulatively count the number of video frames in which each part (front lining plate, side lining plate, specification, bottom bracket) appears; because a worker easily occludes the parts and the ID of the same part may jump, the counts of target parts belonging to the same part category are accumulated together.
(5) Judge whether the packing box has left the packing area: because the conveying direction of the packing box is from right to left, this is done by judging whether the center point of the packing box has crossed the left line of the packing area; if not, return to step (4); if so, continue to step (6). Step (5) determines whether packing of the box is complete: packing is judged to be complete when the packing box leaves the packing area.
(6) Check whether the accumulated frame count of each part category from step (4) is larger than f; the multi-frame accumulation avoids misjudging the result because of a false detection in an individual frame. If the frame count is larger than f, the algorithm judges that the worker has put that category of part into the packing box, and continues to step (7).
(7) The parts that do not satisfy step (6) are judged to be neglected-loading (missing) parts, and the class of the missing target is reported and displayed. For example, when the front lining plate, the side lining plates, the specification and the bottom support all need to be judged, and the algorithm judges that the front lining plate, the side lining plates and the bottom support satisfy the condition, the neglected-loading class is reported as the specification.
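As a concluding illustration, the sketch below combines steps (5)-(7): once the box center has crossed the left boundary of the packing area, every required category whose accumulated frame count is not larger than f is reported as missing. The region representation, argument names and return convention are assumptions.

```python
# Sketch of the final missing-part judgment (assumed data layout).
def report_missing(box_center_x: float, area_left_x: float,
                   frame_counts: dict, required: set, f: int):
    """Return the set of missing categories once the box has left the packing area, else None."""
    if box_center_x > area_left_x:     # box is still inside the area (right of the left line)
        return None                    # keep accumulating, packing not finished yet
    missing = {cat for cat in required if frame_counts.get(cat, 0) <= f}
    return missing

# Example mirroring the text: only the specification fails the frame-count check.
# report_missing(10.0, 50.0,
#                {"front_lining_plate": 80, "side_lining_plate": 75,
#                 "bottom_support": 60, "specification": 3},
#                {"front_lining_plate", "side_lining_plate", "bottom_support", "specification"},
#                f=10)
# -> {"specification"}
```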
It should be noted that the steps illustrated in the above-described flow diagrams or in the flow diagrams of the figures may be performed in a computer system, such as a set of computer-executable instructions, and that, although a logical order is illustrated in the flow diagrams, in some cases, the steps illustrated or described may be performed in an order different than here.
The embodiment also provides a neglected loading part detection device, which is used for implementing the above embodiments and preferred embodiments; what has already been described will not be repeated. As used hereinafter, the terms "module," "unit," "subunit," and the like may implement a combination of software and/or hardware for a predetermined function. Although the means described in the embodiments below are preferably implemented in software, an implementation in hardware, or a combination of software and hardware, is also possible and contemplated.
Fig. 5 is a block diagram of a missing part detection apparatus according to an embodiment of the present application, and as shown in fig. 5, the apparatus includes:
the acquisition module 51 is used for acquiring a video of a part packaging process;
the target detection module 52 is coupled with the acquisition module 51 and is used for detecting a target part from a video frame of the video;
the multi-target tracking module 53 is coupled with the target detection module 52 and used for tracking the track of the target part in the video frame of the video to obtain tracking track information;
the logic judgment module 54 is coupled with the multi-target tracking module 53 and is used for judging whether the time length of the target part appearing in the video is greater than a preset threshold value according to the tracking track information;
and the processing module 55 is coupled with the logic judgment module 54 and is used for determining that the target part has been neglected (missed) when the time length for which the target part appears in the video is not greater than the preset threshold value.
In some of these embodiments, the apparatus further comprises:
and the first processing module is coupled to the target detection module 52 and is configured to determine that the target part is missing when the target part is not detected in the video frames of the video.
In some of these embodiments, the object detection module 52 includes:
the first detection unit is used for detecting suspected target parts and the types of the suspected target parts through a target detection algorithm;
and the first verification unit is coupled with the first detection unit and used for verifying the category of the suspected target part through a classification algorithm and determining the suspected target part as the target part under the condition that the verification is passed.
In some of these embodiments, the object detection module 52 further includes:
the second detection unit is used for detecting suspected target parts and the types thereof through a target detection algorithm and acquiring a first confidence score generated by the target detection algorithm;
the second verification unit is coupled with the second detection unit and used for verifying the category of the suspected target part through a classification algorithm and acquiring a second confidence score generated by the classification algorithm;
the first calculation unit is coupled with the second verification unit and used for determining the joint confidence of the suspected target part according to the first confidence score and the second confidence score under the condition that the verification of the category of the suspected target part is passed;
and the first determining unit is coupled with the first calculating unit and used for determining the suspected target part as the target part under the condition that the joint confidence degree is greater than a preset confidence degree threshold value.
In some of these embodiments, the multi-target tracking module 53 includes:
the first tracking unit is used for tracking the target part by utilizing a multi-target tracking algorithm to obtain a tracking track; wherein the number of the target parts is plural;
the first matching unit is coupled with the first tracking unit and used for matching the tracking track with the existing tracking track; wherein the existing tracking track is the tracking track of the target part in the previous frame of video frame;
and the first updating unit is coupled with the first matching unit and used for updating the tracking track into the tracking track information of the target part under the condition that the tracking track is successfully matched with the existing tracking track.
In some of these embodiments, the apparatus further comprises:
and the first creating unit is coupled with the first matching unit and used for judging that the target part is a new target part and creating the tracking track as the tracking track information of the target part under the condition that the target part is not matched with the existing tracking track and/or the existing tracking track does not exist.
In some embodiments, the logic determining module 54 comprises:
the first acquisition unit is used for acquiring the part type of the target part from the tracking track information;
the first judging unit is coupled with the first acquiring unit and used for judging whether the target part is a set target part according to the part type;
and the second calculating unit is coupled with the first judging unit and used for accumulating the time lengths corresponding to all the tracking tracks of the target part according to the part types under the condition that the target part is the set target part.
In some embodiments, the logic determining module 54 further comprises:
a third detection unit for detecting the packing box from the video frame;
and the second judging unit is coupled with the third detecting unit and is used for judging whether the packaging box is positioned in a preset area in the video frame.
The above modules may be functional modules or program modules, and may be implemented by software or hardware. For a module implemented by hardware, the modules may be located in the same processor; or the modules can be respectively positioned in different processors in any combination.
In addition, the method for neglected loading parts in the embodiment of the application described in conjunction with fig. 1 can be implemented by computer equipment. Fig. 6 is a hardware structure diagram of a computer device according to an embodiment of the present application.
The computer device may comprise a processor 61 and a memory 62 in which computer program instructions are stored.
Specifically, the processor 61 may include a Central Processing Unit (CPU) or an Application Specific Integrated Circuit (ASIC), or may be configured as one or more integrated circuits implementing the embodiments of the present application.
Memory 62 may include, among other things, mass storage for data or instructions. By way of example, and not limitation, memory 62 may include a Hard Disk Drive (Hard Disk Drive, abbreviated HDD), a floppy Disk Drive, a Solid State Drive (SSD), flash memory, an optical Disk, a magneto-optical Disk, tape, or a Universal Serial Bus (USB) Drive or a combination of two or more of these. Memory 62 may include removable or non-removable (or fixed) media, where appropriate. The memory 62 may be internal or external to the data processing apparatus, where appropriate. In a particular embodiment, the memory 62 is a Non-Volatile (Non-Volatile) memory. In particular embodiments, Memory 62 includes Read-Only Memory (ROM) and Random Access Memory (RAM). The ROM may be mask-programmed ROM, Programmable ROM (PROM), Erasable PROM (EPROM), Electrically Erasable PROM (EEPROM), Electrically rewritable ROM (earrom) or FLASH Memory (FLASH), or a combination of two or more of these, where appropriate. The RAM may be a Static Random-Access Memory (SRAM) or a Dynamic Random-Access Memory (DRAM), where the DRAM may be a Fast Page Mode Dynamic Random-Access Memory (FPMDRAM), an Extended Data Output Dynamic Random Access Memory (EDODRAM), a Synchronous Dynamic Random Access Memory (SDRAM), and the like.
The memory 62 may be used to store or cache various data files that need to be processed and/or used for communication, as well as possible computer program instructions executed by the processor 61.
The processor 61 may implement any of the missing part detection methods in the above embodiments by reading and executing computer program instructions stored in the memory 62.
In some of these embodiments, the computer device may also include a communication interface 63 and a bus 60. As shown in fig. 6, the processor 61, the memory 62, and the communication interface 63 are connected via a bus 60 to complete mutual communication.
The communication interface 63 is used for implementing communication between modules, devices, units and/or apparatuses in the embodiments of the present application. The communication interface 63 may also enable communication with other components such as: the data communication is carried out among external equipment, image/data acquisition equipment, a database, external storage, an image/data processing workstation and the like.
Bus 60 comprises hardware, software, or both coupling the components of the computer device to each other. Bus 60 includes, but is not limited to, at least one of the following: data Bus (Data Bus), Address Bus (Address Bus), Control Bus (Control Bus), Expansion Bus (Expansion Bus), and Local Bus (Local Bus). By way of example, and not limitation, Bus 60 may include an Accelerated Graphics Port (AGP) or other Graphics Bus, an Enhanced Industry Standard Architecture (EISA) Bus, a Front-Side Bus (FSB), a HyperTransport (HT) interconnect, an ISA (ISA) Bus, an InfiniBand (InfiniBand) interconnect, a Low Pin Count (LPC) Bus, a memory Bus, a Micro Channel Architecture (MCA) Bus, a Peripheral Component Interconnect (PCI) Bus, a PCI-Express (PCI-X) Bus, a Serial Advanced Technology Attachment (SATA) Bus, a Video electronics standards Association Local Bus (VLB) Bus, or other suitable Bus or a combination of two or more of these. Bus 60 may include one or more buses, where appropriate. Although specific buses are described and shown in the embodiments of the application, any suitable buses or interconnects are contemplated by the application.
The computer device can execute the neglected loading part detection method in the embodiment of the application based on the acquired video of the part packaging process, so that the neglected loading part detection method described in combination with fig. 1 is realized.
In addition, in combination with the neglected loading part detection method in the above embodiments, the embodiments of the present application may provide a computer-readable storage medium to implement. The computer readable storage medium having stored thereon computer program instructions; the computer program instructions, when executed by a processor, implement any of the missing part detection methods in the above embodiments.
The technical features of the embodiments described above may be arbitrarily combined, and for the sake of brevity, all possible combinations of the technical features in the embodiments described above are not described, but should be considered as being within the scope of the present specification as long as there is no contradiction between the combinations of the technical features.
The above-mentioned embodiments only express several embodiments of the present application, and the description thereof is more specific and detailed, but not construed as limiting the scope of the invention. It should be noted that, for a person skilled in the art, several variations and modifications can be made without departing from the concept of the present application, which falls within the scope of protection of the present application. Therefore, the protection scope of the present patent shall be subject to the appended claims.

Claims (12)

1. A neglected loading part detection method is characterized by comprising the following steps:
acquiring a video of a part packaging process;
detecting a target part from a video frame of the video;
tracking the track of the target part in the video frame of the video to obtain tracking track information;
judging whether the time length of the target part appearing in the video is greater than a preset threshold value or not according to the tracking track information;
and determining that the target part has been neglected (missed) under the condition that the time length for which the target part appears in the video is not greater than the preset threshold value.
2. The missing part detection method of claim 1 wherein after obtaining the video of the part packaging process, the method further comprises:
and determining that the target part is neglected to be installed under the condition that the target part is not detected from the video frame of the video.
3. The neglected loading part detection method of claim 1, wherein detecting a target part from a video frame of the video comprises:
detecting a suspected target part and the category thereof through a target detection algorithm;
and verifying the category of the suspected target part through a classification algorithm, and determining the suspected target part as the target part under the condition that the verification passes.
4. The neglected loading part detection method of claim 1, wherein detecting a target part from a video frame of the video comprises:
detecting a suspected target part and the category thereof through a target detection algorithm, and acquiring a first confidence score generated by the target detection algorithm;
verifying the category of the suspected target part through a classification algorithm, and acquiring a second confidence score generated by the classification algorithm;
determining a joint confidence of the suspected target part according to the first confidence score and the second confidence score when the verification of the category of the suspected target part passes;
and determining the suspected target part as the target part under the condition that the joint confidence is greater than a preset confidence threshold.
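A minimal sketch of the joint-confidence decision in claim 4 might look like the following; the product fusion rule and the 0.5 threshold are assumptions, the claim only requires that a joint confidence be derived from the two scores and compared with a preset confidence threshold:

def joint_confidence(det_score: float, cls_score: float) -> float:
    # Combine the first confidence score (target detection algorithm) with the
    # second confidence score (classification algorithm); a simple product is assumed here.
    return det_score * cls_score

def confirm_target_part(det_score: float, cls_score: float,
                        category_verified: bool, conf_threshold: float = 0.5) -> bool:
    # Claim 4: the suspected target part is determined as the target part only if the
    # category verification passed and the joint confidence exceeds the preset threshold.
    return category_verified and joint_confidence(det_score, cls_score) > conf_threshold

print(confirm_target_part(0.9, 0.8, category_verified=True))   # True  (0.72 > 0.5)
print(confirm_target_part(0.6, 0.7, category_verified=True))   # False (0.42 <= 0.5)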
5. The neglected loading part detection method of claim 1, wherein tracking the track of the target part in the video frame of the video to obtain tracking track information comprises:
tracking the target part by using a multi-target tracking algorithm to obtain a tracking track; wherein the number of the target parts is multiple;
matching the tracking track with the existing tracking track; wherein the existing tracking track is the tracking track of the target part in the previous frame of video frame;
and under the condition that the tracking track is successfully matched with the existing tracking track, updating the tracking track into the tracking track information of the target part.
6. The neglected loading part detection method of claim 5, further comprising:
and under the condition that the target part is not matched with the existing tracking track and/or the existing tracking track does not exist, judging that the target part is a new target part, and establishing the tracking track as the tracking track information of the target part.
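The matching and track-creation logic of claims 5 and 6 could be sketched as follows; the IoU association rule and the 0.3 cut-off are assumptions, the claims only require matching against the existing tracking track of the previous frame and creating a new track when matching fails or no existing track exists:

def iou(a, b):
    # Intersection-over-union of two boxes given as (x1, y1, x2, y2).
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / union if union else 0.0

def update_tracks(existing_tracks, detections, frame_idx, iou_threshold=0.3):
    # existing_tracks: {track_id: {"box": (x1, y1, x2, y2), "frames": [frame indices]}}
    # detections: boxes of the target parts detected in the current video frame.
    next_id = max(existing_tracks, default=-1) + 1
    for box in detections:
        best_id, best_iou = None, iou_threshold
        for tid, trk in existing_tracks.items():
            overlap = iou(box, trk["box"])
            if overlap > best_iou:
                best_id, best_iou = tid, overlap
        if best_id is not None:
            # Matching succeeded: update the existing tracking track (claim 5).
            existing_tracks[best_id]["box"] = box
            existing_tracks[best_id]["frames"].append(frame_idx)
        else:
            # No match or no existing track: treat as a new target part (claim 6).
            existing_tracks[next_id] = {"box": box, "frames": [frame_idx]}
            next_id += 1
    return existing_tracks

tracks = update_tracks({}, [(10, 10, 50, 50)], frame_idx=0)
tracks = update_tracks(tracks, [(12, 11, 52, 49)], frame_idx=1)
print(tracks)  # one track whose "frames" list now covers both frames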
7. The neglected loading part detection method according to claim 1, wherein judging whether the time length of the target part appearing in the video is greater than a preset threshold value according to the tracking track information comprises:
acquiring the part type of the target part from the tracking track information;
judging whether the target part is a set target part or not according to the part type;
and under the condition that the target part is a set target part, accumulating the time lengths corresponding to all tracking tracks of the target part according to the part type.
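A sketch of the per-type duration accumulation in claim 7; the set of "set target" part types and the frame rate are assumptions chosen for the example:

from collections import defaultdict

EXPECTED_PART_TYPES = {"screw", "bracket", "manual"}  # assumed set target parts

def accumulate_durations(tracks, fps: float):
    # tracks: (part_type, frame_count) pairs taken from the tracking track information;
    # the durations of all tracking tracks of the same part type are summed.
    totals = defaultdict(float)
    for part_type, frame_count in tracks:
        if part_type in EXPECTED_PART_TYPES:         # only set target parts are counted
            totals[part_type] += frame_count / fps   # accumulate per part type
    return dict(totals)

print(accumulate_durations([("screw", 50), ("screw", 25), ("cable", 100)], fps=25.0))
# {'screw': 3.0} -- 'cable' is ignored because it is not a set target part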
8. The neglected loading part detection method of claim 1, wherein before judging whether the time length of the target part appearing in the video is greater than a preset threshold value according to the tracking track information, the method further comprises:
detecting a packaging box from the video frame;
judging whether the packaging box is in a preset area in the video frame;
and under the condition that the packaging box is located in a preset area in the video frame, judging whether the time length of the target part appearing in the video is greater than the preset threshold value or not according to the tracking track information.
9. The neglected loading part detection method of claim 8, wherein after judging whether the packaging box is in the preset area in the video frame, the method further comprises: under the condition that the packaging box is not in the preset area in the video frame, determining the target parts that are not neglected and/or reporting the neglected loading.
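The region check of claims 8 and 9 might be sketched like this; the coordinates of the preset area and the centre-point containment rule are assumptions:

def box_centre(box):
    x1, y1, x2, y2 = box
    return (x1 + x2) / 2.0, (y1 + y2) / 2.0

def package_in_preset_area(package_box, preset_area) -> bool:
    # preset_area is (x1, y1, x2, y2) in the same coordinates as the detected box.
    cx, cy = box_centre(package_box)
    return preset_area[0] <= cx <= preset_area[2] and preset_area[1] <= cy <= preset_area[3]

# Only when the packaging box lies in the preset area is the duration check of
# claim 1 carried out; otherwise the handling of claim 9 (reporting) applies.
print(package_in_preset_area((100, 120, 300, 340), preset_area=(0, 0, 640, 480)))  # True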
10. A neglected loading part detection device, characterized by comprising:
the acquisition module is used for acquiring a video of a part packaging process;
the target detection module is used for detecting a target part from a video frame of the video;
the multi-target tracking module is used for tracking the track of the target part in the video frame of the video to obtain tracking track information;
the logic judgment module is used for judging whether the time length of the target part appearing in the video is greater than a preset threshold value or not according to the tracking track information;
and the processing module is used for determining that the target part is not neglected to be installed under the condition that the time length of the target part appearing in the video is not greater than a preset threshold value.
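An illustrative wiring of the five modules listed in claim 10; the callables passed in are stand-ins (assumptions) for the acquisition, detection, tracking, judgement and processing logic:

class NeglectedLoadingPartDetector:
    def __init__(self, acquire, detect, track, judge, process):
        self.acquire = acquire    # acquisition module: yields video frames
        self.detect = detect      # target detection module
        self.track = track        # multi-target tracking module
        self.judge = judge        # logic judgement module (duration vs. threshold)
        self.process = process    # processing module (neglected / not neglected decision)

    def run(self):
        # Feed every frame through detection and tracking, then hand the
        # accumulated tracking track information to the judgement and processing modules.
        tracks = {}
        for frame_idx, frame in enumerate(self.acquire()):
            detections = self.detect(frame)
            tracks = self.track(tracks, detections, frame_idx)
        return self.process(self.judge(tracks))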
11. A computer device comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor implements the neglected loading part detection method of any one of claims 1 to 9 when executing the computer program.
12. A computer-readable storage medium, on which a computer program is stored, wherein the computer program, when executed by a processor, carries out the neglected loading part detection method according to any one of claims 1 to 9.
CN202010390119.XA 2020-05-08 2020-05-08 Missing part detection method, device, computer equipment and storage medium Active CN111681208B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010390119.XA CN111681208B (en) 2020-05-08 2020-05-08 Missing part detection method, device, computer equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010390119.XA CN111681208B (en) 2020-05-08 2020-05-08 Missing part detection method, device, computer equipment and storage medium

Publications (2)

Publication Number Publication Date
CN111681208A true CN111681208A (en) 2020-09-18
CN111681208B CN111681208B (en) 2023-08-22

Family

ID=72433661

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010390119.XA Active CN111681208B (en) 2020-05-08 2020-05-08 Missing part detection method, device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111681208B (en)

Patent Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104221035A (en) * 2012-03-06 2014-12-17 A-1包装解决方案有限公司 A radio frequency identification system for tracking and managing materials in a manufacturing process
US20130307974A1 (en) * 2012-05-17 2013-11-21 Canon Kabushiki Kaisha Video processing apparatus and method for managing tracking object
CN105046220A (en) * 2015-07-10 2015-11-11 华为技术有限公司 Multi-target tracking method, apparatus and equipment
CN106324685A (en) * 2016-09-05 2017-01-11 Tcl王牌电器(惠州)有限公司 Method and device for detecting neglected loading of accessory in packaging box
US20180144295A1 (en) * 2016-11-18 2018-05-24 ATC Logistic & Electronics, Inc. Loss prevention tracking system and methods
CN106846357A (en) * 2016-12-15 2017-06-13 重庆凯泽科技股份有限公司 A kind of suspicious object detecting method and device
WO2018133666A1 (en) * 2017-01-17 2018-07-26 腾讯科技(深圳)有限公司 Method and apparatus for tracking video target
CN107203804A (en) * 2017-05-19 2017-09-26 苏州易信安工业技术有限公司 A kind of data processing method, apparatus and system
CN107451601A (en) * 2017-07-04 2017-12-08 昆明理工大学 Moving Workpieces recognition methods based on the full convolutional network of space-time context
US20190244030A1 (en) * 2018-02-07 2019-08-08 Hitachi, Ltd. Object tracking in video using better object area
CN109003026A (en) * 2018-07-11 2018-12-14 上海南软信息科技有限公司 Cargo transport status tracking method, apparatus and medium based on the interconnection of object object
CN110040470A (en) * 2019-05-21 2019-07-23 精英数智科技股份有限公司 A kind of monitoring method of artificial intelligence video identification belt deviation
CN110310273A (en) * 2019-07-01 2019-10-08 南昌青橙视界科技有限公司 Equipment core detecting method, device and electronic equipment in industry assembling scene
CN110309779A (en) * 2019-07-01 2019-10-08 南昌青橙视界科技有限公司 Assemble monitoring method, device and the electronic equipment of part operational motion in scene
CN110807377A (en) * 2019-10-17 2020-02-18 浙江大华技术股份有限公司 Target tracking and intrusion detection method, device and storage medium

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112738466A (en) * 2020-12-25 2021-04-30 浙江大华技术股份有限公司 Packaging accessory complete detection method, device and system
CN113392807A (en) * 2021-07-06 2021-09-14 华域视觉科技(上海)有限公司 System and method for identifying boxing missing
CN113392807B (en) * 2021-07-06 2022-11-29 华域视觉科技(上海)有限公司 System and method for identifying neglected loading of boxed goods

Also Published As

Publication number Publication date
CN111681208B (en) 2023-08-22

Similar Documents

Publication Publication Date Title
CN111078908A (en) Data annotation detection method and device
CN110561416B (en) Laser radar repositioning method and robot
US8705814B2 (en) Apparatus and method for detecting upper body
CN111681208A (en) Neglected loading part detection method and device, computer equipment and storage medium
US10867393B2 (en) Video object detection
CN109147341A (en) Violation vehicle detection method and device
CN111915549A (en) Defect detection method, electronic device and computer readable storage medium
CN113240880A (en) Fire detection method and device, electronic equipment and storage medium
CN113379999A (en) Fire detection method and device, electronic equipment and storage medium
CN110307617B (en) Heat exchanger, filth blockage detection method, device and system thereof, and electrical equipment
CN105302715B (en) The acquisition methods and device of application program user interface
CN110796129A (en) Text line region detection method and device
CN116990768A (en) Predicted track processing method and device, electronic equipment and readable medium
CN116940963A (en) Detection device, control method for detection device, model generation method by model generation device for generating learned model, information processing program, and recording medium
CN112214629B (en) Loop detection method based on image recognition and movable equipment
CN113870754A (en) Method and system for judging defects of panel detection electronic signals
CN112560765A (en) Pedestrian flow statistical method, system, equipment and storage medium based on pedestrian re-identification
KR102232797B1 (en) Object identification apparatus, method thereof and computer readable medium having computer program recorded therefor
CN104268901A (en) High-speed moving object detection processing method and system based on linear array image sensor
CN111368624A (en) Loop detection method and device based on generation of countermeasure network
US20230394824A1 (en) Detection of reflection objects in a sequence of image frames
CN111597959B (en) Behavior detection method and device and electronic equipment
CN109885771B (en) Application software screening method and service equipment
CN115909153A (en) Method, device, system and medium for determining abnormal event of pedestrian
CN116416543A (en) Illegal behavior detection method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant