CN112560986A - Image detection method and device, electronic equipment and storage medium

Info

Publication number
CN112560986A
CN112560986A (application CN202011559951.4A)
Authority
CN
China
Prior art keywords
area
escalator
state
image
elevator
Prior art date
Legal status
Granted
Application number
CN202011559951.4A
Other languages
Chinese (zh)
Other versions
CN112560986B (en)
Inventor
林少波
曾星宇
赵瑞
Current Assignee
Shanghai Sensetime Intelligent Technology Co Ltd
Original Assignee
Shanghai Sensetime Intelligent Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Shanghai Sensetime Intelligent Technology Co Ltd
Priority: CN202011559951.4A (granted as CN112560986B)
Related applications: CN202111579927.1A (CN114283305A), JP2022532078A (JP2023510477A), PCT/CN2021/101619 (WO2022134504A1), KR1020227018450A (KR20220095218A)
Publication of CN112560986A
Application granted; publication of CN112560986B
Legal status: Active

Classifications

    • G06V 20/50 — Scenes; scene-specific elements: context or environment of the image
    • G06V 20/52 — Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G06V 20/64 — Three-dimensional objects
    • G06V 10/25 — Determination of region of interest [ROI] or a volume of interest [VOI]
    • G06V 10/267 — Segmentation of patterns in the image field by performing operations on regions, e.g. growing, shrinking or watersheds
    • G06V 10/751 — Comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching
    • G06V 10/764 — Recognition or understanding using classification, e.g. of video objects
    • G06V 10/809 — Fusion of classification results, e.g. where the classifiers operate on the same input data
    • G06F 18/241 — Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F 18/2132 — Feature extraction based on discrimination criteria, e.g. discriminant analysis
    • G06N 3/045 — Neural networks: combinations of networks
    • G06N 3/08 — Neural networks: learning methods
    • G06T 7/11 — Region-based segmentation
    • G06T 7/136 — Segmentation involving thresholding
    • G06T 7/194 — Segmentation involving foreground-background segmentation
    • G06T 7/62 — Analysis of geometric attributes of area, perimeter, diameter or volume
    • B66B 25/003 — Control of escalators or moving walkways: methods or algorithms therefor
    • G08B 21/00 — Alarms responsive to a single specified undesired or abnormal condition and not otherwise provided for
    • G06T 2207/10016 — Image acquisition modality: video; image sequence
    • G06T 2207/20081 — Special algorithmic details: training; learning
    • G06V 2201/07 — Target detection


Abstract

The present disclosure relates to an image detection method and apparatus, an electronic device, and a storage medium. The method includes: acquiring a first image of an escalator; performing area detection on the first image and determining a first area of the escalator in the first image; performing elevator state recognition on a first area image corresponding to the first area and determining at least one state recognition result of the escalator, where the state recognition result indicates that the escalator is in an empty state or a non-empty state; and determining the state of the escalator according to the at least one state recognition result. Embodiments of the present disclosure can improve the accuracy of recognizing the running state of the escalator.

Description

Image detection method and device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of computer vision technologies, and in particular, to an image detection method and apparatus, an electronic device, and a storage medium.
Background
With economic development and the continuous improvement of infrastructure, escalators are used ever more widely in shopping malls, office buildings, public transportation, and other scenes. During the daily operation of an escalator, the area where it is located needs to be strictly managed to ensure safe operation and avoid accidents. In the related art, automatic detection schemes for the elevator area usually start and stop the elevator based on infrared sensing or gravity sensing, which is prone to false recognition and yields a poor detection effect.
Disclosure of Invention
The present disclosure provides an image detection technical scheme.
According to an aspect of the present disclosure, there is provided an image detection method including: acquiring a first image of an escalator; performing area detection on the first image and determining a first area of the escalator in the first image; performing elevator state recognition on a first area image corresponding to the first area and determining at least one state recognition result of the escalator, where the state recognition result indicates that the escalator is in an empty state or a non-empty state; and determining the state of the escalator according to the at least one state recognition result.
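The four steps of the claimed method can be sketched as a short pipeline. This is a minimal illustration: `detect_region`, the `recognizers`, and `fuse` are hypothetical placeholders for the trained models and fusion rule described in the disclosure, not a real API.

```python
# Minimal sketch of steps S11-S14; detect_region returns the cropped
# first-area image, each recognizer yields one state recognition result,
# and fuse combines the results into the final escalator state.
def detect_escalator_state(first_image, detect_region, recognizers, fuse):
    first_area_image = detect_region(first_image)             # S12: first area
    results = [rec(first_area_image) for rec in recognizers]  # S13: >= 1 results
    return fuse(results)                                      # S14: final state
```

With a single recognizer and an identity fusion rule, the fused state is simply that recognizer's output.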
In one possible implementation, the determining the state of the escalator according to the at least one state recognition result includes: when there are a plurality of state recognition results, determining a state discrimination value of the escalator according to the plurality of state recognition results and the weight of each state recognition result; and when the state discrimination value is greater than or equal to a first threshold, determining that the escalator is in the empty state.
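The weighted fusion just described can be sketched as follows; the encoding of each result as 0/1 and the `first_threshold` default are illustrative assumptions, not values given in the disclosure.

```python
# Sketch of the weighted state discrimination value of the disclosure.
def escalator_state(results, weights, first_threshold=0.5):
    # results[i] is 1 if recognizer i reports the empty state, else 0
    discrimination_value = sum(w * r for w, r in zip(weights, results))
    return "empty" if discrimination_value >= first_threshold else "non-empty"
```

The weights let more reliable recognizers dominate the decision without any single recognizer being able to flip it alone.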
In one possible implementation, the state recognition result includes a first state recognition result, and the performing elevator state recognition on a first area image corresponding to the first area to determine at least one state recognition result of the escalator includes: classifying the first area image to obtain the first state recognition result of the escalator.
In one possible implementation, the state recognition result includes a second state recognition result, and the performing elevator state recognition on a first area image corresponding to the first area to determine at least one state recognition result of the escalator includes: segmenting the first area image into a background area and a foreground area where the escalator is located; adjusting the pixel values of the background area to obtain an adjusted second area image; and classifying the second area image to obtain the second state recognition result of the escalator.
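The background-adjustment step of this second route can be sketched on plain nested lists; a real implementation would use a segmentation network and image arrays, and the constant `fill_value` is an assumption about how the background pixels are adjusted.

```python
# Sketch: keep foreground (escalator) pixels, set background pixels to a
# constant, producing the adjusted "second area image" fed to the classifier.
def suppress_background(first_area_image, foreground_mask, fill_value=0):
    return [
        [px if fg else fill_value for px, fg in zip(row, mask_row)]
        for row, mask_row in zip(first_area_image, foreground_mask)
    ]
```

Suppressing the background this way removes distracting context so the classifier judges only the escalator itself.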
In one possible implementation, the state recognition result includes a third state recognition result, and the performing elevator state recognition on a first area image corresponding to the first area to determine at least one state recognition result of the escalator includes: performing pixel matching between the first area image and a preset reference image and determining the matching area ratio between the first area image and the reference image, where the reference image includes an area image corresponding to the escalator in the empty state; and when the matching area ratio is greater than or equal to a second threshold, determining that the third state recognition result is that the escalator is in the empty state.
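The pixel-matching route can be sketched as below; the per-pixel tolerance `tol` and the `second_threshold` default are illustrative assumptions, since the disclosure does not specify how individual pixels are compared.

```python
# Sketch of the third recognition route: compare the first area image
# against an empty-state reference and compute the matching area ratio.
def matching_area_ratio(area_image, reference_image, tol=10):
    pairs = [(p, q) for row_a, row_r in zip(area_image, reference_image)
             for p, q in zip(row_a, row_r)]
    matched = sum(1 for p, q in pairs if abs(p - q) <= tol)
    return matched / len(pairs)

def third_state_result(area_image, reference_image, second_threshold=0.9):
    # escalator judged empty when enough pixels match the empty-state reference
    ratio = matching_area_ratio(area_image, reference_image)
    return "empty" if ratio >= second_threshold else "non-empty"
```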
In one possible implementation, the state recognition result includes a fourth state recognition result, and the performing elevator state recognition on a first area image corresponding to the first area to determine at least one state recognition result of the escalator includes: performing first target detection on the first area image, and determining whether a first target exists in the first area image; and under the condition that the first target does not exist in the first area image, determining that the fourth state identification result is that the escalator is in an empty state.
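This fourth route reduces to checking whether the detector finds anything in the first area image; `detect_first_targets` is a hypothetical stand-in for the first-target (e.g. pedestrian) detector of the disclosure.

```python
# Sketch of the fourth recognition route: no first target found in the
# first area image -> the escalator is judged to be in the empty state.
def fourth_state_result(first_area_image, detect_first_targets):
    targets = detect_first_targets(first_area_image)
    return "empty" if not targets else "non-empty"
```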
In one possible implementation, the method further includes: and sending an elevator shutdown signal under the condition that the escalator is in an empty elevator state, wherein the elevator shutdown signal is used for indicating the escalator to stop running.
In one possible implementation, the method further includes: and sending an elevator starting signal under the condition that the escalator is in a non-empty state and the escalator stops running, wherein the elevator starting signal is used for indicating the escalator to run.
In one possible implementation, the method further includes: performing second target detection on the first image, and determining a third area of a second target in the first image; determining a detection result of the second target according to the position relation between the first area and the third area, wherein the detection result comprises that the second target is on the escalator or not.
In one possible implementation, the determining a detection result of the second target according to the position relationship between the first area and the third area includes: when the area ratio between a fourth area and the third area is greater than or equal to a third threshold, determining that the detection result is that the second target is on the escalator, where the fourth area includes the intersection area between the first area and the third area.
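With axis-aligned boxes, the intersection-based test just described can be sketched as follows; the `(x1, y1, x2, y2)` box representation and the `third_threshold` default are assumptions for illustration.

```python
# Sketch of the on-escalator test: the fourth area is the intersection of
# the escalator region (first area) and the target box (third area); the
# target is on the escalator when the intersection covers at least
# third_threshold of the target box.
def second_target_on_escalator(first_area, third_area, third_threshold=0.5):
    ax1, ay1, ax2, ay2 = first_area   # escalator region
    bx1, by1, bx2, by2 = third_area   # second-target box
    iw = max(0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0, min(ay2, by2) - max(ay1, by1))
    target_area = (bx2 - bx1) * (by2 - by1)
    return target_area > 0 and (iw * ih) / target_area >= third_threshold
```

Normalizing by the target box (rather than by the union, as in IoU) makes the test robust to the escalator region being much larger than the target.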
In one possible implementation, the second target includes an article prohibited from entering the escalator, and the method further includes: sending warning information when the second target is on the escalator.
According to an aspect of the present disclosure, there is provided an image detection apparatus including: the image acquisition module is used for acquiring a first image of the escalator; the area detection module is used for carrying out area detection on the first image and determining a first area of the escalator in the first image; the state identification module is used for carrying out elevator state identification on a first area image corresponding to the first area and determining at least one state identification result of the escalator, wherein the state identification result comprises that the escalator is in an empty elevator state or in a non-empty elevator state; and the state determining module is used for determining the state of the escalator according to the at least one state identification result.
In one possible implementation, the state determination module includes: the discrimination value determining submodule is used for determining the state discrimination value of the escalator according to a plurality of state recognition results and the weights of the state recognition results under the condition that the state recognition results are multiple; and the state determining submodule is used for determining that the escalator is in an empty state under the condition that the state discrimination value is greater than or equal to a first threshold value.
In one possible implementation manner, the state recognition result includes a first state recognition result, and the state recognition module includes: and the first result determining submodule is used for carrying out classification processing on the first area image to obtain a first state identification result of the escalator.
In one possible implementation, the state recognition result includes a second state recognition result, and the state recognition module includes: a segmentation submodule, configured to segment the first area image into a background area and a foreground area where the escalator is located; a pixel adjustment submodule, configured to adjust the pixel values of the background area to obtain an adjusted second area image; and a second result determination submodule, configured to classify the second area image to obtain the second state recognition result of the escalator.
In one possible implementation manner, the state recognition result includes a third state recognition result, and the state recognition module includes: the pixel matching sub-module is used for performing pixel matching on the first area image and a preset reference image and determining the matching area ratio between the first area image and the reference image, and the reference image comprises an area image corresponding to the escalator in an empty elevator state; and the third result determining submodule is used for determining that the third state identification result is that the escalator is in an empty state under the condition that the occupation ratio of the matching area is greater than or equal to a second threshold value.
In a possible implementation manner, the state recognition result includes a fourth state recognition result, and the state recognition module includes: the detection submodule is used for carrying out first target detection on the first area image and determining whether a first target exists in the first area image; and the fourth result determining submodule is used for determining that the fourth state identification result is that the escalator is in an empty state under the condition that the first target does not exist in the first area image.
In one possible implementation, the apparatus further includes: and the shutdown signal sending module is used for sending an elevator shutdown signal under the condition that the escalator is in an empty elevator state, and the elevator shutdown signal is used for indicating the escalator to stop running.
In one possible implementation, the apparatus further includes: the starting signal sending module is used for sending an elevator starting signal under the condition that the escalator is in a non-empty escalator state and the escalator stops running, and the elevator starting signal is used for indicating the escalator to run.
In one possible implementation, the apparatus further includes: the target detection module is used for carrying out second target detection on the first image and determining a third area of a second target in the first image; and the detection result determining module is used for determining the detection result of the second target according to the position relation between the first area and the third area, and the detection result comprises that the second target is positioned on the escalator or not positioned on the escalator.
In one possible implementation manner, the detection result determining module includes: a determination submodule configured to determine that the detection result is that the second target is on the escalator when an area ratio between a fourth area and the third area is greater than or equal to a third threshold, where the fourth area includes an intersection area between the first area and the third area.
In one possible implementation, the second target includes an article prohibited from entering the escalator, and the apparatus further includes: a warning information sending module, configured to send warning information when the second target is on the escalator.
According to an aspect of the present disclosure, there is provided an electronic device including: a processor; a memory for storing processor-executable instructions; wherein the processor is configured to invoke the memory-stored instructions to perform the above-described method.
According to an aspect of the present disclosure, there is provided a computer readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement the above-described method.
According to embodiments of the present disclosure, the area where the escalator is located can be detected in the image, at least one state recognition result can be recognized from the area image of the escalator, and the empty or non-empty state of the escalator can be determined according to the at least one state recognition result, thereby improving both the positioning accuracy of the escalator area and the recognition accuracy of the running state of the escalator.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure. Other features and aspects of the present disclosure will become apparent from the following detailed description of exemplary embodiments, which proceeds with reference to the accompanying drawings.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the disclosure.
Fig. 1 shows a flowchart of an image detection method according to an embodiment of the present disclosure.
Fig. 2a and 2b illustrate schematic diagrams of region detection of an image detection method according to an embodiment of the present disclosure.
Fig. 3 shows a schematic view of a first region according to an embodiment of the present disclosure.
Fig. 4a and 4b show schematic diagrams of a second object according to an embodiment of the present disclosure.
Fig. 5 is a schematic diagram illustrating a processing procedure of an image detection method according to an embodiment of the present disclosure.
Fig. 6 illustrates a block diagram of an image detection apparatus according to an embodiment of the present disclosure.
Fig. 7 shows a block diagram of an electronic device according to an embodiment of the present disclosure.
Fig. 8 shows a block diagram of an electronic device according to an embodiment of the present disclosure.
Detailed Description
Various exemplary embodiments, features and aspects of the present disclosure will be described in detail below with reference to the accompanying drawings. In the drawings, like reference numbers can indicate functionally identical or similar elements. While the various aspects of the embodiments are presented in drawings, the drawings are not necessarily drawn to scale unless specifically indicated.
The word "exemplary" is used herein to mean "serving as an example, embodiment, or illustration." Any embodiment described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments.
The term "and/or" herein merely describes an association between associated objects and means that three relationships may exist; for example, A and/or B may mean: A exists alone, A and B exist simultaneously, or B exists alone. In addition, the term "at least one" herein means any one of a plurality or any combination of at least two of a plurality; for example, including at least one of A, B, and C may mean including any one or more elements selected from the set consisting of A, B, and C.
Furthermore, in the following detailed description, numerous specific details are set forth in order to provide a better understanding of the present disclosure. It will be understood by those skilled in the art that the present disclosure may be practiced without some of these specific details. In some instances, methods, means, elements and circuits that are well known to those skilled in the art have not been described in detail so as not to obscure the present disclosure.
In recent years, with the wave of artificial intelligence driven by deep learning, "industry + Artificial Intelligence (AI)" has become one of the goals of industrial development in the new era. New fields such as smart factories, smart stores, and smart agriculture keep emerging, and the industrial application of AI to security, namely intelligent security, is particularly popular. In the field of public safety, the escalator is a common means of travel in modern life and also a public facility requiring strict and standardized management. The purpose of the smart elevator is to improve the work efficiency of management departments and strengthen the supervision of daily elevator operation. In the related art, only partial detection functions for the escalator can be realized, and no complete smart elevator solution is provided.
The image detection method according to embodiments of the present disclosure can be applied to scenes such as shopping malls, office buildings, and public transportation. Based on deep learning, images or video streams of the area where an escalator is located are processed and analyzed to realize functions such as locating the elevator area, distinguishing the empty state from the non-empty state of the elevator, and detecting and recognizing key targets (such as baby carriages, wheelchairs, and luggage cases). This builds a complete smart elevator solution, improves the detection effect, reduces elevator operating costs, and lowers the risk of safety accidents.
Fig. 1 illustrates a flowchart of an image detection method according to an embodiment of the present disclosure, as illustrated in fig. 1, the image detection method including:
in step S11, a first image of the escalator is acquired;
in step S12, performing area detection on the first image, and determining a first area of the escalator in the first image;
in step S13, performing elevator state recognition on a first area image corresponding to the first area, and determining at least one state recognition result of the escalator, where the state recognition result indicates that the escalator is in an empty state or a non-empty state;
in step S14, the state of the escalator is determined based on the at least one state recognition result.
In a possible implementation manner, the image detection method may be executed by an electronic device such as a terminal device or a server, where the terminal device may be a User Equipment (UE), a mobile device, a User terminal, a handheld device, a computing device, a vehicle-mounted device, a wearable device, or the like, and the method may be implemented by a processor calling a computer readable instruction stored in a memory. Alternatively, the method may be performed by a server.
For example, at least one image capture device, such as at least one camera facing the escalator, may be disposed at the position of the escalator to be detected, so as to capture a video stream of the escalator and detect, in the video stream pictures, the escalator and the objects riding on it (such as pedestrians and articles carried by pedestrians). The present disclosure does not limit the installation position of the image capture device, the capture mode of the video stream, or the specific area covered by the video stream.
In one possible implementation, in step S11, a first image of the escalator may be acquired. Each image frame of the video stream may be taken as a first image; or sampling the video stream at a certain time interval, and taking the sampled image frame as a first image; key frames in the video stream may also be acquired as the first image. The present disclosure does not limit the manner in which the first image is acquired.
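Fixed-interval sampling, one of the acquisition strategies just mentioned, can be sketched in a few lines; treating the stream as an in-memory sequence is a simplification for illustration.

```python
# Sketch: take every interval-th frame of the video stream as a first image.
def sample_first_images(video_stream, interval):
    return [frame for i, frame in enumerate(video_stream) if i % interval == 0]
```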
In one possible implementation, in step S12, the first image may be subjected to area detection by the trained area detection network, and the first area of the escalator in the first image is determined. The area detection network may be, for example, a convolutional neural network, and the present disclosure does not limit the specific network structure and training mode of the area detection network.
Fig. 2a and 2b illustrate schematic diagrams of region detection of an image detection method according to an embodiment of the present disclosure. Figure 2a shows a first image of an escalator; fig. 2b shows the result of the region detection in the first image.
In one possible implementation, the area detection network may employ a target detection model or an area segmentation model. When the target detection model is used, the first area detected may be a rectangular detection frame of the area where the escalator is located, such as the detection frame 21 in fig. 2 b; when using the zone segmentation model, the first zone detected may be an irregular zone where the escalator is located, such as zone 22 in fig. 2 b. The present disclosure does not limit the specific network model employed by the area detection network.
When the target detection model is adopted, the detection frame should contain the complete elevator area while reducing the background area as much as possible, thereby reducing the influence of background noise on subsequent processing. As shown in fig. 2b, taking the elevator on the left as an example, the lower-left corner of the dark elevator area is used as the lower-left vertex of the detection frame 21, and the upper-right corner of the dark elevator area is used as the upper-right vertex of the detection frame 21. When the region segmentation model is adopted, the dark elevator area within the detection frame can be segmented, and a polygon can be drawn with the elevator handrails as the boundary, such as the area 22 in fig. 2b.
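The corner rule above — using the extreme corners of the segmented elevator area as the vertices of the detection frame — amounts to taking the tight bounding box of a binary mask. A hedged sketch follows; the mask contents are hypothetical:

```python
import numpy as np

def mask_to_bbox(mask):
    """Tight bounding box (x_min, y_min, x_max, y_max) around a binary
    mask of the elevator area, so the detection frame covers the
    complete elevator while minimising the included background."""
    ys, xs = np.nonzero(mask)
    return int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max())

# hypothetical 6x8 segmentation mask with the elevator area set to 1
mask = np.zeros((6, 8), dtype=np.uint8)
mask[2:5, 3:7] = 1
bbox = mask_to_bbox(mask)  # (3, 2, 6, 4)
```

In image coordinates with the origin at the top-left, (x_min, y_max) and (x_max, y_min) correspond to the lower-left and upper-right vertices mentioned in the text.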
In a possible implementation manner, after the first area is obtained, in step S13, a first area image corresponding to the first area may be cropped from the first image, and the state of the escalator — whether it is in an empty state or a non-empty state — may be identified according to the first area image. When no pedestrians and/or articles are present on the escalator, the escalator is in an empty state; when pedestrians and/or articles are present on the escalator, the escalator is in a non-empty state.
In a possible implementation manner, the first area image may be identified through at least one identification manner, so as to obtain a corresponding state identification result for each. The identification manners may, for example, comprise at least one of: directly classifying the first area image; adjusting the pixel values of a background area in the first area image and classifying the adjusted image; comparing the first area image with a reference image of an empty elevator; and detecting whether objects such as pedestrians are present on the elevator. The present disclosure does not limit the specific identification manners or their number.
In one possible implementation, in step S14, the state of the escalator may be determined according to the at least one state recognition result, for example, by comprehensively judging the state according to the values and weights of the state recognition results with reference to a voting mechanism. When the overall weighted vote for the empty state is higher, the escalator is judged to be in an empty state; otherwise, the escalator is judged to be in a non-empty state.
In one possible implementation, in the case that the escalator is in an empty state, an escalator shutdown signal can be sent to instruct the escalator to stop running; when the escalator is in a non-empty state and stops running, an escalator starting signal can be sent to indicate the escalator to start running. The present disclosure is not limited to the specific operations performed in different elevator states.
According to the image detection method of the embodiments of the present disclosure, the area where the escalator is located can be detected in the image, at least one state recognition result can be obtained from the area image of the escalator, and the empty or non-empty state of the escalator can be determined according to the at least one state recognition result, thereby improving both the positioning accuracy of the escalator area and the recognition accuracy of the escalator's running state.
The following is a description of an image detection method according to an embodiment of the present disclosure.
As described above, the video stream of the area where the escalator is located may be collected by a camera, and the collected video stream may be transmitted to an electronic device such as a local front-end server or a cloud server. The electronic device may decode the video stream to obtain a decoded video stream.
In step S11, a first image of the escalator may be acquired. The first image may be an image frame of a decoded video stream. In step S12, area detection is performed on the first image by the trained area detection network, and a first area of the escalator in the first image, for example, a detection frame of the area where the escalator is located, is determined.
In step S13, a first area image corresponding to the first area may be cut out from the first image, and the first area image may be identified by at least one identification method, so as to obtain at least one state identification result.
Fig. 3 shows a schematic view of a first region according to an embodiment of the present disclosure. As shown in fig. 3, the first image includes first areas 31 and 32 of two elevators; the first area images corresponding to the first areas 31 and 32 may be cropped respectively, and the two first area images may be identified in parallel, thereby improving processing efficiency.
In one possible implementation, the state recognition result includes a first state recognition result. Step S13 may include:
classifying the first area image to obtain a first state identification result of the escalator.
That is, the first region image may be directly classified by a trained classification network (referred to herein as a first classification network). The first classification network may be, for example, a convolutional neural network including a convolutional layer, a fully connected layer, an activation layer, and the like, and the disclosure does not limit the specific network structure and training manner of the first classification network.
In one possible implementation, the first area image may be input into the first classification network, and the first state recognition result is output, indicating that the escalator is in an empty elevator state or in a non-empty elevator state, for example, 1 is output when the escalator is in the empty elevator state, and 0 is output when the escalator is in the non-empty elevator state. In this way, the state of the escalator can be recognized simply and efficiently.
In one possible implementation, the state recognition result includes a second state recognition result. Step S13 may include:
dividing the first area image into a background area and a foreground area where the escalator is located;
adjusting the pixel value of the background area to obtain an adjusted second area image;
and classifying the second area image to obtain a second state identification result of the escalator.
For example, in the case that the first area is a rectangular detection frame, the first area image corresponding to the first area may be segmented by the trained segmentation network, and the first area image is segmented into a background area and a foreground area where the escalator is located. The segmentation network may be, for example, a convolutional neural network, which includes a convolutional layer, a fully-connected layer, an activation layer, and the like, and the disclosure does not limit the specific network structure and training manner of the segmentation network.
In a possible implementation manner, in the case that the first area is an irregular area where the escalator is located, an area image corresponding to a circumscribed rectangular frame of the first area may be used as the first area image. The first area can be directly used as a foreground area; and the area except the first area in the first area image is used as a background area, so that the first area image is segmented.
In one possible implementation manner, the pixel values of the pixels in the background area may be adjusted, for example, the pixel values of the background area are all adjusted to zero (black), so as to obtain an adjusted second area image. The pixel values of the background area may also be adjusted to other values, which is not limited by this disclosure.
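A minimal NumPy sketch of the background adjustment above — every background pixel is set to zero (black); the image and mask values are hypothetical:

```python
import numpy as np

def suppress_background(region_image, foreground_mask, fill=0):
    """Set every background pixel (mask value 0) to `fill`, producing
    the adjusted second area image; only foreground (escalator)
    pixels retain their original values."""
    out = region_image.copy()
    out[foreground_mask == 0] = fill
    return out

# hypothetical 4x4 RGB region image and segmentation mask
img = np.full((4, 4, 3), 200, dtype=np.uint8)
mask = np.zeros((4, 4), dtype=np.uint8)
mask[1:3, 1:3] = 1  # assumed foreground (escalator) pixels
adjusted = suppress_background(img, mask)
```

As the text notes, other fill values could be used instead of zero; passing a different `fill` covers that case.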
In one possible implementation, the second region images may be classified by a trained classification network (referred to herein as a second classification network). The second classification network may be, for example, a convolutional neural network including convolutional layers, fully-connected layers, active layers, etc., and may have the same network structure as the first classification network, but different network parameters. The present disclosure does not limit the specific network structure and training manner of the second classification network.
In one possible implementation, the second area image is input into the second classification network, and the second state recognition result is output, indicating that the escalator is in an empty state or a non-empty state; for example, 1 is output when the escalator is in the empty state, and 0 is output when the escalator is in the non-empty state. In this way, the accuracy of elevator state identification can be improved.
In one possible implementation, the state recognition result includes a third state recognition result. Step S13 may include:
performing pixel matching on the first area image and a preset reference image, and determining the matching area ratio between the first area image and the reference image, wherein the reference image comprises an area image corresponding to the escalator in an empty elevator state;
and under the condition that the ratio of the matching areas is greater than or equal to a second threshold value, determining that the third state identification result is that the escalator is in an empty state.
For example, when the escalator is in an empty state, a single or multiple area images corresponding to the escalator can be acquired, for example, the image of the escalator is acquired through a camera, and the area image is obtained by performing area detection on the image.
In one possible implementation, if a single region image is acquired, the region image may be used as a reference image; if a plurality of area images are acquired, the plurality of area images can be fused to obtain a reference image (also called an empty elevator template). For example, through long-time image acquisition of 1-2 days, labels of all pixels in an elevator area are obtained by using a segmentation model, and then an empty elevator template is obtained through a Gaussian mixture model. The present disclosure does not limit the manner in which the reference image is generated.
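As a simplified stand-in for the fusion step above (a per-pixel median rather than the Gaussian mixture model mentioned in the text — an assumption made purely for illustration), multiple empty-elevator area images can be fused into one template as follows:

```python
import numpy as np

def build_reference(images):
    """Fuse several area images captured while the escalator is empty
    into a single empty-elevator template via a per-pixel median; a
    Gaussian mixture model, as in the text, is a more robust
    alternative for long-duration captures."""
    return np.median(np.stack(images), axis=0).astype(np.uint8)

# three hypothetical grayscale captures of the empty escalator area
frames = [np.full((4, 4), v, dtype=np.uint8) for v in (10, 20, 30)]
reference = build_reference(frames)
```

The median makes the template insensitive to transient occlusions (e.g. a pedestrian briefly passing through during template capture), which is the practical reason multi-image fusion is preferred over a single capture.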
In one possible implementation, the reference image may be saved in a database. When the elevator state is identified, the first area image and the reference image can be subjected to pixel matching, and the number of matched pixels is determined; and determining the matching area ratio between the first area image and the reference image according to the ratio of the matched pixel number to the total pixel number.
In one possible implementation, if the matching area ratio is greater than or equal to the second threshold, it may be determined that the third state recognition result is that the escalator is in an empty state; conversely, if the matching area ratio is smaller than the second threshold, it may be determined that the third state recognition result is that the escalator is in a non-empty state. A person skilled in the art may set the second threshold according to the actual situation, for example, to 0.8; the specific value of the second threshold is not limited in this disclosure.
By means of pixel matching, the efficiency of elevator state identification can be improved.
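The matching-area-ratio computation above can be sketched as follows; the per-pixel tolerance is an assumed parameter (the method does not specify how a pixel "match" is decided), and 0.8 is the example second threshold from the text:

```python
import numpy as np

def matching_area_ratio(region_image, reference, tol=10):
    """Fraction of pixels in the first area image that match the
    empty-elevator reference within tolerance `tol`; the tolerance
    value is an assumption, not specified by the method."""
    diff = np.abs(region_image.astype(np.int32) - reference.astype(np.int32))
    return float((diff <= tol).mean())

# hypothetical 10x10 grayscale images: 5 pixels occluded by an object
reference = np.full((10, 10), 100, dtype=np.uint8)
region = reference.copy()
region[0, :5] = 200
ratio = matching_area_ratio(region, reference)  # 0.95
is_empty = ratio >= 0.8  # second threshold, example value
```

With 95 of 100 pixels matching, the ratio exceeds the threshold and the third state recognition result would report an empty escalator.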
In one possible implementation, the state recognition result includes a fourth state recognition result. Step S13 may include:
performing first target detection on the first area image, and determining whether a first target exists in the first area image;
and under the condition that the first target does not exist in the first area image, determining that the fourth state identification result is that the escalator is in an empty state.
For example, a first target detection may be performed on the first area image by a trained target detection network (which may be referred to as a first detection network), and it is determined whether the first target is present in the first area image. The first target may, for example, include a pedestrian, an item, etc., to which the present disclosure is not limited.
In a possible implementation manner, the first detection network may be, for example, a convolutional neural network, and the disclosure does not limit a specific network structure and a training manner of the first detection network.
In one possible implementation, if the first target exists in the first area image, it may be determined that the fourth state recognition result is that the escalator is in a non-empty state; on the contrary, if the first target does not exist in the first area image, it may be determined that the fourth state recognition result is that the escalator is in an empty state. By the method, the diversity of elevator state identification modes can be improved.
After the respective state recognition results are determined, the state of the escalator may be determined based on them in step S14. Step S14 may include: in a case where there are a plurality of state recognition results, determining a state discrimination value of the escalator according to the plurality of state recognition results and the weights of the state recognition results;
and determining that the escalator is in an empty state under the condition that the state discrimination value is greater than or equal to a first threshold value.
For example, if there is only one state recognition result, the state of the escalator can be determined directly from it; if there are multiple state recognition results, the state of the escalator can be judged comprehensively according to all of them.
In a possible implementation manner, the weight of each state recognition result may be preset: a state recognition result with higher accuracy is given a higher weight, one with lower accuracy a lower weight, and the weights of all state recognition results sum to 1.
In one possible implementation manner, the state discrimination value of the escalator can be determined according to a plurality of state recognition results and the weights of the state recognition results by referring to a voting mechanism. The state recognition result may include at least two of the first, second, third and fourth state recognition results.
For example, when the state recognition results include the first, second, third, and fourth state recognition results a1, a2, a3, and a4, with weights w1, w2, w3, and w4 respectively, the state discrimination value is w1·a1 + w2·a2 + w3·a3 + w4·a4, where w1 + w2 + w3 + w4 = 1, and each state recognition result outputs 1 when the escalator is in an empty state and 0 when it is in a non-empty state.
In one possible implementation manner, if the state discrimination value is greater than or equal to a preset first threshold value, the escalator can be considered to be in an empty state; on the contrary, if the state discrimination value is smaller than the preset first threshold value, the escalator can be considered to be in a non-empty state. As shown in fig. 3, the escalator in the area 31 is in an empty state, and the escalator in the area 32 is in a non-empty state.
A person skilled in the art can set the first threshold according to the practical situation, for example, to 0.5; the specific value of the first threshold is not limited by the present disclosure.
Judging through a voting mechanism in this way can accurately identify the running state of the escalator and significantly reduce the error rate of elevator state judgment.
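The weighted vote described above can be sketched as follows; the weight values are hypothetical examples, and 0.5 is the example first threshold from the text:

```python
def state_discrimination(results, weights, threshold=0.5):
    """Weighted vote over per-method state recognition results
    (1 = empty, 0 = non-empty); the weights are assumed to sum to 1
    as stated in the text, and the escalator is judged empty when the
    discrimination value reaches the first threshold."""
    value = sum(w * a for w, a in zip(weights, results))
    return value, value >= threshold

# three of four hypothetical recognition methods report an empty escalator
value, is_empty = state_discrimination([1, 1, 0, 1], [0.3, 0.3, 0.2, 0.2])
```

Here the discrimination value is 0.8, above the 0.5 threshold, so the escalator is judged to be in an empty state despite one dissenting method — the behaviour the voting mechanism is meant to provide.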
In one possible implementation manner, the image detection method according to the embodiment of the present disclosure may further include:
and sending an elevator shutdown signal under the condition that the escalator is in an empty elevator state, wherein the elevator shutdown signal is used for indicating the escalator to stop running.
That is, if it is determined in step S14 that the escalator is in an empty state, an elevator shutdown signal may be generated and transmitted to instruct the escalator to stop running. For example, the elevator shutdown signal may be sent to the elevator control device, so that the elevator control device controls the escalator to stop running; the signal may also be sent to staff, so that the staff control the escalator to stop running. The present disclosure does not limit the type and transmission manner of the elevator shutdown signal.
In this way, the operation can be stopped when the elevator is empty, thereby reducing the operation cost of the elevator.
In one possible implementation manner, the image detection method according to the embodiment of the present disclosure may further include:
and sending an elevator starting signal under the condition that the escalator is in a non-empty state and the escalator stops running, wherein the elevator starting signal is used for indicating the escalator to run.
That is, if it is determined in step S14 that the escalator is in a non-empty state and the escalator has stopped operating, i.e., a pedestrian rides on the escalator during a stoppage, an escalator start signal may be generated and transmitted to indicate the operation of the escalator. In a similar manner to that described above, an elevator start signal may be sent to the elevator control device to cause the elevator control device to control the escalator to run; and an elevator starting signal can be sent to a worker, so that the worker can control the escalator to run. The present disclosure does not limit the type and transmission manner of the elevator starting signal.
In this way, the elevator can be started when someone boards a stopped escalator, thereby ensuring the normal use of the elevator.
In one possible implementation manner, the image detection method according to the embodiment of the present disclosure may further include:
performing second target detection on the first image, and determining a third area of a second target in the first image;
determining a detection result of the second target according to the position relation between the first area and the third area, wherein the detection result indicates whether the second target is on the escalator.
For example, after the first image of the escalator is acquired in step S11, the second target detection may be performed on the first image through a trained target detection network (which may be referred to as a second detection network) to determine whether the second target exists in the first image. The second object may include items that inhibit access to the escalator, such as strollers, wheelchairs, large luggage cases, and the like, and the present disclosure is not limited as to the type of second object.
In a possible implementation manner, the second detection network may be, for example, a convolutional neural network, and the disclosure does not limit a specific network structure and a training manner of the second detection network.
In one possible implementation, if the second object is present in the first image, the second detection network may determine the region of the second object in the first image (which may be referred to as a third region). Fig. 4a and 4b show schematic diagrams of a second object according to an embodiment of the present disclosure. The second object in fig. 4a is a stroller, and the second object in fig. 4b is a luggage case. As shown in fig. 4a and 4b, the detection frame where the second object is located, i.e., the third area, may be determined.
In one possible implementation, according to the first area of the escalator in the first image, the detection result of the second target can be determined through the position relationship between the first area and the third area. That is, whether the second target is on the escalator is judged according to the position relation.
In a possible implementation manner, the step of determining the detection result of the second target according to the position relationship between the first area and the third area may include:
determining that the second target is on the escalator as the detection result when an area ratio between a fourth area and the third area is greater than or equal to a third threshold, the fourth area including an intersection area between the first area and the third area.
That is, the judgment can be made by computing an intersection ratio similar to the intersection-over-union (IOU). With the fourth area defined as the intersection area between the first area and the third area, the area ratio between the fourth area and the third area can be computed, i.e., the area of the intersection of the second target's region with the elevator area, divided by the area of the second target's region.
In one possible implementation, if the area ratio is greater than or equal to a preset third threshold, it may be determined that the detection result is that the second target is on the escalator; conversely, if the area ratio is smaller than the preset third threshold, it may be determined that the second target is not on the escalator. A person skilled in the art can set the third threshold according to the practical situation, for example, to 0.6; the specific value of the third threshold is not limited by the present disclosure.
In this way, whether the second object is on the escalator can be determined from the area in which it is located.
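A sketch of the area-ratio test above, with axis-aligned boxes given as (x1, y1, x2, y2); the box coordinates are hypothetical and 0.6 is the example third threshold from the text:

```python
def on_escalator(first_area, third_area, threshold=0.6):
    """Ratio of area(intersection of elevator box and target box) to
    area(target box); the second target is judged to be on the
    escalator when this ratio reaches the third threshold.
    Boxes are (x1, y1, x2, y2) with x1 < x2 and y1 < y2."""
    ax1, ay1, ax2, ay2 = first_area
    bx1, by1, bx2, by2 = third_area
    iw = max(0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    target_area = (bx2 - bx1) * (by2 - by1)
    ratio = inter / target_area if target_area else 0.0
    return ratio, ratio >= threshold

# hypothetical elevator box and a stroller box only partly overlapping it
ratio, result = on_escalator((0, 0, 100, 100), (80, 80, 120, 120))
```

Note the denominator is the target's own area rather than the union, so a small object fully inside a large elevator area still scores 1.0 — the behaviour the method relies on.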
In a possible implementation, the second detection network may also directly perform the second target detection on the first area image of the elevator area to determine whether the second target is present in the first area image. If a second target is present in the first zone image, it may be determined that the second target is on the escalator. The present disclosure is not limited to the particular process employed.
In one possible implementation manner, the second object includes an article prohibited from entering the escalator, and the image detection method according to the embodiment of the present disclosure may further include: sending an alert message if the second target is on the escalator.
That is, for an article prohibited from entering the escalator, if it is determined in the foregoing step that the second target is on the escalator, a warning message may be sent to remind or directly control the escalator to stop running.
For example, the warning message can be sent to the elevator monitoring equipment and/or staff members of the monitoring room, so that the staff members control the escalator to stop running and/or go to the elevator for processing; the alarm information can also be sent to the elevator control equipment so that the elevator control equipment controls the escalator to stop running. The present disclosure does not limit the type and transmission manner of the alarm information.
In a possible implementation, an alarm may be raised when the second target first appears on the elevator, and repeated at specified intervals while the same target remains, to avoid sending alarm information too frequently.
In this way, when articles prohibited from entering the escalator do enter it, an alarm can be given in time, thereby reducing the risk of safety accidents.
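The repeated-alarm throttling described above can be sketched as follows; the 60-second repeat interval is an assumed value, since the text only mentions "specified intervals":

```python
def should_alarm(last_alarm_time, now, interval=60.0):
    """Alarm on the target's first appearance (no previous alarm yet)
    or once the specified repeat interval has elapsed; the interval
    value is an assumed parameter."""
    return last_alarm_time is None or now - last_alarm_time >= interval

# first sighting of a hypothetical prohibited item: alarm immediately
fire_first = should_alarm(None, 0.0)
# same item still on the escalator 30 s later: suppress the repeat
fire_repeat = should_alarm(0.0, 30.0)
```

A caller would update `last_alarm_time` to `now` whenever the function returns true, and reset it to `None` once the tracked target leaves the elevator area.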
Fig. 5 is a schematic diagram illustrating a processing procedure of an image detection method according to an embodiment of the present disclosure. As shown in fig. 5, during the process, an image of the escalator may be input at an image input step 51; a step 52 of elevator positioning and a step 53 of key target detection are respectively carried out on the images; determining an elevator zone 54 in the image by the step 52 of elevator positioning; and then, the elevator is determined to be empty/non-empty through an empty elevator judging step 55, and corresponding processing is executed.
In this example, the location 56 of a key target in the image, i.e., an item prohibited from entering the elevator (a prohibited item), is determined by the key target detection step 53; whether the key target is on the elevator is determined based on the elevator area 54 and the key target location 56; and if the key target is on the elevator, an alarm is raised for the prohibited item.
In this way, the entire processing procedure of the image detection method according to the embodiment of the present disclosure can be realized.
According to the image detection method of the embodiments of the present disclosure, an entry point for an intelligent elevator solution can be constructed based on elevator area positioning; within the elevator area, whether the elevator is empty is judged using multiple parallel empty-elevator judgment methods and a result voting mechanism, improving the accuracy of empty-elevator prediction; an empty-elevator alarm guides the management department to reduce the elevator's running speed or even stop it, reducing energy consumption while guaranteeing safe and convenient travel; and based on a target detection algorithm, key targets in the elevator area are located and alarms are raised for prohibited objects, improving the supervision by management departments.
According to the image detection method of the embodiments of the present disclosure, elevator areas can be positioned using deep-learning target detection/segmentation models, and multi-target detection is supported so that multiple elevator areas can be located simultaneously. Moreover, the two area definition methods provided herein can reduce the interference of background noise while ensuring complete coverage of the elevator area. In related smart-elevator solutions, there is no scheme addressing elevator positioning at all.
According to the image detection method of the embodiments of the present disclosure, whether the elevator is empty can be judged by a multi-path parallel empty-elevator state judgment method combined with a result voting mechanism, which greatly reduces the error rate of the output results while supporting parallel classification over multiple areas. In related technical schemes, the empty state of the elevator is judged by a single method with low accuracy.
According to the image detection method of the embodiments of the present disclosure, specific prohibited objects are detected by a trained deep neural network, which identifies all key targets within the field of view and solves the problem in the related art that prohibited objects such as strollers, wheelchairs, and large luggage cases cannot be detected; based on the elevator area and the key target area, a specific IOU-style calculation is used to judge whether a prohibited object has entered the elevator, improving the accuracy of the judgment.
According to the image detection method disclosed by the embodiment of the disclosure, an intelligent elevator system based on automatic positioning and key target detection is provided, a set of complete and stable intelligent elevator solution can be formed, and the intelligent elevator system can be applied to all current public elevator scenes.
The method can be applied to an intelligent camera to perform elevator area positioning and empty-elevator discrimination in elevator scenes, reducing energy consumption by signalling the empty state before the elevator is stopped and improving the working efficiency of the management department; it can also perform elevator area positioning and key target detection, raising alarms for prohibited articles on the elevator and strengthening the supervision by the management department.
It is understood that the above-mentioned method embodiments of the present disclosure can be combined with each other to form combined embodiments without departing from the principle and logic; for brevity, the detailed description is omitted in the present disclosure. Those skilled in the art will appreciate that, in the methods of the specific embodiments above, the specific order of execution of the steps should be determined by their functions and possible inherent logic.
In addition, the present disclosure also provides an image detection apparatus, an electronic device, a computer-readable storage medium, and a program, which can be used to implement any one of the image detection methods provided by the present disclosure, and the corresponding technical solutions and descriptions and corresponding descriptions in the methods section are not repeated.
Fig. 6 illustrates a block diagram of an image detection apparatus according to an embodiment of the present disclosure, which includes, as illustrated in fig. 6:
the image acquisition module 61 is used for acquiring a first image of the escalator;
the area detection module 62 is configured to perform area detection on the first image, and determine a first area of the escalator in the first image;
the state identification module 63 is configured to perform elevator state identification on a first area image corresponding to the first area, and determine at least one state identification result of the escalator, where the state identification result includes that the escalator is in an empty elevator state or in a non-empty elevator state;
a state determination module 64 for determining the state of the escalator based on the at least one state identification.
In one possible implementation, the state determination module includes: the discrimination value determining submodule is used for determining the state discrimination value of the escalator according to a plurality of state recognition results and the weights of the state recognition results under the condition that the state recognition results are multiple; and the state determining submodule is used for determining that the escalator is in an empty state under the condition that the state discrimination value is greater than or equal to a first threshold value.
In one possible implementation manner, the state recognition result includes a first state recognition result, and the state recognition module includes: and the first result determining submodule is used for carrying out classification processing on the first area image to obtain a first state identification result of the escalator.
In one possible implementation manner, the state recognition result includes a second state recognition result, and the state recognition module includes: the segmentation submodule is used for segmenting the first area image and segmenting the first area image into a background area and a foreground area where the escalator is located; the pixel adjusting submodule is used for adjusting the pixel value of the background area to obtain an adjusted second area image; and the second result determining submodule is used for classifying the second area image to obtain a second state identification result of the escalator.
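The segmentation-and-adjustment step above can be sketched with a binary foreground mask: background pixels of the first-area image are set to a constant so that the later classification sees only the escalator. How the mask is produced and the fill value of 0 are assumptions, not details from the disclosure.

```python
import numpy as np

# Sketch of the pixel adjusting submodule: suppress the background area of
# the first-area image, yielding the adjusted second-area image.

def mask_background(area_image: np.ndarray, foreground_mask: np.ndarray,
                    fill_value: int = 0) -> np.ndarray:
    adjusted = area_image.copy()          # keep the original image intact
    adjusted[~foreground_mask] = fill_value  # zero out background pixels
    return adjusted

img = np.array([[10, 20], [30, 40]], dtype=np.uint8)
mask = np.array([[True, False], [False, True]])   # True = escalator pixel
second_area = mask_background(img, mask)
# second_area -> [[10, 0], [0, 40]]
```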
In one possible implementation manner, the state recognition result includes a third state recognition result, and the state recognition module includes: the pixel matching sub-module is used for performing pixel matching on the first area image and a preset reference image and determining the matching area ratio between the first area image and the reference image, and the reference image comprises an area image corresponding to the escalator in an empty elevator state; and the third result determining submodule is used for determining that the third state identification result is that the escalator is in an empty state under the condition that the occupation ratio of the matching area is greater than or equal to a second threshold value.
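One way to read the pixel-matching step is as the fraction of pixels whose difference from the empty-escalator reference image falls within a tolerance, compared against the second threshold. The per-pixel tolerance and the 0.9 threshold below are illustrative assumptions.

```python
import numpy as np

# Sketch of the pixel matching submodule: the matching area ratio is the
# share of pixels that are close to the empty-escalator reference image.

def matching_area_ratio(area_img: np.ndarray, reference_img: np.ndarray,
                        tol: int = 8) -> float:
    diff = np.abs(area_img.astype(np.int16) - reference_img.astype(np.int16))
    return float(np.mean(diff <= tol))

area = np.array([[100, 101], [150, 200]], dtype=np.uint8)
ref = np.array([[102, 99], [160, 120]], dtype=np.uint8)
ratio = matching_area_ratio(area, ref)   # 2 of 4 pixels within tol -> 0.5
is_empty = ratio >= 0.9                  # assumed second threshold
```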
In a possible implementation manner, the state recognition result includes a fourth state recognition result, and the state recognition module includes: the detection submodule is used for carrying out first target detection on the first area image and determining whether a first target exists in the first area image; and the fourth result determining submodule is used for determining that the fourth state identification result is that the escalator is in an empty state under the condition that the first target does not exist in the first area image.
In one possible implementation, the apparatus further includes: and the shutdown signal sending module is used for sending an elevator shutdown signal under the condition that the escalator is in an empty elevator state, and the elevator shutdown signal is used for indicating the escalator to stop running.
In one possible implementation, the apparatus further includes: the starting signal sending module is used for sending an elevator starting signal under the condition that the escalator is in a non-empty escalator state and the escalator stops running, and the elevator starting signal is used for indicating the escalator to run.
In one possible implementation, the apparatus further includes: the target detection module is used for carrying out second target detection on the first image and determining a third area of a second target in the first image; and the detection result determining module is used for determining the detection result of the second target according to the position relation between the first area and the third area, and the detection result comprises that the second target is positioned on the escalator or not positioned on the escalator.
In one possible implementation manner, the detection result determining module includes: a determination submodule configured to determine that the detection result is that the second target is on the escalator when an area ratio between a fourth area and the third area is greater than or equal to a third threshold, where the fourth area includes an intersection area between the first area and the third area.
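The positional check above amounts to computing the intersection of the escalator region (first region) and the detected target box (third region), then comparing the intersection area, as a fraction of the target-box area, against the third threshold. Axis-aligned `(x1, y1, x2, y2)` boxes and the 0.5 threshold are assumptions made for this sketch.

```python
# Sketch of the determination submodule: the fourth region is the
# intersection of the first (escalator) and third (target) regions; the
# target is judged on the escalator when inter_area / third_area >= threshold.

def on_escalator(first_box, third_box, third_threshold=0.5):
    ix1 = max(first_box[0], third_box[0])
    iy1 = max(first_box[1], third_box[1])
    ix2 = min(first_box[2], third_box[2])
    iy2 = min(first_box[3], third_box[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)   # fourth-region area
    third_area = (third_box[2] - third_box[0]) * (third_box[3] - third_box[1])
    return third_area > 0 and inter / third_area >= third_threshold

# Only a quarter of the target box overlaps the escalator region -> False.
result = on_escalator((0, 0, 100, 100), (80, 80, 120, 120))
```

Dividing by the target-box area (rather than the union, as plain IoU would) keeps the test meaningful for small objects on a large escalator region.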
In one possible implementation, the second target includes an article prohibited from entering the escalator, and the apparatus further includes: a warning information sending module, configured to send warning information when the second target is on the escalator.
In some embodiments, the functions provided by, or the modules included in, the apparatus provided in the embodiments of the present disclosure may be used to execute the methods described in the method embodiments above; for specific implementation, reference may be made to the descriptions of those method embodiments, and for brevity, details are not described here again.
Embodiments of the present disclosure also provide a computer-readable storage medium having stored thereon computer program instructions, which when executed by a processor, implement the above-mentioned method. The computer readable storage medium may be a non-volatile computer readable storage medium.
An embodiment of the present disclosure further provides an electronic device, including: a processor; a memory for storing processor-executable instructions; wherein the processor is configured to invoke the memory-stored instructions to perform the above-described method.
The embodiments of the present disclosure also provide a computer program product, which includes computer readable codes, and when the computer readable codes are run on a device, a processor in the device executes instructions for implementing the image detection method provided in any one of the above embodiments.
The embodiments of the present disclosure also provide another computer program product for storing computer readable instructions, which when executed cause a computer to perform the operations of the image detection method provided in any one of the above embodiments.
The electronic device may be provided as a terminal, server, or other form of device.
Fig. 7 illustrates a block diagram of an electronic device 800 according to an embodiment of the disclosure. For example, the electronic device 800 may be a terminal such as a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, a fitness device, or a personal digital assistant.
Referring to fig. 7, electronic device 800 may include one or more of the following components: processing component 802, memory 804, power component 806, multimedia component 808, audio component 810, input/output (I/O) interface 812, sensor component 814, and communication component 816.
The processing component 802 generally controls overall operation of the electronic device 800, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing components 802 may include one or more processors 820 to execute instructions to perform all or a portion of the steps of the methods described above. Further, the processing component 802 can include one or more modules that facilitate interaction between the processing component 802 and other components. For example, the processing component 802 can include a multimedia module to facilitate interaction between the multimedia component 808 and the processing component 802.
The memory 804 is configured to store various types of data to support operations at the electronic device 800. Examples of such data include instructions for any application or method operating on the electronic device 800, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 804 may be implemented by any type or combination of volatile or non-volatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
The power supply component 806 provides power to the various components of the electronic device 800. The power components 806 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the electronic device 800.
The multimedia component 808 includes a screen that provides an output interface between the electronic device 800 and a user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 808 includes a front facing camera and/or a rear facing camera. The front camera and/or the rear camera may receive external multimedia data when the electronic device 800 is in an operation mode, such as a shooting mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have a focal length and optical zoom capability.
The audio component 810 is configured to output and/or input audio signals. For example, the audio component 810 includes a Microphone (MIC) configured to receive external audio signals when the electronic device 800 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may further be stored in the memory 804 or transmitted via the communication component 816. In some embodiments, audio component 810 also includes a speaker for outputting audio signals.
The I/O interface 812 provides an interface between the processing component 802 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor assembly 814 includes one or more sensors for providing various aspects of state assessment for the electronic device 800. For example, the sensor assembly 814 may detect an open/closed state of the electronic device 800, the relative positioning of components, such as a display and keypad of the electronic device 800, the sensor assembly 814 may also detect a change in the position of the electronic device 800 or a component of the electronic device 800, the presence or absence of user contact with the electronic device 800, orientation or acceleration/deceleration of the electronic device 800, and a change in the temperature of the electronic device 800. Sensor assembly 814 may include a proximity sensor configured to detect the presence of a nearby object without any physical contact. The sensor assembly 814 may also include a light sensor, such as a Complementary Metal Oxide Semiconductor (CMOS) or Charge Coupled Device (CCD) image sensor, for use in imaging applications. In some embodiments, the sensor assembly 814 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 816 is configured to facilitate wired or wireless communication between the electronic device 800 and other devices. The electronic device 800 may access a wireless network based on a communication standard, such as a wireless network (WiFi), a second generation mobile communication technology (2G) or a third generation mobile communication technology (3G), or a combination thereof. In an exemplary embodiment, the communication component 816 receives a broadcast signal or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 816 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the electronic device 800 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors or other electronic components for performing the above-described methods.
In an exemplary embodiment, a non-transitory computer-readable storage medium, such as the memory 804, is also provided that includes computer program instructions executable by the processor 820 of the electronic device 800 to perform the above-described methods.
Fig. 8 illustrates a block diagram of an electronic device 1900 in accordance with an embodiment of the disclosure. For example, the electronic device 1900 may be provided as a server. Referring to fig. 8, electronic device 1900 includes a processing component 1922 further including one or more processors and memory resources, represented by memory 1932, for storing instructions, e.g., applications, executable by processing component 1922. The application programs stored in memory 1932 may include one or more modules that each correspond to a set of instructions. Further, the processing component 1922 is configured to execute instructions to perform the above-described method.
The electronic device 1900 may also include a power component 1926 configured to perform power management of the electronic device 1900, a wired or wireless network interface 1950 configured to connect the electronic device 1900 to a network, and an input/output (I/O) interface 1958. The electronic device 1900 may operate based on an operating system stored in the memory 1932, such as the Microsoft server operating system (Windows Server™), the Apple graphical-user-interface-based operating system (Mac OS X™), the multi-user multi-process computer operating system (Unix™), the free and open-source Unix-like operating system (Linux™), the open-source Unix-like operating system (FreeBSD™), or the like.
In an exemplary embodiment, a non-transitory computer readable storage medium, such as the memory 1932, is also provided that includes computer program instructions executable by the processing component 1922 of the electronic device 1900 to perform the above-described methods.
The present disclosure may be systems, methods, and/or computer program products. The computer program product may include a computer-readable storage medium having computer-readable program instructions embodied thereon for causing a processor to implement various aspects of the present disclosure.
The computer readable storage medium may be a tangible device that can hold and store the instructions for use by the instruction execution device. The computer readable storage medium may be, for example, but is not limited to, an electronic memory device, a magnetic memory device, an optical memory device, an electromagnetic memory device, a semiconductor memory device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a Static Random Access Memory (SRAM), a portable compact disc read-only memory (CD-ROM), a Digital Versatile Disc (DVD), a memory stick, a floppy disk, a mechanical coding device, such as punch cards or in-groove projection structures having instructions stored thereon, and any suitable combination of the foregoing. Computer-readable storage media as used herein is not to be construed as transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission medium (e.g., optical pulses through a fiber optic cable), or electrical signals transmitted through electrical wires.
The computer-readable program instructions described herein may be downloaded from a computer-readable storage medium to a respective computing/processing device, or to an external computer or external storage device via a network, such as the internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, fiber optic transmission, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. The network adapter card or network interface in each computing/processing device receives computer-readable program instructions from the network and forwards the computer-readable program instructions for storage in a computer-readable storage medium in the respective computing/processing device.
The computer program instructions for carrying out operations of the present disclosure may be assembler instructions, instruction set architecture (ISA) instructions, machine instructions, machine-dependent instructions, microcode, firmware instructions, state-setting data, or source code or object code written in any combination of one or more programming languages, including an object-oriented programming language such as Smalltalk or C++, and conventional procedural programming languages such as the "C" programming language or similar programming languages. The computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, electronic circuitry, such as a programmable logic circuit, a field programmable gate array (FPGA), or a programmable logic array (PLA), may be personalized with state information of the computer-readable program instructions, and the electronic circuitry may execute the computer-readable program instructions to implement aspects of the present disclosure.
Various aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.
These computer-readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer-readable program instructions may also be stored in a computer-readable storage medium that can direct a computer, programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer-readable medium storing the instructions comprises an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The computer program product may be embodied in hardware, software or a combination thereof. In an alternative embodiment, the computer program product is embodied in a computer storage medium, and in another alternative embodiment, the computer program product is embodied in a Software product, such as a Software Development Kit (SDK), or the like.
Having described embodiments of the present disclosure, the foregoing description is intended to be exemplary, not exhaustive, and not limited to the disclosed embodiments. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein was chosen to best explain the principles of the embodiments, the practical application, or the technical improvement over technologies found in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.

Claims (14)

1. An image detection method, comprising:
acquiring a first image of the escalator;
carrying out area detection on the first image, and determining a first area of the escalator in the first image;
performing elevator state recognition on a first area image corresponding to the first area, and determining at least one state recognition result of the escalator, wherein the state recognition result comprises that the escalator is in an empty elevator state or in a non-empty elevator state;
and determining the state of the escalator according to the at least one state recognition result.
2. The method of claim 1, wherein said determining the condition of the escalator based on the at least one condition identification comprises:
determining a state judgment value of the escalator according to a plurality of state identification results and weights of the state identification results under the condition that the number of the state identification results is multiple;
and determining that the escalator is in an empty state under the condition that the state discrimination value is greater than or equal to a first threshold value.
3. The method of claim 1, wherein the state recognition result comprises a first state recognition result,
the elevator state recognition of the first area image corresponding to the first area and the determination of at least one state recognition result of the escalator comprise:
and classifying the first area image to obtain a first state identification result of the escalator.
4. The method of claim 1, wherein the state recognition result comprises a second state recognition result,
the elevator state recognition of the first area image corresponding to the first area and the determination of at least one state recognition result of the escalator comprise:
dividing the first area image into a background area and a foreground area where the escalator is located;
adjusting the pixel value of the background area to obtain an adjusted second area image;
and classifying the second area image to obtain a second state identification result of the escalator.
5. The method of claim 1, wherein the state recognition result comprises a third state recognition result,
the elevator state recognition of the first area image corresponding to the first area and the determination of at least one state recognition result of the escalator comprise:
performing pixel matching on the first area image and a preset reference image, and determining the matching area ratio between the first area image and the reference image, wherein the reference image comprises an area image corresponding to the escalator in an empty elevator state;
and under the condition that the ratio of the matching areas is greater than or equal to a second threshold value, determining that the third state identification result is that the escalator is in an empty state.
6. The method of claim 1, wherein the state recognition result comprises a fourth state recognition result,
the elevator state recognition of the first area image corresponding to the first area and the determination of at least one state recognition result of the escalator comprise:
performing first target detection on the first area image, and determining whether a first target exists in the first area image;
and under the condition that the first target does not exist in the first area image, determining that the fourth state identification result is that the escalator is in an empty state.
7. The method of claim 1, further comprising:
and sending an elevator shutdown signal under the condition that the escalator is in an empty elevator state, wherein the elevator shutdown signal is used for indicating the escalator to stop running.
8. The method of claim 1, further comprising:
and sending an elevator starting signal under the condition that the escalator is in a non-empty state and the escalator stops running, wherein the elevator starting signal is used for indicating the escalator to run.
9. The method of claim 1, further comprising:
performing second target detection on the first image, and determining a third area of a second target in the first image;
determining a detection result of the second target according to the position relation between the first area and the third area, wherein the detection result comprises that the second target is on the escalator or not.
10. The method according to claim 9, wherein the determining a detection result of the second target according to the positional relationship between the first region and the third region includes:
determining that the second target is on the escalator as the detection result when an area ratio between a fourth area and the third area is greater than or equal to a third threshold, the fourth area including an intersection area between the first area and the third area.
11. The method of claim 9 or 10, wherein the second target includes an article prohibited from entering the escalator, the method further comprising:
sending an alert message if the second target is on the escalator.
12. An image detection apparatus, characterized by comprising:
the image acquisition module is used for acquiring a first image of the escalator;
the area detection module is used for carrying out area detection on the first image and determining a first area of the escalator in the first image;
the state identification module is used for carrying out elevator state identification on a first area image corresponding to the first area and determining at least one state identification result of the escalator, wherein the state identification result comprises that the escalator is in an empty elevator state or in a non-empty elevator state;
and the state determining module is used for determining the state of the escalator according to the at least one state identification result.
13. An electronic device, comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to invoke the memory-stored instructions to perform the method of any of claims 1 to 11.
14. A computer readable storage medium having computer program instructions stored thereon, which when executed by a processor implement the method of any one of claims 1 to 11.
CN202011559951.4A 2020-12-25 2020-12-25 Image detection method and device, electronic equipment and storage medium Active CN112560986B (en)

Priority Applications (5)

Application Number Priority Date Filing Date Title
CN202011559951.4A CN112560986B (en) 2020-12-25 2020-12-25 Image detection method and device, electronic equipment and storage medium
CN202111579927.1A CN114283305A (en) 2020-12-25 2020-12-25 Image detection method and device, electronic equipment and storage medium
JP2022532078A JP2023510477A (en) 2020-12-25 2021-06-22 Image detection method and device, electronic device, and storage medium
PCT/CN2021/101619 WO2022134504A1 (en) 2020-12-25 2021-06-22 Image detection method and apparatus, electronic device, and storage medium
KR1020227018450A KR20220095218A (en) 2020-12-25 2021-06-22 Image detection method and apparatus, electronic device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011559951.4A CN112560986B (en) 2020-12-25 2020-12-25 Image detection method and device, electronic equipment and storage medium

Related Child Applications (1)

Application Number Title Priority Date Filing Date
CN202111579927.1A Division CN114283305A (en) 2020-12-25 2020-12-25 Image detection method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN112560986A true CN112560986A (en) 2021-03-26
CN112560986B CN112560986B (en) 2022-01-04

Family

ID=75034245

Family Applications (2)

Application Number Title Priority Date Filing Date
CN202011559951.4A Active CN112560986B (en) 2020-12-25 2020-12-25 Image detection method and device, electronic equipment and storage medium
CN202111579927.1A Withdrawn CN114283305A (en) 2020-12-25 2020-12-25 Image detection method and device, electronic equipment and storage medium

Family Applications After (1)

Application Number Title Priority Date Filing Date
CN202111579927.1A Withdrawn CN114283305A (en) 2020-12-25 2020-12-25 Image detection method and device, electronic equipment and storage medium

Country Status (4)

Country Link
JP (1) JP2023510477A (en)
KR (1) KR20220095218A (en)
CN (2) CN112560986B (en)
WO (1) WO2022134504A1 (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114036987A (en) * 2021-11-12 2022-02-11 上海擎朗智能科技有限公司 Escalator detection method and device, mobile equipment and storage medium
CN114155483A (en) * 2021-11-11 2022-03-08 鸿富锦精密电子(郑州)有限公司 Monitoring alarm method, device, storage medium and computer equipment
WO2022134504A1 (en) * 2020-12-25 2022-06-30 上海商汤智能科技有限公司 Image detection method and apparatus, electronic device, and storage medium

Families Citing this family (1)

Publication number Priority date Publication date Assignee Title
CN116363575B (en) * 2023-02-15 2023-11-03 南京诚勤教育科技有限公司 Classroom monitoring management system based on wisdom campus

Citations (15)

Publication number Priority date Publication date Assignee Title
CN103699878A (en) * 2013-12-09 2014-04-02 安维思电子科技(广州)有限公司 Method and system for recognizing abnormal operation state of escalator
CN104724566A (en) * 2013-12-24 2015-06-24 株式会社日立制作所 Elevator having image recognition function
CN107273852A (en) * 2017-06-16 2017-10-20 华南理工大学 Escalator floor plates object and passenger behavior detection algorithm based on machine vision
CN107416630A (en) * 2017-09-05 2017-12-01 广州日滨科技发展有限公司 The detection method and system of the improper closing of elevator
CN107832730A (en) * 2017-11-23 2018-03-23 高域(北京)智能科技研究院有限公司 Method for improving face recognition accuracy and face recognition system
US20180349413A1 (en) * 2017-06-02 2018-12-06 Apple Inc. Application And System Providing Indoor Searching Of A Venue
CN109353907A (en) * 2017-09-05 2019-02-19 日立楼宇技术(广州)有限公司 Safety prompt method and system for elevator operation
CN110342357A (en) * 2019-05-24 2019-10-18 深圳壹账通智能科技有限公司 Elevator scheduling method and device, computer equipment and storage medium
US20190339841A1 (en) * 2018-05-07 2019-11-07 Otis Elevator Company Equipment service graphical interface
CN110427741A (en) * 2019-07-31 2019-11-08 Oppo广东移动通信有限公司 Fingerprint identification method and related product
CN111325188A (en) * 2020-03-24 2020-06-23 通力电梯有限公司 Method for monitoring escalator and device for monitoring escalator
CN111339846A (en) * 2020-02-12 2020-06-26 深圳市商汤科技有限公司 Image recognition method and device, electronic equipment and storage medium
CN111913857A (en) * 2020-07-08 2020-11-10 浙江大华技术股份有限公司 Method and device for detecting operation behavior of intelligent equipment
CN111931701A (en) * 2020-09-11 2020-11-13 平安国际智慧城市科技股份有限公司 Gesture recognition method and device based on artificial intelligence, terminal and storage medium
CN112102407A (en) * 2020-09-09 2020-12-18 北京市商汤科技开发有限公司 Display equipment positioning method and device, display equipment and computer storage medium

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108639921A (en) * 2018-07-05 2018-10-12 江苏瑞奇海力科技有限公司 Escalator passenger safety early-warning device and method
CN110852253A (en) * 2019-11-08 2020-02-28 杭州宇泛智能科技有限公司 Elevator control scene detection method and device, and electronic equipment
CN111924695A (en) * 2020-07-09 2020-11-13 上海市隧道工程轨道交通设计研究院 Intelligent safety protection system for subway escalators and working method thereof
CN111807204A (en) * 2020-07-27 2020-10-23 苏州雷格特智能设备股份有限公司 Intelligent elevator safety monitoring system
CN112560986B (en) * 2020-12-25 2022-01-04 上海商汤智能科技有限公司 Image detection method and device, electronic equipment and storage medium

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
X. Ding et al., "The passenger flow status identification based on image and WiFi detection for urban rail transit stations", Journal of Visual Communication and Image Representation *
He Cheng et al., "Intelligent escalator monitoring system based on AI image recognition and functional safety, and related safety standard requirements", China Elevator *
Wang Huixing et al., "A survey of methods for partially occluded face recognition", Journal of Wuhan University (Natural Science Edition) *
Zhao Haiwen et al., "Fast recognition method of the boxed state of robot elevator hall doors based on the YOLO model", Packaging Engineering *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022134504A1 (en) * 2020-12-25 2022-06-30 上海商汤智能科技有限公司 Image detection method and apparatus, electronic device, and storage medium
CN114155483A (en) * 2021-11-11 2022-03-08 鸿富锦精密电子(郑州)有限公司 Monitoring alarm method, device, storage medium and computer equipment
CN114036987A (en) * 2021-11-12 2022-02-11 上海擎朗智能科技有限公司 Escalator detection method and device, mobile equipment and storage medium
CN114036987B (en) * 2021-11-12 2024-05-31 上海擎朗智能科技有限公司 Escalator detection method and device, mobile equipment and storage medium

Also Published As

Publication number Publication date
CN114283305A (en) 2022-04-05
KR20220095218A (en) 2022-07-06
JP2023510477A (en) 2023-03-14
CN112560986B (en) 2022-01-04
WO2022134504A1 (en) 2022-06-30
WO2022134504A9 (en) 2022-07-21

Similar Documents

Publication Publication Date Title
CN112560986B (en) Image detection method and device, electronic equipment and storage medium
CN109829501B (en) Image processing method and device, electronic equipment and storage medium
CN113011290A (en) Event detection method and device, electronic equipment and storage medium
US9055202B1 (en) Doorbell communication systems and methods
EP3163498B1 (en) Alarming method and device
US20210166040A1 (en) Method and system for detecting companions, electronic device and storage medium
CN113538407B (en) Anchor point determining method and device, electronic equipment and storage medium
US11288531B2 (en) Image processing method and apparatus, electronic device, and storage medium
CN108600656B (en) Method and device for adding face label in video
US20150029334A1 (en) Doorbell communication systems and methods
CN113011291A (en) Event detection method and device, electronic equipment and storage medium
CN111104920B (en) Video processing method and device, electronic equipment and storage medium
CN110969115B (en) Pedestrian event detection method and device, electronic equipment and storage medium
CN111435422B (en) Action recognition method, control method and device, electronic equipment and storage medium
US10692364B1 (en) Security systems integration
CN112633184A (en) Alarm method and device, electronic equipment and storage medium
CN114187498A (en) Occlusion detection method and device, electronic equipment and storage medium
CN113920492A (en) Method and device for detecting people in vehicle, electronic equipment and storage medium
CN112464898A (en) Event detection method and device, electronic equipment and storage medium
CN113486759B (en) Dangerous action recognition method and device, electronic equipment and storage medium
CN111680646A (en) Motion detection method and device, electronic device and storage medium
CN111753611A (en) Image detection method, device and system, electronic equipment and storage medium
CN110543928B (en) Method and device for detecting number of people on trackless rubber-tyred vehicle
CN111832338A (en) Object detection method and device, electronic equipment and storage medium
CN113505674B (en) Face image processing method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40039758

Country of ref document: HK

GR01 Patent grant