CN111931734A - Method and device for identifying lost object, vehicle-mounted terminal and storage medium


Info

Publication number
CN111931734A
Authority
CN
China
Prior art keywords
user
vehicle
predefined
video
vehicle cabin
Prior art date
Legal status
Pending
Application number
CN202011023368.1A
Other languages
Chinese (zh)
Inventor
徐子健
刘国清
杨一泓
郑伟
杨广
徐涵
周滔
Current Assignee
Shenzhen Minieye Innovation Technology Co Ltd
Original Assignee
Shenzhen Minieye Innovation Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Minieye Innovation Technology Co Ltd filed Critical Shenzhen Minieye Innovation Technology Co Ltd
Priority to CN202011023368.1A priority Critical patent/CN111931734A/en
Publication of CN111931734A publication Critical patent/CN111931734A/en
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/59Context or environment of the image inside of a vehicle, e.g. relating to seat occupancy, driver state or inner lighting conditions
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2413Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
    • G06F18/24133Distances to prototypes
    • G06F18/24137Distances to cluster centroïds
    • G06F18/2414Smoothing the distance, e.g. radial basis function networks [RBFN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/40Scenes; Scene-specific elements in video content
    • G06V20/41Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)
  • Emergency Alarm Devices (AREA)

Abstract

The application relates to the technical field of computer vision and provides a method, an apparatus, a vehicle-mounted terminal, and a storage medium for identifying a lost object. The method comprises: after detecting that no one is in the vehicle cabin, acquiring a video shot of the cabin interior; inputting the video into a pre-constructed lost-object detector so that the detector determines whether the video contains an image corresponding to an object matching an object class predefined by the user; and, if such an image is detected, identifying that the user has left the predefined object in the vehicle cabin. Semantic-based recognition and detection of lost objects is thereby realized, improving detection accuracy.

Description

Method and device for identifying lost object, vehicle-mounted terminal and storage medium
Technical Field
The present application relates to the field of computer vision technologies, and in particular, to a method and an apparatus for identifying a missing object, a vehicle-mounted terminal, and a storage medium.
Background
With continuous improvements in computer software and hardware and growing demands for driving safety, in-cabin monitoring technology has attracted wide attention. Detecting articles left in the vehicle cabin is an important task within it, as it can effectively protect the property of drivers and passengers from loss. Such detection mainly determines whether any articles remain in the vehicle after the driver or a passenger leaves the cabin.
However, conventional methods for detecting objects left in the vehicle cabin generally compare frame images captured before boarding and after alighting, an approach prone to both false detections and missed detections.
Disclosure of Invention
In view of the above, it is necessary to provide a method, an apparatus, a vehicle-mounted terminal, and a storage medium for identifying a lost object that address the above technical problems.
A method of identifying a lost object, the method comprising:
after detecting that no person exists in the vehicle cabin, acquiring a video shot aiming at the interior of the vehicle cabin;
inputting the video into a pre-constructed lost-object detector so that the lost-object detector detects whether the video contains an image corresponding to an object matching an object class predefined by a user;
and if the lost object detector detects that the video contains images corresponding to objects matched with the object classes predefined by the user, identifying that the user loses the predefined objects in the vehicle cabin.
In one embodiment, after the recognizing that the user has left the predefined object in the vehicle cabin, the method further comprises:
if the user-predefined object category belongs to a static object category, inputting the image to a pre-constructed static object classifier when the user-predefined object category includes a plurality of sub-categories under the static object category, so that the static object classifier identifies the sub-category to which the object belongs among the plurality of sub-categories.
In one embodiment, after the recognizing that the user has left the predefined object in the vehicle cabin, the method further comprises:
if the object class predefined by the user belongs to a dynamic object class, inputting the image to a pre-constructed living body detector so that the living body detector detects whether the object is a living body;
if so, identifying the object as a living animal.
In one embodiment, the inputting the image to a pre-constructed liveness detector comprises:
detecting whether the motion characteristics of the object are presented in the video;
and if so, inputting the image into the living body detector.
In one embodiment, the method further comprises:
and if the object class to which the object belongs is an alarm object class predefined by the user, outputting a lost-object alarm prompt to the user.
In one embodiment, before outputting the alert reminder for the missing object to the user, the method further comprises:
in response to the user entering the vehicle cabin, providing a lost-article setting interface and prompting the user, on that interface, to enter the alarm object categories for the current driving trip.
In one embodiment, the method further comprises:
when a person is detected in the vehicle cabin, re-checking whether anyone remains in the cabin at preset time intervals; the preset time is set according to the estimated time at which the user leaves the vehicle cabin.
An apparatus for identifying a missing object, the apparatus comprising:
the video acquisition module in the vehicle cabin is used for acquiring a video shot aiming at the interior of the vehicle cabin after detecting that no person is in the vehicle cabin;
the object detection module is used for inputting the video into a pre-constructed lost-object detector so that the lost-object detector detects whether the video contains an image corresponding to an object matching an object class predefined by a user;
and the object identification module is used for identifying that the user loses the predefined object in the vehicle cabin if the lost object detector detects that the video contains an image corresponding to the object matched with the object type predefined by the user.
An in-vehicle terminal comprising a memory and a processor, the memory storing a computer program, the processor implementing the following method when executing the computer program:
after detecting that no person exists in the vehicle cabin, acquiring a video shot aiming at the interior of the vehicle cabin;
inputting the video into a pre-constructed lost-object detector so that the lost-object detector detects whether the video contains an image corresponding to an object matching an object class predefined by a user;
and if the lost object detector detects that the video contains images corresponding to objects matched with the object classes predefined by the user, identifying that the user loses the predefined objects in the vehicle cabin.
A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the method of:
after detecting that no person exists in the vehicle cabin, acquiring a video shot aiming at the interior of the vehicle cabin;
inputting the video into a pre-constructed lost-object detector so that the lost-object detector detects whether the video contains an image corresponding to an object matching an object class predefined by a user;
and if the lost object detector detects that the video contains images corresponding to objects matched with the object classes predefined by the user, identifying that the user loses the predefined objects in the vehicle cabin.
According to the above method, apparatus, vehicle-mounted terminal, and storage medium for identifying a lost object, after detecting that no one is in the vehicle cabin, the lost-object detector checks whether the video contains an image of a user-predefined object and thereby identifies whether the user has left that object in the cabin. This realizes semantic-based recognition and detection of lost objects and improves detection accuracy.
Drawings
FIG. 1 is a diagram of an environment in which a method for identifying a missing object may be used in one embodiment;
FIG. 2 is a schematic flow chart diagram of a method for identifying a missing object in one embodiment;
FIG. 3 is a schematic flow chart of a method for identifying a missing object in another embodiment;
FIG. 4 is a schematic flow chart diagram illustrating a method for identifying a missing object in yet another embodiment;
FIG. 5 is a diagram of a lost-article setting interface in one embodiment;
FIG. 6 is a block diagram of an apparatus for identifying a missing object in one embodiment;
fig. 7 is an internal configuration diagram of the in-vehicle terminal in one embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
Reference in the specification to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the specification. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.
A traditional method that detects lost objects from the difference between frames captured before boarding and after alighting can only identify that some object remains in the cabin; it cannot identify whether that object was actually left behind by the user. By contrast, the method for identifying lost objects provided herein uses deep-learning components such as the lost-object detector to perform semantic recognition and detection, can accurately identify user-predefined objects, and improves detection accuracy.
In some embodiments, a camera may be mounted on the rear-view mirror, the roof, or a similar position in the automobile; the camera may be a color camera or a near-infrared camera, and its shooting area may cover the seats and other areas in the cabin. Algorithm networks such as the lost-object detector can be deployed on an embedded platform of the vehicle-mounted terminal or on a cloud server. As shown in fig. 1, if these networks are deployed on a cloud server, the vehicle-mounted terminal sends the in-cabin video collected by the camera to the server when lost-object detection is to be performed; after receiving the video, the server uses the pre-deployed networks to detect whether the user has left a predefined object in the vehicle cabin.
Taking the case where the lost-object detector is deployed on the vehicle-mounted terminal as an example, the method for identifying a lost object provided by the present application is introduced as follows: after detecting that no one is in the vehicle cabin, the vehicle-mounted terminal acquires a video shot of the cabin interior and inputs it into a pre-constructed lost-object detector, so that the detector checks whether the video contains an image corresponding to an object matching a user-predefined object class; if it does, the vehicle-mounted terminal identifies that the user has left the predefined object in the vehicle cabin.
In one embodiment, as shown in fig. 2, a method for identifying a lost object is provided. Taking its application to a vehicle-mounted terminal as an example, the method includes the following steps:
step S201, after the vehicle-mounted terminal detects that no person is in the vehicle cabin, a video shot aiming at the interior of the vehicle cabin is obtained.
The video shot in the vehicle cabin can be understood as the video input into the lost-object detector for lost-object detection (also referred to as the object detection video). In one case, if the vehicle-mounted terminal detects the presence or absence of a person based on an original video of the cabin interior shot by the camera, the object detection video may be derived from that original video. Specifically, after obtaining the original video, the vehicle-mounted terminal inputs it into a pre-constructed human-body detector, which detects whether the original video contains an image of a person; if it does not, the vehicle-mounted terminal determines that the cabin is unoccupied and passes the original video on to the lost-object detector as the object detection video.
Further, when a person is detected, the vehicle-mounted terminal may trigger a new check of whether anyone is in the cabin after a preset time. The preset time may be set according to the estimated time at which the user leaves the vehicle cabin; the estimate may be a fixed value (e.g., 30 seconds) or derived from the time required by the driving trip. That is, while someone is present, the vehicle-mounted terminal re-checks for occupancy only at intervals, thereby saving computation.
Furthermore, if an image of a person is detected, the vehicle-mounted terminal determines that someone is in the cabin, does not pass the video to the lost-object detector, and enters a dormant state. After the dormant period reaches the preset time, the vehicle-mounted terminal reacquires video and inputs it to the human-body detector again, triggering a fresh check for images of people; if that later video contains no such image, the terminal determines that the cabin is unoccupied and uses it as the input to the lost-object detector.
That is, the vehicle-mounted terminal judges from the video whether anyone is present; if so, the lost-object detector is not started, and the check is repeated after sleeping for a period of time (e.g., 30 seconds). Object recognition and detection are thus never started while someone is present, which saves computation.
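The sleep-and-recheck gating described above can be sketched as follows. This is an illustrative sketch only: `has_person` stands in for the human-body detector, and `capture` is a hypothetical callable that grabs the current in-cabin video; a real system would run deep detectors on camera frames.

```python
import time

def has_person(frames):
    """Stand-in for the human-body detector: True if any frame contains
    a person. A real system would run a deep detector here."""
    return any(frame.get("person", False) for frame in frames)

def acquire_object_detection_video(capture, sleep_seconds=30, max_checks=10):
    """Gate the lost-object detector on cabin emptiness: while the human
    detector still sees a person, sleep and re-check; once the cabin is
    empty, return the latest video for lost-object detection."""
    for _ in range(max_checks):
        frames = capture()            # reacquire the current in-cabin video
        if not has_person(frames):
            return frames             # cabin empty: hand off to the detector
        time.sleep(sleep_seconds)     # someone is present: doze, then retry
    return None                       # cabin never emptied within the budget
```

Because the lost-object detector is only invoked once `acquire_object_detection_video` returns frames, no object detection runs while the cabin is occupied, matching the computation-saving behavior described above.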
Step S202: the vehicle-mounted terminal inputs the video into the pre-constructed lost-object detector so that the detector checks whether the video contains an image corresponding to an object matching a user-predefined object class.
Step S203: if the lost-object detector detects that the video includes an image corresponding to an object matching the user-predefined object class, the vehicle-mounted terminal identifies that the user has left the predefined object in the vehicle cabin.
The method is illustrated taking a mobile phone as the user-predefined object class: after detecting that no one is in the cabin, the vehicle-mounted terminal feeds the acquired in-cabin video to the lost-object detector, which checks whether any frame contains an image of a mobile phone. If it does, and the image is consistent with the user-predefined object class, the vehicle-mounted terminal determines that a mobile phone has been left in the vehicle cabin.
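The semantic matching step can be sketched as below. The sketch assumes the (hypothetical) lost-object detector has already produced per-frame detections as dictionaries with a `"class"` key; the matching against user-predefined classes is the part shown here.

```python
def match_predefined(detections, predefined_classes):
    """Return the detected objects whose class matches a user-predefined
    object class (the semantic check performed after detection)."""
    predefined = {c.lower() for c in predefined_classes}
    return [d for d in detections if d["class"].lower() in predefined]

def identify_lost_objects(video_detections, predefined_classes):
    """If any frame contains a matching object, the user is deemed to
    have left that predefined object in the cabin."""
    left = []
    for frame_detections in video_detections:
        left.extend(match_predefined(frame_detections, predefined_classes))
    return left
```

For example, with per-frame detections of a phone and a cup and the user-predefined classes `["phone", "wallet"]`, only the phone is reported as a lost object.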
According to this method for identifying lost objects, after no one is detected in the vehicle cabin, the lost-object detector checks whether the video contains an image of a user-predefined object, and the terminal thereby identifies whether the user has left that object in the cabin. This realizes semantic-based recognition and detection of lost objects and improves detection accuracy.
In one embodiment, after the vehicle-mounted terminal recognizes through the lost-object detector that the user has left a predefined object in the cabin, it can refine the result. If the object is static, the terminal can perform fine classification: for example, glasses can be further subdivided into sunglasses or myopia glasses. If the object is dynamic, the terminal can further perform liveness detection to determine whether the object is a real live animal or merely an animal appearing in an in-cabin poster, photo frame, or tablet video.
Specifically, after step S203, if the object class predefined by the user belongs to the static object class, when the object class predefined by the user includes a plurality of sub-classes under the static object class, the in-vehicle terminal inputs the image to a pre-constructed static object classifier, so that the static object classifier identifies the sub-class to which the object belongs among the plurality of sub-classes.
Illustratively, if the object detected by the lost-object detector is glasses, and the user's predefinition further subdivides glasses into sunglasses and myopia glasses, the vehicle-mounted terminal can input the image into the static object classifier so that it identifies whether the object is sunglasses or myopia glasses.
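The two-stage coarse-then-fine classification can be sketched as follows. The sub-category table and the `classify_fn` interface are hypothetical illustrations of the static object classifier described above, not the patent's actual implementation.

```python
# Hypothetical mapping from coarse static classes to user-predefined
# sub-categories; only classes listed here get a second-stage pass.
STATIC_SUBCLASSES = {
    "glasses": ["sunglasses", "myopia glasses"],
}

def classify_subcategory(coarse_class, image, classify_fn):
    """If the coarse class has predefined sub-categories, run the static
    object classifier to pick one; otherwise the coarse class stands."""
    subclasses = STATIC_SUBCLASSES.get(coarse_class)
    if not subclasses:
        return coarse_class          # no sub-categories defined: done
    return classify_fn(image, subclasses)  # second-stage fine classification
```

The design keeps the lost-object detector small (coarse classes only) and defers fine distinctions to a dedicated classifier invoked only when needed.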
Specifically, after step S203, if the object class predefined by the user belongs to the dynamic object class, the in-vehicle terminal inputs an image to a pre-constructed liveness detector so that the liveness detector detects whether the object is a live body; if yes, the vehicle-mounted terminal identifies that the object is a living animal.
Illustratively, if the object detected by the lost-object detector has the features of a cat, the vehicle-mounted terminal may input the image into a pre-constructed liveness detector so that it can determine whether the object is a live body or merely a picture of a cat on an in-cabin poster; if the liveness detector detects a live body, the terminal recognizes that the object left by the user is a cat.
Further, the vehicle-mounted terminal may first detect whether the video shows motion characteristics of the object, i.e., use the continuous video stream to detect whether the object is moving. In this case, to avoid interference from video playing on a tablet device, the terminal inputs the image to the liveness detector for liveness recognition only after motion of the object is observed in the video.
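A crude version of the motion pre-filter can be sketched with simple frame differencing; this is an illustrative stand-in (frames are modeled as flat lists of pixel intensities), not the patent's actual motion detector.

```python
def shows_motion(frames, threshold=10.0):
    """Cheap motion check over a video stream: the mean absolute
    difference between consecutive frames above a threshold counts
    as motion. Used as a pre-filter before liveness detection."""
    for prev, cur in zip(frames, frames[1:]):
        diff = sum(abs(a - b) for a, b in zip(prev, cur)) / len(prev)
        if diff > threshold:
            return True
    return False
```

Only when `shows_motion` returns True would the (more expensive) liveness detector be invoked, which also filters out static pictures such as posters, though playback on a tablet screen would still require the liveness stage to reject.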
In one embodiment, to ensure that the user learns of a lost object in real time, after step S203, if the vehicle-mounted terminal determines that the object class to which the object belongs is an alarm object class predefined by the user, the terminal outputs a lost-object alarm prompt to the user.
Further, before outputting the lost-object alarm prompt, the vehicle-mounted terminal can prompt the user to enter the alarm object categories. Specifically, in response to the user entering the vehicle cabin, the terminal provides a lost-article setting interface and prompts the user, on that interface, to enter the alarm object categories for the current driving trip.
For example, the vehicle-mounted terminal may provide the user's mobile terminal with a lost-article setting interface as shown in fig. 5, on which the user can customize the article categories that require an alarm; the interface may be a web page or an application screen. The vehicle-mounted terminal builds an alarm category list from the categories the user sets. If the video contains a predefined semantic object in a category requiring an alarm, the terminal sends a lost-object alarm prompt to the user's mobile terminal, such as a mobile phone or smart watch; otherwise, no alarm prompt is sent.
Further, if the alarm object category for the current driving trip entered by the user is not received within the set time (e.g., within 5 minutes), the in-vehicle terminal may use a default object alarm category preset by the user as the alarm object category for the current driving trip to perform the missing object detection for the user.
Or, if the vehicle-mounted terminal does not receive the alarm object category for the current driving route, which is input by the user, within the set time, the vehicle-mounted terminal can remind the user to input the alarm object category in a mode of equipment vibration or voice broadcast.
Alternatively, if the alarm object categories for the current driving trip are not received within the set time, and the interval since the user last entered alarm object categories is less than a preset interval (e.g., 1 hour), the vehicle-mounted terminal may directly reuse the categories entered last time as the alarm object categories for the current trip.
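The fallback rules in the three paragraphs above can be sketched as a single resolution function. The function signature and the tuple shape of `last_entry` are assumptions made for illustration.

```python
import time

def resolve_alarm_categories(entered, default_categories,
                             last_entry=None, now=None,
                             reuse_window_s=3600):
    """Decide which alarm object categories apply to the current trip:
    prefer what the user entered this trip; otherwise reuse the last
    entry if it is recent enough (within reuse_window_s, e.g. 1 hour);
    otherwise fall back to the user's preset default categories."""
    if entered:
        return entered
    if last_entry is not None:
        categories, entered_at = last_entry   # (list, entry timestamp)
        if (now if now is not None else time.time()) - entered_at < reuse_window_s:
            return categories
    return default_categories
```

A reminder via device vibration or voice broadcast, as described above, would be issued alongside the fallback rather than replacing it.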
In an embodiment, described with reference to fig. 3, another method for identifying a lost object is provided. It may be applied to a vehicle-mounted terminal, which performs the following steps when identifying lost objects:
step S301, the vehicle-mounted terminal calls a pre-constructed human body detector to detect a vehicle cabin video, provides a left falling object setting interface for a user and prompts the user to input an alarm object type aiming at the driving travel on the provided left falling object setting interface after detecting that the user enters the vehicle cabin;
step S302, after the preset time, the vehicle-mounted terminal calls the human body detector again to detect the vehicle cabin video to judge whether a person exists in the vehicle cabin, and if the human body detector detects that the person exists in the vehicle cabin, the vehicle-mounted terminal calls the human body detector again at intervals of the preset time to detect the vehicle cabin video to judge whether the person exists in the vehicle cabin; the preset time is set according to the estimated time of the user leaving the vehicle cabin;
step S303, after the vehicle-mounted terminal determines that no person is in the vehicle cabin according to the detection result of the human body detector, the video shot in the vehicle cabin is used as the video input to the pre-constructed detector for the objects left in the vehicle;
Step S304: the vehicle-mounted terminal inputs the video into the lost-object detector so that the detector checks whether the video contains an image corresponding to an object matching a user-predefined object class;
Step S305: if the lost-object detector detects such an image, the vehicle-mounted terminal identifies that the user has left the predefined object in the vehicle cabin;
step S306, if the object class predefined by the user belongs to the static object class, when the object class predefined by the user comprises a plurality of subcategories under the static object class, the vehicle-mounted terminal inputs the image into a pre-constructed static object classifier so that the static object classifier identifies the subcategories to which the objects belong in the plurality of subcategories;
step S307, if the object type predefined by the user belongs to the dynamic object type, the vehicle-mounted terminal detects whether the motion characteristics of the object are presented in the video;
step S308, if the motion characteristics of the object are presented, the vehicle-mounted terminal inputs the image to the living body detector so that the living body detector detects whether the object is a living body;
step S309, if the living body detector detects that the object is a living body, the vehicle-mounted terminal identifies that the object is a living animal;
Step S310: if the object class of the object is an alarm object class predefined by the user, the vehicle-mounted terminal outputs a lost-object alarm prompt to the user.
In this embodiment, after the human-body detector detects that the user has entered the cabin, the vehicle-mounted terminal reminds the user to define the lost-object categories that require reminders for the current trip; then, after detecting that the user has left the cabin, it calls the lost-object detector for static/dynamic classification. When the detector identifies a static object, a further fine classification determines its sub-category, improving detection accuracy; when it identifies a dynamic object, a further liveness detection determines whether the lost object is a live animal, improving accuracy again. In addition, while the human-body detector still sees the user in the video, the terminal performs frame-skipping checks at preset intervals, saving computational resources and avoiding unnecessary object detection.
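Steps S301 to S310 can be tied together in a single pipeline sketch. Here `ctx` bundles hypothetical model callables and user settings; each stage mirrors one step of the embodiment, and the interfaces are illustrative assumptions rather than the patent's implementation.

```python
def lost_object_pipeline(video, ctx):
    """End-to-end sketch of steps S301-S310. Each ctx entry is a
    callable or a set standing in for one component of the method."""
    if ctx["human_detector"](video):               # S302: cabin still occupied
        return {"status": "cabin occupied"}
    objects = ctx["lost_object_detector"](video)   # S304: semantic detection
    results = []
    for obj in objects:
        if obj["class"] in ctx["static_classes"]:      # S306: fine classify
            obj["subclass"] = ctx["static_classifier"](obj)
        elif obj["class"] in ctx["dynamic_classes"]:   # S307-S309: liveness
            obj["alive"] = ctx["motion_check"](video) and ctx["liveness"](obj)
        if obj["class"] in ctx["alarm_classes"]:       # S310: alarm decision
            obj["alarm"] = True
        results.append(obj)
    return {"status": "done", "objects": results}
```

For instance, with a detector that reports glasses and a cat, a static classifier that resolves glasses to sunglasses, and glasses in the alarm list, the pipeline returns the glasses with a sub-category and an alarm flag and the cat with a liveness verdict.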
To better understand the method, an application example of the method for identifying lost objects is set forth. In this application example, the method mainly includes: first, a semantic-based detection algorithm for objects left in the cabin; second, classifying the lost articles with a deep neural network and distinguishing those that require an alarm; third, classifying animals in the cabin with liveness detection, distinguishing a left-behind animal from a mere photo of an animal.
The hardware composition of the application example can comprise: first, one or more color or near-infrared cameras in the vehicle cabin, with a shooting area covering the seats and other positions in the cabin; the cameras may be arranged on, but not limited to, the rear-view mirror and the roof of the automobile. Second, the algorithms (human-body detector, lost-object detector, liveness detector, static object classifier, and the like) can be deployed on an embedded platform of the vehicle-mounted terminal, or sent via a communication module to a cloud server for processing. The user can set the lost-article categories that require alarms from a mobile phone, i.e., the lost-article detection function can be personalized through an application or a web page.
The processing flow of the application example is described below with reference to fig. 4:
Steps S401 to S402, determining whether a person is present in the vehicle cabin:
Video data captured by the one or more cameras inside the vehicle cabin is fed to the human body detector to determine whether anyone is in the cabin. The human body detector may adopt a general deep neural network-based detector, such as R-CNN, SSD, or RetinaNet. If no person is in the vehicle, the vehicle-mounted terminal starts the lost object detector to perform object detection; if a person is in the vehicle, the lost object detector is not started, and after sleeping for a period of time (for example, 30 seconds) the human body detector is run again to check whether anyone is in the vehicle.
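A minimal sketch of this polling logic is given below. The three callables are stand-ins for the camera feed and the two detectors (assumptions for illustration); the 30-second sleep corresponds to the example interval above.

```python
import time

def wait_until_cabin_empty(get_frame, detect_person, detect_lost_objects,
                           sleep_seconds=30):
    """Poll the in-cabin camera: while a person is detected, sleep and
    re-check; once the cabin is empty, run the lost object detector on
    the current frame (steps S401-S403)."""
    while True:
        frame = get_frame()
        if detect_person(frame):
            time.sleep(sleep_seconds)  # cabin occupied: check again later
        else:
            return detect_lost_objects(frame)
```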
Step S403, the vehicle cabin is unoccupied, and whether a predefined object with semantics exists inside the vehicle cabin is detected:
If no person is in the vehicle cabin, the video is input to the lost object detector for object detection, where the detection categories include static object categories (including but not limited to cell phones, wallets, glasses/sunglasses, laptops, and keys) and dynamic object categories (including but not limited to cats and dogs). The user can customize the categories of objects requiring an alarm; one possible web-page/application setting interface is shown in fig. 5. The vehicle-mounted terminal builds an alarm category list in the background according to the user's presets. If the vehicle-mounted terminal detects in the video no predefined semantic object, or none belonging to a category requiring an alarm, it does not raise an alarm; it may sleep for a period of time (for example, 2 minutes) and then restart the lost object detector.
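The background alarm-category list can be sketched as a simple set intersection between the user's selection and the categories the detector supports. The category names below are illustrative assumptions; the patent's lists are explicitly open-ended.

```python
# Hypothetical category inventories mirroring the examples in the text.
STATIC_CLASSES = {"cell phone", "wallet", "glasses", "laptop", "keys"}
DYNAMIC_CLASSES = {"cat", "dog"}

def build_alarm_list(user_selection):
    """Keep only the user-selected categories that the detector supports."""
    return set(user_selection) & (STATIC_CLASSES | DYNAMIC_CLASSES)

def detections_to_alarm(detections, alarm_list):
    """Filter raw detections down to those that require an alarm."""
    return [d for d in detections if d["class"] in alarm_list]
```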
Further, when the vehicle-mounted terminal detects that a predefined object with semantics exists inside the vehicle cabin:
In steps S404 to S405, if the object detected by the lost object detector belongs to a static object category, the vehicle-mounted terminal invokes a deep neural network-based fine classifier (corresponding to the static object classifier) to finely classify the static object. For example, the lost object detector in the previous step may output a detection result indicating that the object is glasses; since the lost object detector often cannot accurately distinguish fine-grained categories, the fine classifier is used to determine whether the detected glasses are myopia glasses or sunglasses, and whether to raise an alarm is decided according to the user settings. For instance, if the alarm category the user has set is myopia glasses, then after the vehicle-mounted terminal invokes the fine classifier and recognizes that the lost object is myopia glasses, it can determine from the user's preset alarm categories that the recognized myopia glasses belong to the user-predefined object category, and then send an alarm reminder message such as "Please note: you have left your myopia glasses in the vehicle" to the user's mobile phone. The objects requiring fine classification may be, but are not limited to, glasses/sunglasses and periodicals/books/photographs.
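The two-stage decision described in steps S404 to S405 can be sketched as below. The set of coarse classes that have sub-categories, and the classifier callable, are illustrative assumptions.

```python
def refine_and_decide(coarse_label, fine_classify, alarm_categories):
    """Two-stage decision: for coarse classes known to have sub-categories
    (e.g. 'glasses' -> myopia glasses vs. sunglasses), call the fine
    classifier; then alarm only if the final label is in the user's
    alarm-category list."""
    FINE_GRAINED = {"glasses", "printed matter"}  # hypothetical set
    if coarse_label in FINE_GRAINED:
        label = fine_classify(coarse_label)
    else:
        label = coarse_label
    return label, label in alarm_categories
```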
In steps S406 to S407, if the object detected by the lost object detector belongs to a dynamic object category, it is necessary to determine whether it is a living body or a picture on a magazine or periodical. The vehicle-mounted terminal may invoke a deep neural network-based liveness classifier, or use the continuous video stream to observe whether the object moves. Using the liveness classifier is preferable, because it avoids interference from, for example, a video playing on a tablet. If the vehicle-mounted terminal determines that the object is a living body, it decides whether to raise an alarm based on the user settings.
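The motion-based alternative to the liveness classifier could be sketched as a box-overlap check across frames: a printed animal stays put, so a detection box that barely shifts suggests a flat picture. This is a simplification of the idea in the text; the IoU threshold is an assumption.

```python
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def appears_to_move(track, iou_threshold=0.9):
    """Treat the object as moving (a liveness cue) if any consecutive
    pair of detection boxes overlaps less than the threshold."""
    return any(iou(a, b) < iou_threshold for a, b in zip(track, track[1:]))
```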
It should be noted that, since the determination in this application example is performed on video, a person skilled in the art may also verify across multiple frames of images, further improving accuracy and suppressing false alarms.
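One simple form of the multi-frame verification mentioned above is a majority vote over per-frame labels; the agreement threshold below is an illustrative assumption.

```python
from collections import Counter

def confirm_over_frames(per_frame_labels, min_agreement=0.6):
    """Accept a detection only if a sufficient fraction of frames agree,
    suppressing single-frame false alarms; returns None when unconfirmed."""
    if not per_frame_labels:
        return None
    label, count = Counter(per_frame_labels).most_common(1)[0]
    return label if count / len(per_frame_labels) >= min_agreement else None
```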
The above-mentioned detectors/classifiers can be constructed using the following architectures and algorithms:
the human body detector can adopt a general detector architecture and algorithm, such as Faster R-CNN, SSD, YOLO, or a face detector (such as SSH), to detect the driver and passengers;
the lost object detector can adopt a general detector architecture and algorithm, such as Faster R-CNN, SSD, or YOLO;
and the fine classifier and the liveness classifier can be designed as binary or multi-class classifiers on architectures such as ResNet or NASNet.
In addition, regarding system deployment of the application example: the installation position of the vehicle's rearview-mirror camera needs to ensure that the area to be analyzed falls within the camera's field of view.
When constructing detectors/classifiers such as the human body detector, the lost object detector, the liveness detector, and the static object classifier, image data (video frame segments) of objects left in the cabin can be collected under a large number of different illumination conditions, vehicle types, and viewing angles; the collected images are then annotated and divided into a training set, a test set, and a validation set.
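The split of annotated data can be sketched as follows; the 8:1:1 ratio and fixed seed are illustrative assumptions, not values from the patent.

```python
import random

def split_dataset(samples, train_frac=0.8, test_frac=0.1, seed=0):
    """Shuffle annotated cabin images and divide them into training,
    test, and validation sets (the remainder after train/test)."""
    rng = random.Random(seed)
    shuffled = list(samples)
    rng.shuffle(shuffled)
    n_train = int(len(shuffled) * train_frac)
    n_test = int(len(shuffled) * test_frac)
    return (shuffled[:n_train],
            shuffled[n_train:n_train + n_test],
            shuffled[n_train + n_test:])
```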
Training of the human body detector, the lost object detector, the liveness detector, and the static object classifier can be performed with reference to the following steps:
Firstly, initialize the deep neural network parameters.
Secondly, train the deep neural network on the training set using an existing method.
Thirdly, train the deep neural network on the training set using the improved method.
Fourthly, analyze the improved model, verifying that its performance characteristics are consistent with the design expectations.
Fifthly, design several comparison experiments, quantitatively evaluate the two trained models on the test set and the validation set, and verify the correctness of the proposed algorithm.
In this application example, the deep neural network can be designed for the hardware platform to be adapted, ensuring that the platform's computing power supports running the algorithm.
Thus, this application example replaces non-visual, non-semantic lost-object detection methods with a deep learning method: it identifies potentially relevant objects using object detection, distinguishes among them using liveness analysis and fine classification, and alarms only for the required categories, which can improve both the detection precision and the recall of lost objects.
It should be understood that, although the steps in the flowcharts of figs. 1 to 5 are shown in the sequence indicated by the arrows, they are not necessarily performed in that sequence. Unless explicitly stated otherwise, the steps are not strictly limited to the order shown and may be performed in other orders. Moreover, at least some of the steps in figs. 1 to 5 may comprise multiple sub-steps or stages, which are not necessarily performed at the same moment but may be performed at different moments; nor is their execution order necessarily sequential, as they may be performed in turn or alternately with other steps or with at least part of the sub-steps or stages of other steps.
In one embodiment, as shown in fig. 6, there is provided an apparatus for identifying a fallen object, including:
the in-cabin video acquisition module 601, configured to acquire a video captured of the interior of the vehicle cabin after detecting that no person is in the vehicle cabin;
an object detection module 602, configured to input the video to a pre-constructed lost object detector, so that the lost object detector detects whether the video contains an image corresponding to an object matching an object class predefined by the user;
and an object identification module 603, configured to identify that the user has left the predefined object in the vehicle cabin if the lost object detector detects that the video contains an image corresponding to an object matching the object class predefined by the user.
In one embodiment, the apparatus further comprises a static object classification module, configured to, if the user-predefined object class belongs to a static object class and includes a plurality of sub-classes under that static object class, input the image to a pre-constructed static object classifier, so that the static object classifier identifies, among the plurality of sub-classes, the sub-class to which the object belongs.
In one embodiment, the apparatus further comprises a liveness detection module, configured to input the image to a pre-constructed liveness detector if the object class predefined by the user belongs to a dynamic object class, so that the liveness detector detects whether the object is a living body; if so, the object is identified as a living animal.
In one embodiment, the liveness detection module is further configured to detect whether motion characteristics of the object appear in the video; if so, the image is input to the liveness detector.
In one embodiment, the apparatus further comprises an alarm reminding module, configured to output a lost-object alarm reminder to the user if the object class to which the object belongs is an alarm object class predefined by the user.
In one embodiment, the alarm reminding module is further configured to, in response to the user entering the vehicle cabin, provide a left article setting interface and prompt the user to enter, on the provided interface, the alarm object categories for the current driving trip.
In one embodiment, the apparatus further comprises an interval detection module, configured to, when a person is detected in the vehicle cabin, trigger detection of whether a person is in the vehicle cabin at intervals of a preset time, the preset time being set according to the estimated time for the user to leave the vehicle cabin.
For the specific definition of the apparatus for identifying a lost object, reference may be made to the above definition of the method for identifying a lost object, which is not repeated here. Each module in the above apparatus may be implemented wholly or partly by software, hardware, or a combination thereof. The modules may be embedded in, or independent of, a processor in the vehicle-mounted terminal in hardware form, or stored in a memory in the vehicle-mounted terminal in software form, so that the processor can invoke and execute the operations corresponding to the modules.
In one embodiment, a vehicle-mounted terminal is provided, whose internal structure may be as shown in fig. 7. The vehicle-mounted terminal comprises a processor, a memory, and a network interface connected through a system bus. The processor of the vehicle-mounted terminal provides computing and control capabilities. The memory of the vehicle-mounted terminal comprises a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, a computer program, and a database. The internal memory provides an environment for running the operating system and the computer program in the non-volatile storage medium. The database of the vehicle-mounted terminal stores data for identifying lost objects. The network interface of the vehicle-mounted terminal is used to connect and communicate with external terminals through a network. The computer program, when executed by the processor, implements a method of identifying a lost object.
Those skilled in the art will appreciate that the structure shown in fig. 7 is merely a block diagram of the part of the structure related to the present application and does not limit the vehicle-mounted terminal to which the present application is applied; a specific vehicle-mounted terminal may include more or fewer components than shown, combine certain components, or arrange components differently.
In one embodiment, a vehicle-mounted terminal is provided, which includes a memory and a processor, wherein the memory stores a computer program, and the processor implements the steps of the above method embodiments when executing the computer program.
In an embodiment, a computer-readable storage medium is provided, on which a computer program is stored, which computer program, when being executed by a processor, carries out the steps of the respective method embodiment as described above.
It will be understood by those skilled in the art that all or part of the processes of the above method embodiments can be implemented by a computer program instructing the relevant hardware; the computer program can be stored in a non-volatile computer-readable storage medium and, when executed, can include the processes of the above method embodiments. Any reference to memory, storage, database, or other medium used in the embodiments provided herein can include at least one of non-volatile and volatile memory. Non-volatile memory may include read-only memory (ROM), magnetic tape, floppy disk, flash memory, or optical storage. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM can take many forms, such as static random access memory (SRAM) or dynamic random access memory (DRAM).
The technical features of the above embodiments can be combined arbitrarily. For brevity, not all possible combinations of these technical features are described; however, any such combination should be considered within the scope of this specification as long as it contains no contradiction.
The above embodiments express only several implementations of the present application, and their description is relatively specific and detailed, but they should not therefore be construed as limiting the scope of the invention. It should be noted that a person skilled in the art can make several variations and improvements without departing from the concept of the present application, all of which fall within the protection scope of the present application. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (10)

1. A method of identifying a lost object, the method comprising:
after detecting that no person exists in the vehicle cabin, acquiring a video shot aiming at the interior of the vehicle cabin;
inputting the video to a pre-constructed lost object detector so that the lost object detector detects whether the video contains an image corresponding to an object matched with an object class predefined by a user;
and if the lost object detector detects that the video contains images corresponding to objects matched with the object classes predefined by the user, identifying that the user loses the predefined objects in the vehicle cabin.
2. The method of claim 1, wherein after identifying that the user has left a predefined object in a vehicle cabin, the method further comprises:
if the user-predefined object category belongs to a static object category, inputting the image to a pre-constructed static object classifier when the user-predefined object category includes a plurality of sub-categories under the static object category, so that the static object classifier identifies the sub-category to which the object belongs among the plurality of sub-categories.
3. The method of claim 1, wherein after identifying that the user has left a predefined object in a vehicle cabin, the method further comprises:
if the object class predefined by the user belongs to a dynamic object class, inputting the image to a pre-constructed liveness detector so that the liveness detector detects whether the object is a living body;
if so, identifying the object as a living animal.
4. The method of claim 3, wherein the inputting the image to a pre-constructed liveness detector comprises:
detecting whether the motion characteristics of the object are presented in the video;
and if so, inputting the image into the liveness detector.
5. The method according to any one of claims 1 to 4, further comprising:
and if the object class to which the object belongs is an alarm object class predefined by the user, outputting a lost-object alarm reminder to the user.
6. The method of claim 5, wherein prior to outputting a missing object alert reminder to the user, the method further comprises:
and in response to the user entering the vehicle cabin, providing a left article setting interface and prompting the user to enter, on the provided interface, the alarm object categories for the current driving trip.
7. The method according to any one of claims 1 to 4, further comprising:
when a person is detected in the vehicle cabin, triggering detection of whether a person is in the vehicle cabin at intervals of a preset time, the preset time being set according to the estimated time for the user to leave the vehicle cabin.
8. An apparatus for identifying a lost object, the apparatus comprising:
the video acquisition module in the vehicle cabin is used for acquiring a video shot aiming at the interior of the vehicle cabin after detecting that no person is in the vehicle cabin;
the object detection module, configured to input the video to a pre-constructed lost object detector so that the lost object detector detects whether the video contains an image corresponding to an object matched with an object class predefined by a user;
and the object identification module is used for identifying that the user loses the predefined object in the vehicle cabin if the lost object detector detects that the video contains an image corresponding to the object matched with the object type predefined by the user.
9. An in-vehicle terminal comprising a memory and a processor, the memory storing a computer program, characterized in that the processor implements the method of any of claims 1 to 7 when executing the computer program.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the method of any one of claims 1 to 7.
CN202011023368.1A 2020-09-25 2020-09-25 Method and device for identifying lost object, vehicle-mounted terminal and storage medium Pending CN111931734A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011023368.1A CN111931734A (en) 2020-09-25 2020-09-25 Method and device for identifying lost object, vehicle-mounted terminal and storage medium

Publications (1)

Publication Number Publication Date
CN111931734A true CN111931734A (en) 2020-11-13

Family

ID=73334736

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011023368.1A Pending CN111931734A (en) 2020-09-25 2020-09-25 Method and device for identifying lost object, vehicle-mounted terminal and storage medium

Country Status (1)

Country Link
CN (1) CN111931734A (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108875839A (en) * 2018-06-28 2018-11-23 深圳市元征科技股份有限公司 Article reminding method, system and equipment and storage medium are lost in a kind of vehicle
CN109606065A (en) * 2019-01-18 2019-04-12 广州小鹏汽车科技有限公司 Environment control method, device and automobile
CN109766804A (en) * 2018-12-28 2019-05-17 百度在线网络技术(北京)有限公司 Item identification method, device, equipment and storage medium based on vehicle-mounted scene
CN110390804A (en) * 2018-04-20 2019-10-29 比亚迪股份有限公司 Based reminding method, system and the vehicle that interior article and living body lose
US20190354875A1 (en) * 2018-05-18 2019-11-21 Objectvideo Labs, Llc Machine learning for home understanding and notification
CN110866451A (en) * 2019-10-22 2020-03-06 中国第一汽车股份有限公司 In-vehicle life body detection method, device and system and storage medium
CN111415347A (en) * 2020-03-25 2020-07-14 上海商汤临港智能科技有限公司 Legacy object detection method and device and vehicle

Similar Documents

Publication Publication Date Title
EP3965082B1 (en) Vehicle monitoring system and vehicle monitoring method
US10115029B1 (en) Automobile video camera for the detection of children, people or pets left in a vehicle
US10417486B2 (en) Driver behavior monitoring systems and methods for driver behavior monitoring
CN111415347B (en) Method and device for detecting legacy object and vehicle
CN111274881A (en) Driving safety monitoring method and device, computer equipment and storage medium
US20170297523A1 (en) Automatic passenger airbag switch
CN111524332A (en) Responding to in-vehicle environmental conditions
JP7288097B2 (en) Seat belt wearing detection method, device, electronic device, storage medium and program
CN108734056A (en) Vehicle environmental detection device and detection method
CN111986453A (en) Reminder system, vehicle comprising same, and corresponding method and medium
CN115909537A (en) Vehicle data collection system and method of use
CN114332941A (en) Alarm prompting method and device based on riding object detection and electronic equipment
US20210089798A1 (en) Systems and methods of preventing removal of items from vehicles by improper parties
CN111862529A (en) Alarm method and equipment
CN111931734A (en) Method and device for identifying lost object, vehicle-mounted terminal and storage medium
CN111422200B (en) Method and device for adjusting vehicle equipment and electronic equipment
CN112149482A (en) Method, device and equipment for detecting on-duty state of driver and computer storage medium
Lupinska-Dubicka et al. Vehicle passengers detection for onboard eCall-compliant devices
CN112937479A (en) Vehicle control method and device, electronic device and storage medium
JP2018060447A (en) On-vehicle storage device
CN113997898B (en) Living body detection method, apparatus, device and storage medium
US20230107819A1 (en) Seat Occupancy Classification System for a Vehicle
CN111753581A (en) Target detection method and device
CN112208475A (en) Safety protection system for vehicle occupants, vehicle and corresponding method and medium
RU2748780C1 (en) Method and system for detecting alarm events occurring on vehicle during cargo transportation in real time

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20201113
