CN110543850A - Target detection method and device and neural network training method and device


Info

Publication number
CN110543850A
Authority
CN
China
Prior art keywords
target
information
sensors
feature
environmental information
Prior art date
Legal status
Granted
Application number
CN201910816348.0A
Other languages
Chinese (zh)
Other versions
CN110543850B (en)
Inventor
张文蔚
周辉
王哲
石建萍
吕健勤
Current Assignee
Shanghai Sensetime Lingang Intelligent Technology Co Ltd
Original Assignee
Shanghai Sensetime Lingang Intelligent Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Shanghai Sensetime Lingang Intelligent Technology Co Ltd
Priority to CN201910816348.0A
Publication of CN110543850A
Application granted
Publication of CN110543850B
Legal status: Active
Anticipated expiration


Classifications

    • G01S 17/00 Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S 17/02 Systems using the reflection of electromagnetic waves other than radio waves
    • G01S 17/06 Systems determining position data of a target
    • G01S 7/00 Details of systems according to groups G01S 13/00, G01S 15/00, G01S 17/00
    • G01S 7/48 Details of systems according to group G01S 17/00
    • G01S 7/4802 Using analysis of echo signal for target characterisation; Target signature; Target cross-section
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/25 Fusion techniques
    • G06F 18/253 Fusion techniques of extracted features
    • G06T 7/00 Image analysis
    • G06T 7/50 Depth or shape recovery
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features
    • G06V 10/46 Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V 10/462 Salient features, e.g. scale invariant feature transforms [SIFT]
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/50 Context or environment of the image
    • G06V 20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10028 Range image; Depth image; 3D point clouds
    • G06T 2207/10032 Satellite or aerial image; Remote sensing
    • G06T 2207/10044 Radar image
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20081 Training; Learning
    • G06T 2207/20084 Artificial neural networks [ANN]
    • G06V 2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V 2201/07 Target detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Electromagnetism (AREA)
  • Remote Sensing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Traffic Control Systems (AREA)

Abstract

The present disclosure relates to a target detection method and apparatus and a neural network training method and apparatus. The target detection method includes: acquiring environmental information of a smart device respectively through N sensors disposed on the smart device, where N is a positive integer greater than or equal to 2; performing feature extraction on a target in the environmental information acquired by M sensors of the N sensors to obtain target feature information of the environmental information acquired by the M sensors, where M is a positive integer less than or equal to N; fusing the target feature information of the environmental information acquired by L sensors of the M sensors to obtain a target fusion feature, where L is a positive integer less than or equal to M; and determining a detection result of the target according to the target fusion feature. Embodiments of the present disclosure can improve the accuracy and reliability of target detection.

Description

Target detection method and device and neural network training method and device
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to a target detection method and apparatus, and a neural network training method and apparatus.
Background
Autonomous driving is an important research direction in artificial intelligence. To improve the reliability of an autonomous driving system, the system's perception of objects in the environment (e.g., vehicles, pedestrians, etc.) needs to be improved. An autonomous driving system may sense environmental information through sensors and detect and track objects based on the sensors' detection data.
Disclosure of Invention
The present disclosure provides a technical solution for target detection.
According to an aspect of the present disclosure, there is provided an object detection method, including: acquiring environmental information of a smart device respectively through N sensors disposed on the smart device, wherein N is a positive integer greater than or equal to 2; performing feature extraction on a target in the environmental information acquired by M sensors of the N sensors, respectively, to obtain target feature information of the environmental information acquired by the M sensors, wherein M is a positive integer less than or equal to N; fusing the target feature information of the environmental information acquired by L sensors of the M sensors to obtain a target fusion feature, wherein L is a positive integer less than or equal to M; and determining a detection result of the target according to the target fusion feature.
In one possible implementation, the method further includes: and determining the detection result of the target according to the target characteristic information of the environmental information acquired by any one of the N sensors.
In one possible implementation, the method further includes: screening out environmental information acquired by the N-M sensors according to a first preset condition; and screening out target characteristic information of the environmental information acquired by the M-L sensors according to a second preset condition.
In one possible implementation, the first preset condition includes at least one of: the sensor is in an abnormal working state, and the environmental information collected by the sensor does not meet the quality requirement.
In a possible implementation manner, fusing target feature information of environmental information acquired by L sensors of the M sensors to obtain a target fusion feature, including: determining a feature fusion weight corresponding to each sensor of the L sensors; and performing feature fusion according to the feature fusion weight corresponding to each sensor in the L sensors and the target feature information to obtain target fusion features.
In one possible implementation manner, determining the feature fusion weight corresponding to each sensor of the L sensors includes: and determining the feature fusion weight corresponding to each sensor in the L sensors according to the meteorological condition and/or the light condition of the environment where the intelligent equipment is located.
In one possible implementation, the detection result includes a correlation prediction result; the environmental information collected by the L sensors at least includes L first environmental information at a first time and L second environmental information at a second time after the first time, and the determining of the detection result of the target according to the target fusion feature includes:
And predicting the relevance between the first target and the second target according to the target fusion characteristics of the first target in the L pieces of first environment information and the target fusion characteristics of the second target in the L pieces of second environment information to obtain the relevance prediction result of the first target and the second target.
In one possible implementation, the association prediction result includes at least one of: a probability that the first target and the second target are the same target, a probability that the first target is an end target, a probability that the second target is a start target, and a confidence of the first target and the second target, wherein the end target represents a target that is in the first environment information but not in the second environment information, and the start target represents a target that is in the second environment information but not in the first environment information.
In a possible implementation manner, the performing feature extraction on the target respectively to obtain the target feature information of the environmental information acquired by the M sensors in the N sensors includes: respectively carrying out target detection on the environmental information acquired by the M sensors, and determining the area information of the target in the environmental information of the M sensors; and respectively extracting the characteristics of the information of each area to obtain target characteristic information of the environmental information of the M sensors.
In a possible implementation manner, the M sensors include a laser radar, the environment information collected by the laser radar includes point cloud information, the area information of the point cloud information includes area point cloud information, the characteristic extraction is performed on each area information respectively to obtain target characteristic information of the environment information of the M sensors, and the method includes: and carrying out depth feature extraction on the regional point cloud information of the point cloud information to obtain the target point cloud depth feature of the point cloud information.
In a possible implementation manner, fusing target feature information of environmental information acquired by L sensors of the M sensors to obtain a target fusion feature, including: and performing any one of splicing, adding and attention weight-based adding processing on the target characteristic information of the environmental information acquired by the L sensors to obtain the target fusion characteristic.
In a possible implementation manner, the smart device includes any one of a smart vehicle, a smart robot, and a smart mechanical arm, and the N sensors include at least one of a camera, a laser radar, and a millimeter-wave radar.
According to an aspect of the present disclosure, there is provided a neural network training method, including: obtaining a training data set, the training data set including environmental information of a smart device acquired respectively through X sensors disposed on the smart device, and label information of a target in the environmental information, wherein X is a positive integer greater than or equal to 2; performing, by a feature extraction module of a neural network, feature extraction on the target in the environmental information collected by each sensor, respectively, to obtain target feature information of each piece of environmental information; fusing, by a feature fusion module of the neural network, each piece of target feature information to obtain a target fusion feature; performing, by a prediction module of the neural network, target detection according to each piece of target feature information and the target fusion feature, respectively, to obtain X+1 detection results of the target; and adjusting network parameters of the neural network according to differences between the X+1 detection results of the target and the label information.
in a possible implementation manner, the environment information collected by the X sensors at least includes X third environment information at a third time and X fourth environment information at a fourth time after the third time, and the training data set includes association information between a third target of the X third environment information and a fourth target of the X fourth environment information, where the method further includes: a prediction module based on a neural network performs relevance prediction according to each target feature information and target fusion feature of the X pieces of third environment information and each target feature information and target fusion feature of the X pieces of fourth environment information to obtain a relevance prediction result of the third target and the fourth target; and adjusting the network parameters of the neural network according to the difference between the correlation prediction results and the correlation information of the third target and the fourth target.
In one possible implementation, the associated prediction result of the third target and the fourth target includes at least one of a probability that the third target and the fourth target are the same target, a probability that the third target is an end target, a probability that the fourth target is a start target, and a confidence of the third target and the fourth target, wherein the end target represents a target in the third environment information but not in the fourth environment information, and the start target represents a target in the fourth environment information but not in the third environment information.
in a possible implementation manner, the feature extraction module based on the neural network respectively performs feature extraction on the target to obtain target feature information of each environmental information, including: respectively carrying out target detection on the environmental information acquired by the X sensors, and determining the area information of the target in the environmental information of the X sensors; and respectively extracting the characteristics of the information of each area to obtain target characteristic information of the environmental information of the X sensors.
in a possible implementation manner, the X sensors include a laser radar, the environment information collected by the laser radar includes point cloud information, the area information of the point cloud information includes area point cloud information, the feature extraction is performed on each area information respectively to obtain target feature information of the environment information of the X sensors, which includes: and carrying out depth feature extraction on the regional point cloud information of the point cloud information to obtain the target point cloud depth feature of the point cloud information.
In a possible implementation manner, the fusing information of each target feature by the feature fusion module based on the neural network to obtain the target fusion feature includes: and performing any one of splicing, adding and attention weight-based adding processing on the target characteristic information of the environmental information acquired by the X sensors to obtain the target fusion characteristic.
In a possible implementation manner, the smart device includes any one of a smart vehicle, a smart robot, and a smart mechanical arm, and the X sensors include at least one of a camera, a laser radar, and a millimeter-wave radar.
According to an aspect of the present disclosure, there is provided an object detection apparatus including: the information acquisition module is used for respectively acquiring the environmental information of the intelligent equipment through N sensors arranged on the intelligent equipment, wherein N is a positive integer greater than or equal to 2; the extraction module is used for respectively extracting the characteristics of targets from the environmental information acquired by M sensors in the N sensors to obtain the target characteristic information of the environmental information acquired by the M sensors, wherein M is a positive integer less than or equal to N; the fusion module is used for fusing target characteristic information of the environmental information acquired by L sensors in the M sensors to obtain target fusion characteristics, wherein L is a positive integer less than or equal to M; and the first result determining module is used for determining the detection result of the target according to the target fusion characteristic.
In one possible implementation, the apparatus further includes: and the second result determining module is used for determining the detection result of the target according to the target characteristic information of the environmental information acquired by any one of the N sensors.
In one possible implementation, the apparatus further includes: the first screening module is used for screening out the environmental information acquired by the N-M sensors according to a first preset condition; and the second screening module is used for screening target characteristic information of the environmental information acquired by the M-L sensors according to a second preset condition.
In one possible implementation, the first preset condition includes at least one of: the sensor is in an abnormal working state, and the environmental information collected by the sensor does not meet the quality requirement.
In one possible implementation, the fusion module includes: the weight determining submodule is used for determining the feature fusion weight corresponding to each sensor in the L sensors; and the first fusion submodule is used for carrying out feature fusion according to the feature fusion weight corresponding to each sensor in the L sensors and the target feature information to obtain target fusion features.
In one possible implementation, the weight determining submodule is configured to: and determining the feature fusion weight corresponding to each sensor in the L sensors according to the meteorological condition and/or the light condition of the environment where the intelligent equipment is located.
In one possible implementation, the detection result includes a correlation prediction result; the environmental information collected by the L sensors at least includes L first environmental information at a first time and L second environmental information at a second time after the first time, and the first result determination module includes: and the association prediction sub-module is configured to predict, according to the target fusion feature of the first target in the L pieces of first environment information and the target fusion feature of the second target in the L pieces of second environment information, the association between the first target and the second target, so as to obtain an association prediction result of the first target and the second target.
In one possible implementation, the associated prediction result includes at least one of a probability that the first target and the second target are the same target, a probability that the first target is an end target, a probability that the second target is a start target, and a confidence of the first target and the second target, wherein the end target represents a target in the first environment information but not in the second environment information, and the start target represents a target in the second environment information but not in the first environment information.
in one possible implementation, the extraction module includes: the detection submodule is used for respectively carrying out target detection on the environmental information acquired by the M sensors and determining the area information where the target is located in the environmental information of the M sensors; and the extraction submodule is used for respectively extracting the characteristics of the information of each area to obtain the target characteristic information of the environmental information of the M sensors.
In one possible implementation, the M sensors include a laser radar, the environment information collected by the laser radar includes point cloud information, the area information of the point cloud information includes area point cloud information, and the extraction sub-module is configured to: and carrying out depth feature extraction on the regional point cloud information of the point cloud information to obtain the target point cloud depth feature of the point cloud information.
In one possible implementation, the fusion module includes: a second fusion submodule configured to perform any one of splicing, adding, and attention-weight-based adding on the target feature information of the environmental information acquired by the L sensors to obtain the target fusion feature.
in a possible implementation manner, the smart device includes any one of a smart vehicle, a smart robot, and a smart mechanical arm, and the N sensors include at least one of a camera, a laser radar, and a millimeter-wave radar.
According to an aspect of the present disclosure, there is provided a neural network training apparatus including: a dataset acquisition module to acquire a training dataset, the training dataset comprising: respectively acquiring environmental information of the intelligent equipment and label information of a target in the environmental information through X sensors arranged on the intelligent equipment, wherein X is a positive integer greater than or equal to 2; the characteristic extraction module is used for respectively extracting the characteristics of the target from the environmental information acquired by each sensor to obtain the target characteristic information of each environmental information; the characteristic fusion module is used for fusing the characteristic information of each target to obtain target fusion characteristics; the prediction module is used for respectively carrying out target detection according to the target characteristic information and the target fusion characteristic to obtain X +1 detection results of the target; and the parameter adjusting module is used for adjusting the network parameters of the neural network according to the difference between the X +1 detection result of the target and the labeled information.
In a possible implementation manner, the environmental information acquired by the X sensors at least includes X third environmental information at a third time and X fourth environmental information at a fourth time after the third time, the training data set includes correlation information between a third target of the X third environmental information and a fourth target of the X fourth environmental information, and the prediction module is further configured to perform correlation prediction according to each target feature information and target fusion feature of the X third environmental information and each target feature information and target fusion feature of the X fourth environmental information, so as to obtain a correlation prediction result of the third target and the fourth target; the parameter adjusting module is further configured to adjust a network parameter of the neural network according to a difference between the correlation prediction result and the correlation information of the third target and the fourth target.
In one possible implementation, the associated prediction result of the third target and the fourth target includes at least one of a probability that the third target and the fourth target are the same target, a probability that the third target is an end target, a probability that the fourth target is a start target, and a confidence of the third target and the fourth target, wherein the end target represents a target in the third environment information but not in the fourth environment information, and the start target represents a target in the fourth environment information but not in the third environment information.
In one possible implementation, the feature extraction module includes: the target detection submodule is used for respectively carrying out target detection on the environmental information acquired by the X sensors and determining the area information where the target is located in the environmental information of the X sensors; and the characteristic extraction submodule is used for respectively extracting the characteristics of the information of each area to obtain the target characteristic information of the environmental information of the X sensors.
In one possible implementation, the X sensors include a laser radar, the environment information collected by the laser radar includes point cloud information, the area information of the point cloud information includes area point cloud information, and the feature extraction sub-module is configured to: and carrying out depth feature extraction on the regional point cloud information of the point cloud information to obtain the target point cloud depth feature of the point cloud information.
In one possible implementation, the feature fusion module includes: a feature fusion submodule configured to perform any one of splicing, adding, and attention-weight-based adding on the target feature information of the environmental information acquired by the X sensors to obtain the target fusion feature.
In a possible implementation manner, the smart device includes any one of a smart vehicle, a smart robot, and a smart mechanical arm, and the X sensors include at least one of a camera, a laser radar, and a millimeter-wave radar.
According to an aspect of the present disclosure, there is provided an electronic device including: a processor; and a memory for storing processor-executable instructions; wherein the processor is configured to invoke the instructions stored in the memory to perform the above-described object detection method.
According to an aspect of the present disclosure, there is provided a computer-readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement the above-described object detection method.
In the embodiment of the disclosure, the target characteristic information is extracted according to the environmental information acquired by the multiple sensors arranged on the intelligent device, and the detection result of the target is determined through the target fusion characteristics of the multiple target characteristic information, so that the precision and the reliability of target detection are improved.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure. Other features and aspects of the present disclosure will become apparent from the following detailed description of exemplary embodiments, which proceeds with reference to the accompanying drawings.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the disclosure.
Fig. 1 shows a flowchart of a target detection method according to an embodiment of the present disclosure.
Fig. 2 is a schematic diagram illustrating an application example of a processing procedure of the object detection method according to the embodiment of the present disclosure.
Fig. 3 shows a flow diagram of a neural network training method according to an embodiment of the present disclosure.
Fig. 4 shows a block diagram of an object detection apparatus according to an embodiment of the present disclosure.
Fig. 5 illustrates a block diagram of a neural network training device in accordance with an embodiment of the present disclosure.
Fig. 6 illustrates a block diagram of an electronic device in accordance with an embodiment of the disclosure.
Detailed Description
Various exemplary embodiments, features and aspects of the present disclosure will be described in detail below with reference to the accompanying drawings. In the drawings, like reference numbers can indicate functionally identical or similar elements. While the various aspects of the embodiments are presented in drawings, the drawings are not necessarily drawn to scale unless specifically indicated.
The word "exemplary" is used exclusively herein to mean "serving as an example, embodiment, or illustration. Any embodiment described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments.
The term "and/or" herein is merely an association describing an associated object, meaning that three relationships may exist, e.g., a and/or B, may mean: a exists alone, A and B exist simultaneously, and B exists alone. In addition, the term "at least one" herein means any one of a plurality or any combination of at least two of a plurality, for example, including at least one of A, B, C, and may mean including any one or more elements selected from the group consisting of A, B and C.
Furthermore, in the following detailed description, numerous specific details are set forth in order to provide a better understanding of the present disclosure. It will be understood by those skilled in the art that the present disclosure may be practiced without some of these specific details. In some instances, methods, means, elements, and circuits that are well known to those skilled in the art have not been described in detail so as not to obscure the present disclosure.
Fig. 1 shows a flowchart of an object detection method according to an embodiment of the present disclosure. As shown in Fig. 1, the object detection method includes:
In step S11, environmental information of the smart device is acquired respectively through N sensors disposed on the smart device, where N is a positive integer greater than or equal to 2;
In step S12, feature extraction of a target is performed respectively on the environmental information acquired by M sensors of the N sensors to obtain target feature information of the environmental information acquired by the M sensors, where M is a positive integer less than or equal to N;
In step S13, target feature information of the environmental information acquired by L sensors of the M sensors is fused to obtain a target fusion feature, where L is a positive integer less than or equal to M;
In step S14, a detection result of the target is determined according to the target fusion feature.
In a possible implementation manner, the object detection method may be performed by an electronic device such as a terminal device or a server, the terminal device may be a User Equipment (UE), a mobile device, a User terminal, a cellular phone, a cordless phone, a Personal Digital Assistant (PDA), a handheld device, a computing device, an in-vehicle device, a wearable device, or the like, and the method may be implemented by a processor calling a computer readable instruction stored in a memory. Alternatively, the method may be performed by a server.
In one possible implementation, the smart device may include a device with computing and processing capabilities, and may have sensing, decision, and execution functions. The smart device may include, for example, any one of a smart vehicle, a smart robot, or a smart robotic arm. The present disclosure does not limit the type of the smart device.
In a possible implementation manner, the smart device may be provided with a plurality of sensors for acquiring environmental information of an environment where the smart device is located, for example, environmental information such as road conditions outside the smart vehicle, vehicles, pedestrians, and the like. Wherein, the plurality of sensors may comprise at least one of a camera, a laser radar and a millimeter wave radar, and the number of each sensor may be one or more. At least two sensors are disposed at different locations of the smart device. The present disclosure does not limit the number, type, and placement of the sensors on the smart device.
In a possible implementation manner, N sensors may be provided on the smart device, where N is a positive integer greater than or equal to 2. In step S11, the environment information of the smart device is respectively collected by N sensors disposed on the smart device, so as to obtain N pieces of environment information, such as image information collected by a camera, point cloud information collected by a laser radar, and the like.
In one possible implementation, the environmental information collected by M sensors that satisfy a condition may be determined from the environmental information collected by the N sensors, where M is a positive integer less than or equal to N. If the environmental information acquired by all N sensors meets the preset condition, M is equal to N; if the environmental information collected by some of the N sensors does not meet the preset condition, M is smaller than N. Therefore, through the redundant configuration of multiple sensors, features can be extracted from all or part of the collected environmental information to realize target detection, which can improve the reliability and accuracy of target detection.
In one possible implementation, in step S12, feature extraction is performed on the target in the environmental information collected by the M sensors to obtain target feature information of the M pieces of environmental information. In the case that the smart device is a smart vehicle, the targets to be analyzed may include, but are not limited to, one or more dynamic or static targets in the environment, such as vehicles, pedestrians, road signs, and traffic signs; the present disclosure does not limit the specific types and numbers of the targets.
In a possible implementation manner, a target in the environment information can be detected through a target detection network, and an area where the target in the environment information is located is determined; and performing feature extraction on the region where the target is located through a feature extraction network to obtain target feature information. The target detection network and the feature extraction network may both include a convolutional neural network, and the network structure of the target detection network and the feature extraction network is not limited by the disclosure.
In one possible implementation manner, in step S13, target feature information of the environmental information collected by L sensors of the M sensors may be fused to obtain a target fusion feature, where L is a positive integer less than or equal to M. That is, the target feature information of the L pieces of environment information is subjected to processing such as splicing and adding, so as to obtain the target fusion feature.
In one possible implementation manner, if the environmental information acquired by the N sensors and the target characteristic information of each piece of environmental information both satisfy a preset condition, L is equal to M and equal to N; if the target characteristic information of the environmental information acquired by the M sensors meets the preset condition, L is equal to M; and if the target characteristic information of the environmental information acquired by part of the M sensors does not meet the preset condition, L is smaller than M. Therefore, through the redundant configuration of the multiple sensors, the target characteristic information according to the environmental information collected by all or part of the sensors can be fused to realize target detection, and the reliability and the accuracy of the target detection can be improved.
In one possible implementation, the detection result of the target may be determined according to the target fusion feature in step S14. For the environmental information at a moment, the target fusion characteristics of the environmental information at the moment can be subjected to target detection, and a more accurate target detection result is determined. For the environment information at a plurality of times, for example, the first environment information at the first time and the second environment information at the second time after the first time, the relevance between the targets in the first environment information and the second environment information can be predicted according to the target fusion feature of the first environment information and the target fusion feature of the second environment information, and the relevance measurement result between the targets can be determined, so that the target tracking can be realized. The present disclosure does not limit the manner of detection and the type of detection result.
According to the target detection method provided by the embodiments of the present disclosure, target feature information can be extracted from the environmental information acquired by the multiple sensors disposed on the smart device, and the detection result of the target can be determined from the target fusion feature of the multiple pieces of target feature information, thereby improving the accuracy and reliability of target detection.
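As an illustrative sketch only, the overall flow of steps S11 to S14 might be organized as follows; the names extractors, fuse, and detect_head are hypothetical placeholders for the feature extraction networks, fusion operation, and detection head, and are not defined in the disclosure.

```python
# Hypothetical end-to-end sketch of steps S11-S14.
def detect_targets(env_inputs, extractors, fuse, detect_head):
    """env_inputs: dict sensor_id -> environmental information from the N sensors.
    extractors: dict sensor_id -> feature-extraction network (only for usable sensors).
    fuse: function fusing a list of target feature tensors into one fusion feature.
    detect_head: network mapping the target fusion feature to a detection result."""
    # S12: extract target feature information for the M screened sensors
    features = {sid: extractors[sid](data)
                for sid, data in env_inputs.items() if sid in extractors}
    # S13: fuse the target feature information of the L usable sensors
    fused = fuse(list(features.values()))
    # S14: determine the detection result of the target from the fusion feature
    return detect_head(fused)
```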
In a possible implementation manner, before step S12, the object detection method may further include:
And screening out the environmental information collected by the N-M sensors according to a first preset condition.
For example, when the N sensors of the smart device respectively collect environmental information, the collected environmental information may be screened to filter out inaccurate or low-quality environmental information. A first preset condition may be set for screening out environmental information, which may include, for example, at least one of the following: the sensor is in an abnormal working state, or the environmental information collected by the sensor does not meet the quality requirement.
In a possible implementation manner, if the sensor is in an abnormal working state, the environmental information acquired by the sensor may be inaccurate, and the environmental information acquired by the sensor can be screened out; if the environmental information collected by the sensor does not meet the quality requirement, the target in the environmental information may not be detected, and the environmental information collected by the sensor may be screened out. Therefore, the environmental information acquired by the N-M sensors can be screened out from the environmental information acquired by the N sensors, and the environmental information acquired by the M sensors can be determined. It should be understood that the first preset condition can be set by those skilled in the art according to practical situations, and the disclosure is not limited thereto.
By the method, the interference of environmental information which does not conform to the conditions on the detection result can be avoided, the accuracy of the detection result is improved, and meanwhile, the calculated amount in the target detection process is reduced.
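A minimal sketch of this screening step is given below; the SensorReading structure and quality_score field are assumptions used only to illustrate the first preset condition (abnormal working state, or quality below a requirement).

```python
from dataclasses import dataclass

@dataclass
class SensorReading:
    sensor_id: str
    working_normally: bool     # sensor working state
    quality_score: float       # e.g. image sharpness or point-cloud density
    data: object               # the collected environmental information

def screen_readings(readings, min_quality=0.5):
    """Keep the M readings that satisfy the first preset condition;
    the other N-M readings are screened out."""
    return [r for r in readings
            if r.working_normally and r.quality_score >= min_quality]
```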
In one possible implementation manner, the environmental information collected by the M sensors may be subjected to feature extraction of the target in step S12. Wherein, the step S12 may include:
Respectively carrying out target detection on the environmental information acquired by the M sensors, and determining the area information of the target in the environmental information of the M sensors;
and respectively extracting the characteristics of the information of each area to obtain target characteristic information of the environmental information of the M sensors.
For example, the target in each piece of environmental information may be detected by the target detection network, and the area information of the area where the target is located in the environmental information may be determined. The target detection network can be a neural network, such as a convolutional neural network or a cyclic neural network or a deep neural network, and the target detection is performed on the input image through the neural network to obtain two-dimensional or three-dimensional area information corresponding to the target in the image. The present disclosure is not limited as to the type of target detection network.
In one possible implementation, each piece of environmental information may be processed by the same target detection network. Depending on the environmental information, the target detection network may directly detect the area information where the target is located, or a conversion step may be required.
For example, when a two-dimensional object detection network is used to detect an object in image information and point cloud information, the object detection network may directly detect a two-dimensional rectangular frame from the image information, and determine an area in the rectangular frame as area information where the object in the image information is located; for the point cloud information, the target detection network can detect a two-dimensional rectangular frame, the two-dimensional rectangular frame needs to be projected into a three-dimensional view cone, and an area in the view cone is determined as area information where the target is located, so that the next processing can be performed.
For example, when a three-dimensional target detection network is used to detect a target in image information and point cloud information, the target detection network can directly detect a three-dimensional view cone from the point cloud information and determine the area within the view cone as the area information where the target is located; for the image information, the target detection network detects a three-dimensional view cone, which needs to be projected into a two-dimensional rectangular frame, and the area within the rectangular frame is determined as the area information where the target is located, to facilitate subsequent processing. The present disclosure does not limit the selection and training of the target detection network or the specific processing of the region frames.
In this way, the detected area information can be made to correspond across sensors, improving the consistency of detection.
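A possible way to make a two-dimensional rectangular frame correspond to a three-dimensional region is sketched below: lidar points are projected into the image with an assumed lidar-to-image projection matrix P_cam, and the points falling inside the detected box are kept as the region (view cone) point cloud. This is an illustration, not the specific projection used in the disclosure.

```python
import numpy as np

def frustum_points(points_xyz, box_2d, P_cam):
    """points_xyz: (N, 3) lidar points; box_2d: (x1, y1, x2, y2) in pixels;
    P_cam: (3, 4) projection matrix from the lidar frame to image pixels."""
    pts_h = np.hstack([points_xyz, np.ones((points_xyz.shape[0], 1))])   # (N, 4)
    proj = pts_h @ P_cam.T                                               # (N, 3)
    in_front = proj[:, 2] > 0
    u = proj[:, 0] / np.clip(proj[:, 2], 1e-6, None)
    v = proj[:, 1] / np.clip(proj[:, 2], 1e-6, None)
    x1, y1, x2, y2 = box_2d
    inside = in_front & (u >= x1) & (u <= x2) & (v >= y1) & (v <= y2)
    return points_xyz[inside]     # region point cloud information for this target
```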
In a possible implementation manner, after obtaining the area information where the target is located in each piece of environmental information, feature extraction may be performed on each piece of area information through a feature extraction network corresponding to that environmental information. For example, for image information, a convolutional neural network can be used for feature extraction; for point cloud information, a deep convolutional network designed for the point cloud data format can be used. For other types of sensing data, corresponding feature extraction networks can be designed. The present disclosure does not limit the network structure of the feature extraction network corresponding to each category of sensing data. In this way, features can be extracted by feature extraction networks adapted to different data categories, thereby extracting the information in the sensing data more effectively.
In one possible implementation, a pooling layer may be provided at the end of each feature extraction network, and the extracted features are converted into a feature vector by pooling (with a vector length of D, for example, D = 512); the feature vector is used as the target feature information of the corresponding environmental information.
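For instance, with a convolutional backbone the pooling step could be a global average pool that collapses the spatial dimensions into a single D-dimensional vector; the shapes below are illustrative.

```python
import torch
import torch.nn as nn

pool = nn.AdaptiveAvgPool2d(1)                  # global average pooling to 1x1
feature_map = torch.randn(1, 512, 7, 7)         # (batch, D, H, W) from a CNN backbone
target_feature = pool(feature_map).flatten(1)   # (1, 512) target feature vector
```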
In one possible implementation, the M sensors include a laser radar, the environmental information collected by the laser radar includes point cloud information, and the area information of the point cloud information includes area point cloud information.
the step of extracting the features of the information of each region to obtain the target feature information of the environmental information of the M sensors may include: and carrying out depth feature extraction on the regional point cloud information of the point cloud information to obtain the target point cloud depth feature of the point cloud information.
That is, for the point cloud information collected by the laser radar, the three-dimensional view cone of the target in the point cloud information can be detected through the target detection network and used as the regional point cloud information. And performing depth feature extraction on the regional point cloud information through a deep neural network to obtain the target point cloud depth feature of the point cloud information. The present disclosure is not limited to a particular type of deep neural network.
By the method, the depth characteristics of the point cloud information can be extracted, and richer information can be obtained.
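One common choice for such a deep network is a PointNet-style per-point MLP followed by a symmetric pooling; the sketch below is an assumption for illustration, since the disclosure does not fix a specific architecture.

```python
import torch
import torch.nn as nn

class PointFeatureNet(nn.Module):
    """Maps region point cloud information (B, 3, num_points) to a depth feature (B, d_out)."""
    def __init__(self, d_out=512):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Conv1d(3, 64, 1), nn.ReLU(),
            nn.Conv1d(64, 128, 1), nn.ReLU(),
            nn.Conv1d(128, d_out, 1),
        )

    def forward(self, pts):
        x = self.mlp(pts)                    # per-point features
        return torch.max(x, dim=2).values    # order-invariant pooling over points

depth_feature = PointFeatureNet()(torch.randn(2, 3, 1024))
```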
In a possible implementation manner, before step S13, the object detection method may further include:
And screening out target characteristic information of the environmental information acquired by the M-L sensors according to a second preset condition.
For example, after obtaining the target feature information of the environmental information collected by the M sensors, the target feature information may be screened to filter out inaccurate or low-quality target feature information. A second preset condition may be set for screening out target feature information; it may include, for example, that the target feature information does not meet the requirements, or that the target feature information does not belong to the feature category to be analyzed. It should be understood that the second preset condition can be set by those skilled in the art according to the actual situation, and the present disclosure is not limited thereto.
By this method, the target feature information of the environmental information acquired by M-L sensors can be screened out according to the second preset condition, thereby avoiding interference of non-conforming target feature information with the detection result, improving the accuracy of the detection result, and reducing the amount of computation in the target detection process.
In one possible implementation, after obtaining the target characteristic information of the environmental information collected by the L sensors, the fusion may be performed in step S13. Wherein, the step S13 may include:
Determining a feature fusion weight corresponding to each sensor of the L sensors;
And performing feature fusion according to the feature fusion weight corresponding to each sensor in the L sensors and the target feature information to obtain target fusion features.
For example, feature fusion weights may be set for each sensor of the smart device according to information such as importance of each sensor in the smart device, an environmental condition where the smart device is located, and accuracy of each sensor, so that target feature information of environmental information acquired by each sensor contributes differently to fusion features. After the feature fusion weights of the L sensors are determined, feature fusion can be performed according to the feature fusion weight corresponding to each sensor in the L sensors and the target feature information to obtain target fusion features. In this way, the accuracy of the detection result can be improved.
In one possible implementation manner, the step of determining the feature fusion weight corresponding to each sensor of the L sensors may include: and determining the feature fusion weight corresponding to each sensor in the L sensors according to the meteorological condition and/or the light condition of the environment where the intelligent equipment is located.
For example, the feature fusion weight of each sensor can be determined according to the environmental condition of the smart device, such as weather conditions, light conditions, and the like. For example, when the weather condition is sunny, the feature fusion weight corresponding to the camera can be set to be larger; in rainy and foggy weather, the feature fusion weight corresponding to the camera can be set to be smaller. When the light condition is bright (such as daytime), the feature fusion weight corresponding to the camera can be set to be larger, and the feature fusion weight corresponding to the laser radar can be set to be smaller; when the light condition is dim (for example, at night), the feature fusion weight corresponding to the camera can be set to be smaller, and the feature fusion weight corresponding to the laser radar can be set to be larger. It should be understood that, those skilled in the art can set the feature fusion weight corresponding to each sensor according to the actual situation, and the present disclosure does not limit this.
By the method, the feature fusion weight of each sensor can be set according to the environmental condition, so that the importance degree of each sensor under different conditions is reflected, and the accuracy of the detection result is further improved.
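The weight values below are purely illustrative assumptions showing how weather and light conditions could be mapped to per-sensor feature fusion weights; they are not values specified in the disclosure.

```python
def fusion_weights(weather: str, light: str) -> dict:
    """Return a feature fusion weight per sensor type for the current conditions."""
    weights = {"camera": 1.0, "lidar": 1.0, "mmwave_radar": 1.0}
    if weather in ("rain", "fog"):
        weights["camera"] = 0.4        # camera features are less reliable in rain or fog
    if light == "night":
        weights["camera"] = 0.3        # rely more on lidar when light is dim
        weights["lidar"] = 1.2
    return weights
```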
In one possible implementation, step S13 may include:
And performing any one of splicing, adding and attention weight-based adding processing on the target characteristic information of the environmental information acquired by the L sensors to obtain the target fusion characteristic.
For example, the target feature information may be fused in various ways, such as concatenation, addition, and attention-weight-based addition. For example, when the target feature information is a D-dimensional feature vector, the feature vectors of the L pieces of environmental information may be concatenated to obtain an L×D-dimensional feature vector, and a point-wise convolution may be performed on the L×D-dimensional feature vector according to a preset weight, so that the dimension of the output vector is the same as that of the original feature vector, thereby obtaining the target fusion feature. The preset weight may be the same as or different from the feature fusion weight, and the specific value of the preset weight is not limited in the present disclosure.
In a possible implementation manner, the target feature information of the L pieces of environment information may be directly added to obtain the target fusion feature. In a possible implementation manner, the attention weight of each target feature information may be determined according to the attention mechanism, and the target fusion features are obtained by adding each target feature information based on the attention weight.
By the method, the target fusion characteristics can be obtained, the target detection is carried out according to the target fusion characteristics, and the reasoning precision of the system can be obviously improved.
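The three fusion options can be sketched as follows; the tensor shapes, the point-wise convolution layer, and the attention layer are illustrative assumptions rather than the configuration used in the disclosure.

```python
import torch
import torch.nn as nn

L, D = 3, 512
feats = [torch.randn(1, D) for _ in range(L)]     # one target feature vector per sensor

# (a) splice the L vectors into an L x D feature, then a point-wise (1x1) convolution
#     so the output has the same dimension D as a single feature vector
stacked = torch.stack(feats, dim=1)               # (1, L, D)
pw_conv = nn.Conv1d(in_channels=L, out_channels=1, kernel_size=1)
fused_concat = pw_conv(stacked).squeeze(1)        # (1, D)

# (b) direct element-wise addition
fused_add = torch.stack(feats).sum(dim=0)         # (1, D)

# (c) attention-weight-based addition
attn = torch.softmax(nn.Linear(D, 1)(torch.cat(feats)).squeeze(-1), dim=0)   # (L,)
fused_attn = sum(w * f for w, f in zip(attn, feats))                         # (1, D)
```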
In a possible implementation manner, in the case that there are multiple targets in the environment information collected by the L sensors, the correspondence between the targets may be determined according to area information of the targets in the environment information, geographic positions of the targets, and the like in each piece of environment information. For example, a target in which the degree of overlap between the region information in each piece of environment information is greater than or equal to a threshold is determined as a corresponding target; and determining the target with the distance between the geographic positions in the various environmental information smaller than or equal to a threshold value as the corresponding target. The present disclosure does not limit the manner of determining the correspondence relationship between the objects in each piece of environmental information.
In a possible implementation manner, target feature information of a corresponding target in each piece of environment information may be fused to obtain a plurality of target fusion features, and then the detection result of each target is determined according to the plurality of target fusion features.
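A simple correspondence rule of this kind could match detections across sensors greedily by region overlap; the IoU threshold below is an illustrative assumption.

```python
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def match_targets(boxes_a, boxes_b, iou_thresh=0.5):
    """Pair up targets detected by two sensors when their regions overlap enough."""
    pairs = []
    for i, ba in enumerate(boxes_a):
        j_best = max(range(len(boxes_b)), key=lambda j: iou(ba, boxes_b[j]), default=None)
        if j_best is not None and iou(ba, boxes_b[j_best]) >= iou_thresh:
            pairs.append((i, j_best))
    return pairs
```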
In a possible implementation manner, the target detection method may further include:
And determining the detection result of the target according to the target characteristic information of the environmental information acquired by any one of the N sensors.
For example, environmental information collected by N sensors of the smart device and/or target characteristic information of each piece of environmental information may have defects, such as the sensors being in an abnormal operating state, the environmental information collected by the sensors not meeting quality requirements, the target characteristic information not meeting requirements, and the like. In the case where the sensor is a camera, the camera may not work normally due to damage, the quality of the acquired image information may be poor due to the presence of fog or dirt, the target feature information extracted based on the image information may be inaccurate in a night environment, and the like.
In such cases, target detection may be performed using the target feature information of the environmental information collected by any single sensor that meets the requirements, to obtain a detection result of the target; alternatively, the target feature information of the sensors among the L sensors that meet the requirements may be fused and target detection performed on the fused feature.
In this way, when a certain sensor fails, the target feature information or fusion feature of the sensors that still work can be used for target detection, which improves the reliability of the system.
in one possible implementation, the detection result includes a correlation prediction result; the environmental information collected by the L sensors at least comprises L first environmental information at a first moment and L second environmental information at a second moment after the first moment. Wherein, the step S14 may include:
And predicting the relevance between the first target and the second target according to the target fusion characteristics of the first target in the L pieces of first environment information and the target fusion characteristics of the second target in the L pieces of second environment information to obtain the relevance prediction result of the first target and the second target.
For example, target tracking may be performed for targets in the environmental information at multiple times. For the M pieces of first environment information at the first time and the M pieces of second environment information at the second time, in step S12, feature extraction of the target may be performed on each piece of first environment information and each piece of second environment information, so as to obtain target feature information of the M pieces of first environment information and the M pieces of second environment information; in step S13, the target feature information of the L pieces of first environment information is fused, and the target feature information of the L pieces of second environment information is fused, so as to obtain the target fusion feature of the first target in the L pieces of first environment information and the target fusion feature of the second target in the L pieces of second environment information.
In one possible implementation manner, in step S14, the relevance between the first target and the second target may be predicted through a relevance prediction network, for example, whether the first target and the second target are the same target, the credibility of the first target and the second target, and the like, so as to obtain the relevance prediction result of the first target and the second target. The relevance prediction network may comprise, for example, a convolutional neural network, and the specific structure of the relevance prediction network is not limited by the present disclosure.
In one possible implementation, the associated prediction result includes at least one of a probability that the first target and the second target are the same target, a probability that the first target is an end target, a probability that the second target is a start target, and a confidence of the first target and the second target,
Wherein the end point target represents a target that is in the first environmental information but not in the second environmental information, and the start point target represents a target that is in the second environmental information but not in the first environmental information. For example, in an autonomous driving scenario, a vehicle target A may appear in the first environmental information at the first time and disappear in the second environmental information at the second time; the vehicle target A is then an end point target. A vehicle target B may be absent from the first environmental information at the first time and present in the second environmental information at the second time; the vehicle target B is then a start point target.
In a possible implementation manner, the association prediction result may include an association probability that the first target and the second target are the same target, and if the association probability is greater than or equal to a preset threshold, the first target and the second target may be considered as the same target; conversely, if the association probability is less than a preset threshold, the first target and the second target may be considered not to be the same target.
In one possible implementation, the associated prediction result may include confidence levels of the first target and the second target, i.e., the credibility of the first target and the second target. If the confidence is greater than or equal to a preset confidence threshold, the first target or the second target can be considered a real target; conversely, if the confidence is smaller than the preset confidence threshold, the first target or the second target is considered a false target and can be screened out of the detection result.
In one possible implementation, the associated prediction result may include a probability that the first target is an endpoint target, which may represent a target in the first environmental information but not in the second environmental information, such as a vehicle leaving the collection area at the second time. If the probability is greater than or equal to a preset endpoint probability threshold, the first target may be considered an endpoint target; conversely, if the probability is less than a preset endpoint probability threshold, the first target may be considered to be not an endpoint target.
In one possible implementation, the associated prediction result may include a probability that the second target is an origin target, which may represent a target in the second environmental information but not in the first environmental information, such as a vehicle present in the collection area at the second time. If the probability is greater than or equal to a preset starting point probability threshold, the second target can be considered as a starting point target; conversely, if the probability is less than a preset starting point probability threshold, the second target may be considered not to be a starting point target.
In a possible implementation manner, in order to better introduce global information, a sorting mechanism may be adopted to process the preliminary association prediction result before linear programming is performed. For example, a softmax function is applied to the preliminary association prediction result along two directions to obtain two feature maps, the two feature maps are combined by operations such as addition, and linear programming is then performed to obtain the final association prediction result. The present disclosure does not limit the specific manner of linear programming. By introducing global information, the common feature information in the pieces of environmental information can be better utilized, and the accuracy of the association prediction is improved.
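A simplified sketch of this refinement is given below. The softmax along the two directions follows the description above, while the use of scipy's `linear_sum_assignment` is only a stand-in for the unspecified linear programming step, and the threshold value is an assumption.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def refine_association(scores: np.ndarray) -> np.ndarray:
    """Apply softmax along the two directions of a P x Q score map and merge by addition."""
    def softmax(x, axis):
        e = np.exp(x - x.max(axis=axis, keepdims=True))
        return e / e.sum(axis=axis, keepdims=True)
    # one map normalises over the second-time candidates of each first-time target,
    # the other over the first-time candidates of each second-time target
    return softmax(scores, axis=1) + softmax(scores, axis=0)

def associate(scores: np.ndarray, prob_thresh: float = 0.5):
    """Stand-in for the linear-programming step: pick a globally consistent matching
    and keep only pairs whose refined score exceeds the preset threshold."""
    refined = refine_association(scores)
    rows, cols = linear_sum_assignment(-refined)   # maximise the total refined score
    return [(i, j) for i, j in zip(rows, cols) if refined[i, j] >= prob_thresh]
```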
In this way, the relevance between targets in the environmental information at different times can be predicted, thereby realizing target tracking.
Fig. 2 is a schematic diagram illustrating an application example of a processing procedure of the object detection method according to the embodiment of the present disclosure. As shown in fig. 2, the method may include a target detection step 21, a feature extraction and fusion step 22, an association detection step 23, and a linear programming step 24.
As shown in fig. 2, for illustration the sensors are taken to include a camera and a lidar, and the environmental information collected by the camera and the lidar is image information and point cloud information, respectively.
In the target detection step 21, the image information 212 at the first time and the second time and the point cloud information 213 at the first time and the second time may be respectively input into the target detection network 211 for detection, the areas where the targets in the image information 212 and the point cloud information 213 are located are determined (see the area frames in the image information 212 and the point cloud information 213), and the area information at the positions of the area frames is cropped out. The target detection network 211 may be a two-dimensional or three-dimensional target detection network, for example a two-dimensional or three-dimensional target detection neural network from the related art, and can detect two-dimensional or three-dimensional area information.
In the feature extraction and fusion step 22, the region information of each target can be input into the corresponding feature extraction network for feature extraction. As shown in fig. 2, area image information of an object in the image information may be input to the first feature extraction network 221; the regional point cloud information of the target in the point cloud information is input to the second feature extraction network 222.
The first feature extraction network 221 may be, for example, a convolutional neural network based on VGG16, for example including convolutional layers 1-5 (conv1-conv5) of VGG16 and a cross-layer pooling layer (skip-pooling). By superimposing and pooling the feature maps obtained from convolutional layers 1-5 of VGG16, P + Q D-dimensional feature vectors, denoted as D × (P + Q) in fig. 2, can be obtained as target feature information (which may be referred to as image features 224). The second feature extraction network 222 may be, for example, PointNet, and the last layer of the network may employ an MLP layer. By performing depth feature extraction on the area point cloud information of the targets in the point cloud information through the second feature extraction network 222 and processing the result with the MLP layer, P + Q D-dimensional feature vectors, denoted as D × (P + Q) in fig. 2, can be obtained as target feature information (which may be referred to as point cloud features 225). P and Q indicate the numbers of targets at the first time and the second time, respectively, in the image information 212 and the point cloud information 213, for example P = 8 and Q = 7. The first feature extraction network 221 may also be another type of convolutional neural network, and the present disclosure does not limit the network types of the first feature extraction network 221 and the second feature extraction network 222.
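The following sketch illustrates the general shape of such feature extraction branches, assuming PyTorch and torchvision are available; the skip-pooling of the first branch and the exact PointNet architecture of the second branch are simplified away, so this is an approximation rather than the networks 221 and 222 themselves.

```python
import torch
import torch.nn as nn
import torchvision

class ImageBranch(nn.Module):
    """Rough stand-in for the VGG16-based branch 221: conv features pooled to a D-dim
    vector per target crop (the cross-layer skip-pooling is omitted for brevity)."""

    def __init__(self, dim: int = 512):
        super().__init__()
        self.backbone = torchvision.models.vgg16().features  # conv1-conv5 stages
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.proj = nn.Linear(512, dim)

    def forward(self, crops: torch.Tensor) -> torch.Tensor:
        # crops: (P + Q, 3, H, W) region images of the detected targets, e.g. resized to 224x224
        x = self.pool(self.backbone(crops)).flatten(1)   # (P + Q, 512)
        return self.proj(x)                              # (P + Q, D)

class PointBranch(nn.Module):
    """PointNet-like stand-in for branch 222: shared per-point MLP, max pooling, MLP head."""

    def __init__(self, dim: int = 512):
        super().__init__()
        self.shared_mlp = nn.Sequential(
            nn.Conv1d(3, 64, 1), nn.ReLU(),
            nn.Conv1d(64, 256, 1), nn.ReLU(),
            nn.Conv1d(256, dim, 1))
        self.head = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, points: torch.Tensor) -> torch.Tensor:
        # points: (P + Q, 3, num_points) region point clouds inside each target frustum
        global_feat = self.shared_mlp(points).max(dim=2).values  # (P + Q, dim)
        return self.head(global_feat)                            # (P + Q, D)
```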
In this example, the fusion network 223 may be used to fuse the image information and the target feature information of the point cloud information to obtain target fusion features, that is, P + Q D-dimensional feature vectors.
In the association detection step 23, three groups of feature vectors, namely the image features 224, the point cloud features 225, and the fusion features, may be combined into feature vectors 226 of size 3 × D × P and 3 × D × Q, and the feature vectors 226 are correlated to obtain the correlation matrix 231 (i.e., the feature vectors are assembled into a matrix) as the association feature between targets. The size of the correlation matrix is 3 × D × P × Q, where 3 denotes the number of feature groups (image features, point cloud features, and fusion features), D denotes the dimension of each feature vector (e.g., 512), P denotes the number of targets in the environmental information at the first time (e.g., P = 8), and Q denotes the number of targets in the environmental information at the second time (e.g., Q = 7). The correlation matrix 231 may be input into the relevance prediction network to obtain the relevance prediction result.
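A sketch of assembling the correlation matrix is shown below; the element-wise product used to combine a first-time feature with a second-time feature is an assumed operator, since the disclosure does not fix the combination operation.

```python
import torch

def build_correlation_matrix(img_feats, pc_feats, fused_feats, P: int, Q: int):
    """Stack the image, point cloud and fusion features (each of shape (D, P + Q)) into a
    3 x D x P x Q correlation matrix.  Entry (k, :, i, j) combines first-time target i with
    second-time target j; the element-wise product used here is only an assumed operator."""
    groups = []
    for feats in (img_feats, pc_feats, fused_feats):
        first, second = feats[:, :P], feats[:, P:P + Q]          # (D, P) and (D, Q)
        groups.append(first.unsqueeze(2) * second.unsqueeze(1))  # (D, P, Q)
    return torch.stack(groups, dim=0)                            # (3, D, P, Q)
```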
In this example, as shown in fig. 2, the relevance prediction network includes a relevance degree prediction sub-network 232, a confidence prediction sub-network 233, and a start point and end point prediction sub-network 234, which are respectively used for predicting the probability (also referred to as the relevance degree) that a first target at the first time and a second target at the second time are the same target, the confidence of the first target and the second target, the probability that the first target is an end point target, and the probability that the second target is a start point target. The relevance degree prediction sub-network 232, the confidence prediction sub-network 233, and the start point and end point prediction sub-network 234 may each include a convolutional layer and a softmax layer, and the network structure of each sub-network is not limited by the present disclosure.
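The sketch below shows one plausible form of such a sub-network (convolutional layers followed by softmax); the channel arrangement and layer sizes are assumptions, and the confidence and start/end sub-networks would differ only in output shape and normalization direction.

```python
import torch
import torch.nn as nn

class AffinityHead(nn.Module):
    """One plausible sub-network: 1x1 convolutions reduce the 3*D channels of the
    correlation matrix to a single P x Q map, which is normalised with softmax."""

    def __init__(self, dim: int = 512, softmax_dim: int = 2):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3 * dim, dim, kernel_size=1), nn.ReLU(),
            nn.Conv2d(dim, 1, kernel_size=1))
        self.softmax_dim = softmax_dim

    def forward(self, corr: torch.Tensor) -> torch.Tensor:
        # corr: (3, D, P, Q) -> treat the 3*D values of each (i, j) pair as channels
        _, dim, p, q = corr.shape
        logits = self.conv(corr.reshape(1, 3 * dim, p, q)).squeeze(1)  # (1, P, Q)
        return torch.softmax(logits, dim=self.softmax_dim)
```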
As shown in fig. 2, after processing, the relevance prediction results of each target can be obtained, including the relevance degree prediction result, the confidence prediction result, the start point prediction result, and the end point prediction result.
In the linear programming step 24, the relevance degree prediction result, the confidence prediction result, the start point prediction result, and the end point prediction result may be processed through the linear programming network 241 (including, for example, a softmax layer), and linear programming is then performed, retaining only the predictions whose probability exceeds a preset threshold, so as to obtain the final relevance degree prediction result Ylink, the confidence prediction result Ytrue, the start point prediction result Ynew, and the end point prediction result Yend.
Fig. 3 shows a flow chart of a neural network training method according to an embodiment of the present disclosure, as shown in fig. 3, the neural network training method includes:
In step S31, a training data set is obtained, the training data set including: environmental information of the smart device respectively collected by X sensors arranged on the smart device, and label information of targets in the environmental information, where X is a positive integer greater than or equal to 2;
In step S32, the neural network-based feature extraction module performs target feature extraction on the environmental information collected by each sensor, respectively, to obtain target feature information of each environmental information;
In step S33, fusing each target feature information based on the feature fusion module of the neural network to obtain a target fusion feature;
In step S34, the prediction module based on the neural network performs target detection according to each target feature information and target fusion feature, respectively, to obtain X +1 detection results of the target;
The modules included in the neural network may be built by stacking one or more of convolutional layers, pooling layers, nonlinear layers, or other types of network units according to a certain structure, and the structure and composition of each module are not limited by the embodiments of the present disclosure. In this step, the prediction module performs target detection independently according to each type of target feature information to obtain a target detection result, so that X types of target feature information correspond to X target detection results; in addition, the prediction module also performs target detection according to the target fusion feature of the pieces of target feature information, which yields one further target detection result.
In step S35, the network parameters of the neural network are adjusted according to the differences between the X + 1 detection results of the target and the label information.
Each of the X + 1 target detection results is compared with the label information of the targets, and the difference between each target detection result and the label information is obtained by calculating a loss value between the detection result and the label information with a preset loss function; the differences are back-propagated through the neural network, and network parameters of the neural network such as convolution kernels and weights are adjusted, thereby completing one iteration of training of the neural network. After the network parameters are adjusted, a new batch of training data is input into the neural network to perform another iteration of training in a similar manner. The training is iterated repeatedly until it meets a preset training completion condition, for example the difference falls within an allowable tolerance range, the loss value is smaller than a set threshold, or the number of training iterations exceeds a set threshold, so as to obtain the trained neural network. The trained neural network thus learns not only the ability to perform accurate target detection from the data of a single sensor, but also the ability to perform accurate target detection from fused multi-sensor data.
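A schematic training iteration under these assumptions might look as follows; the model interface returning X per-sensor results plus one fused result, and the simple summation of losses, are illustrative choices rather than the claimed procedure.

```python
import torch

def train_step(model, optimizer, batch, loss_fn):
    """One illustrative iteration: compute the X + 1 detection results (X per-sensor plus
    one fused), sum their losses against the label information, back-propagate and update."""
    env_inputs, labels = batch                     # X modalities and their annotations
    optimizer.zero_grad()
    per_sensor_results, fused_result = model(env_inputs)
    loss = loss_fn(fused_result, labels)
    for result in per_sensor_results:              # X single-sensor detection results
        loss = loss + loss_fn(result, labels)
    loss.backward()                                # back-propagate the differences
    optimizer.step()                               # adjust convolution kernels, weights, ...
    return loss.item()
```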
For example, the neural network may be trained prior to applying the neural network of the target detection method of embodiments of the present disclosure. The smart device may, for example, comprise any of a smart vehicle, a smart robot, a smart robotic arm. The smart device may be provided with a plurality of sensors, which may for example comprise at least one of a camera, a lidar, a millimeter wave radar. The present disclosure does not limit the type of smart device, the number and type of sensors.
In one possible implementation, a training data set may be obtained in step S31. X sensors can be arranged on the intelligent equipment, and X is a positive integer greater than or equal to 2. The training data set includes environmental information acquired by X sensors and labeled information of targets in the environmental information, such as image information acquired by a camera and area frames of targets labeled in the image information, point cloud information acquired by a laser radar and area frames of targets labeled in the point cloud information, and the like.
In one possible implementation, the targets in the environment information may include dynamic targets or static targets such as vehicles, pedestrians, road signs, traffic signs and the like in the environment, and the specific types of the targets are not limited by the disclosure. For visualization, different types of targets can be marked by area frames with different colors, and the same target in different environment information is assigned with the same identification ID. Machine labeling or manual labeling can be adopted, and labeling can also be carried out in other modes, and the method is not limited by the disclosure.
In a possible implementation manner, in step S32, the characteristic extraction module based on the neural network may perform characteristic extraction on the target for the environmental information collected by each sensor, respectively, to obtain target characteristic information of each environmental information. The feature extraction module may include a target detection network and a feature extraction network. The target in the environmental information can be detected through a target detection network, and the area where the target in the environmental information is located is determined; and performing feature extraction on the region where the target is located through a feature extraction network to obtain target feature information. The target detection network and the feature extraction network may both include a convolutional neural network, and the network structure of the target detection network and the feature extraction network is not limited by the disclosure.
In a possible implementation manner, in step S33, the target fusion feature may be obtained by fusing the pieces of target feature information through the feature fusion module of the neural network. That is, the target feature information of the X pieces of environmental information is subjected to processing such as concatenation or addition to obtain the target fusion feature.
In a possible implementation manner, in step S34, the neural network-based prediction module may perform target detection according to each target feature information and the target fusion feature, respectively, to obtain X +1 detection results of the target. The prediction module may include a plurality of target detection networks respectively corresponding to the target feature information and the target fusion feature, and X +1 detection results of the target may be obtained by respectively detecting the target of each target feature information and the target fusion feature.
In a possible implementation manner, for the environment information at one time, the X +1 detection results may respectively include detection results of an accurate position, a target type, and the like of a target in the environment information, and for the environment information at a plurality of times, the X +1 detection results may respectively include prediction results of a correlation between targets in the environment information at the plurality of times. The present disclosure does not limit the type of the detection result.
In a possible implementation manner, in step S35, the network loss of the neural network may be determined according to the differences between the X + 1 detection results of the target and the label information, and the network parameters of the neural network, such as the weight parameters in the convolutional layers and the pooling layers, may then be adjusted in the reverse direction according to the network loss. After multiple parameter adjustments, the trained neural network is obtained when the preset conditions are met.
In one possible implementation, the losses of the modules of the neural network can be respectively determined according to the differences between the X + 1 detection results of the target and the label information, so that the modules of the neural network are trained separately; alternatively, an overall loss of the modules of the neural network may be determined, so that the modules of the neural network are trained jointly. The present disclosure is not limited to a particular training mode.
According to the neural network training method of the embodiments of the present disclosure, target feature information can be extracted from the environmental information collected by the multiple sensors of the smart device, multiple detection results of the target are respectively determined from the multiple pieces of target feature information and their target fusion feature, and the neural network is trained according to the multiple detection results and the label information. The trained neural network learns not only the ability to accurately detect targets from the data of a single sensor, but also the ability to accurately detect targets from the fused data of multiple sensors, and can satisfy target detection under various meteorological or other conditions in complex scenes such as intelligent driving applications, thereby improving the reliability of target detection and the safety of driving.
In one possible implementation, step S32 may include: respectively carrying out target detection on the environmental information acquired by the X sensors, and determining the area information of the target in the environmental information of the X sensors;
And respectively extracting the characteristics of the information of each area to obtain target characteristic information of the environmental information of the X sensors.
For example, the target detection network may be used to detect the targets in the environmental information of the X sensors, respectively, and determine the area information of the area where the targets are located in the environmental information. The target detection network may be a two-dimensional or three-dimensional target detection network, and may be capable of detecting two-dimensional or three-dimensional area information. The present disclosure is not limited as to the type of target detection network.
In one possible implementation, each piece of environmental information may be processed by the same target detection network. For different environmental information, the target detection network may directly detect the area information where the target is located, or a conversion may be required.
For example, when a two-dimensional object detection network is used to detect an object in image information and point cloud information, the object detection network may directly detect a two-dimensional rectangular frame from the image information, and determine an area in the rectangular frame as area information where the object in the image information is located; for the point cloud information, the target detection network can detect a two-dimensional rectangular frame, the two-dimensional rectangular frame needs to be projected into a three-dimensional view cone, and an area in the view cone is determined as area information where the target is located, so that the next processing can be performed.
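A minimal sketch of the projection-based conversion for the point cloud case is given below: points whose image projection falls inside the detected 2D rectangle are taken as the region (frustum) point cloud. The camera-frame coordinates and the intrinsic matrix K are assumptions, and the camera extrinsics are omitted.

```python
import numpy as np

def frustum_points(points, K, box):
    """Keep the lidar points whose image projection falls inside a detected 2D box,
    i.e. the points inside the view cone (frustum) spanned by that box.
    points: (N, 3) in the camera frame, K: 3x3 intrinsic matrix, box: (x1, y1, x2, y2)."""
    z = points[:, 2]
    safe_z = np.where(np.abs(z) < 1e-9, 1e-9, z)   # avoid division by zero
    proj = (K @ points.T).T                        # homogeneous image coordinates
    u, v = proj[:, 0] / safe_z, proj[:, 1] / safe_z
    x1, y1, x2, y2 = box
    mask = (z > 0) & (u >= x1) & (u <= x2) & (v >= y1) & (v <= y2)
    return points[mask]
```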
for example, when a three-dimensional target detection network is used to detect targets in image information and point cloud information, the target detection network can directly detect a three-dimensional view cone from the point cloud information and determine the area within the view cone as the area information where the target is located; for the image information, the target detection network detects a three-dimensional view cone, which needs to be projected into a two-dimensional rectangular frame, and the area within the rectangular frame is determined as the area information where the target is located, for subsequent processing. The present disclosure does not limit the selection and training of the target detection network or the specific processing of the area frames.
In this way, the detected area information can be made to correspond so as to improve the consistency of detection.
In a possible implementation manner, after obtaining the region information where the target in each environmental information is located, feature extraction may be performed on each region information. Wherein, the feature extraction can be respectively carried out through the feature extraction network corresponding to each environment information. For example, for image information, a convolutional neural network can be used for feature extraction; for point cloud information, a deep neural network can be used for feature extraction. For other types of sensing data, corresponding feature extraction networks can be designed for feature extraction. The present disclosure does not limit the network structure of the feature extraction network corresponding to each category of sensing data. In this way, features can be extracted by the feature extraction network adapted to different categories, thereby extracting information in the sensed data more efficiently.
In one possible implementation, a pooling layer may be provided at the end of each feature extraction network, and the extracted features are converted by pooling into feature vectors (with a vector length of D, for example D = 512), which are used as the target feature information of each piece of environmental information.
In one possible implementation, the X sensors include a laser radar, the environment information collected by the laser radar includes point cloud information, and the area information of the point cloud information includes area point cloud information.
the step of extracting the features of the information of each area to obtain the target feature information of the environmental information of the X sensors may include: and carrying out depth feature extraction on the regional point cloud information of the point cloud information to obtain the target point cloud depth feature of the point cloud information.
That is, for the point cloud information collected by the laser radar, the three-dimensional view cone of the target in the point cloud information can be detected through the target detection network and used as the regional point cloud information. And performing depth feature extraction on the regional point cloud information through a deep neural network to obtain the target point cloud depth feature of the point cloud information. The present disclosure is not limited to a particular type of deep neural network.
By the method, the depth characteristics of the point cloud information can be extracted, and richer information can be obtained.
in one possible implementation, after obtaining the target characteristic information of the environmental information collected by the X sensors, the fusion may be performed in step S33. Wherein, the step S33 may include:
And performing any one of concatenation, addition, and attention-weight-based addition on the target feature information of the environmental information collected by the X sensors to obtain the target fusion feature.
For example, the target feature information may be fused in various ways, such as concatenation, addition, and attention-weight-based addition. For example, when the target feature information of each piece of environmental information is a D-dimensional feature vector, the X feature vectors of the X pieces of environmental information may be concatenated to obtain an X × D-dimensional feature vector; a point-wise convolution is then performed on the X × D-dimensional feature vector according to a preset weight, so that the dimension of the output vector is the same as that of the original feature vectors, and the target fusion feature is obtained. The preset weight may be the same as or different from the feature fusion weight, and the specific value of the preset weight is not limited in this disclosure.
In a possible implementation manner, the target feature information of X pieces of environment information may be directly added to obtain the target fusion feature. In a possible implementation manner, the attention weight of each target feature information may be determined according to the attention mechanism, and the target fusion features are obtained by adding each target feature information based on the attention weight.
In this way, the target fusion feature can be obtained and target detection can be performed according to it, which significantly improves the inference accuracy of the system.
In a possible implementation manner, the environment information collected by the X sensors at least includes X third environment information at a third time and X fourth environment information at a fourth time after the third time, and the training data set includes association information between a third target of the X third environment information and a fourth target of the X fourth environment information, where the method further includes:
a prediction module based on a neural network performs relevance prediction according to each target feature information and target fusion feature of the X pieces of third environment information and each target feature information and target fusion feature of the X pieces of fourth environment information to obtain a relevance prediction result of the third target and the fourth target;
and adjusting the network parameters of the neural network according to the difference between the correlation prediction results and the correlation information of the third target and the fourth target.
For example, target tracking may be performed for targets in the environmental information at multiple times. Respectively extracting the features of the target according to the X pieces of third environment information at the third moment and the X pieces of fourth environment information at the fourth moment to obtain X pieces of third environment information and X pieces of target feature information of the fourth environment information; and fusing the target characteristic information of the X pieces of third environment information, and fusing the target characteristic information of the X pieces of fourth environment information to obtain the target fusion characteristic of the third target in the X pieces of third environment information and the target fusion characteristic of the fourth target in the X pieces of fourth environment information.
In a possible implementation manner, a prediction module based on a neural network (also referred to as a relevance prediction network) may perform relevance prediction according to the target feature information and the target fusion feature of the X pieces of third environment information and the target feature information and the target fusion feature of the X pieces of fourth environment information, so as to obtain X +1 relevance prediction results of the third target and the fourth target, for example, whether the third target and the fourth target are the same target, the credibility of the third target and the fourth target, and the like.
In one possible implementation, the associated prediction result of the third target and the fourth target includes at least one of a probability that the third target and the fourth target are the same target, a probability that the third target is an end target, a probability that the fourth target is a start target, and a confidence of the third target and the fourth target,
Wherein the end point target represents a target in the third environment information but not in the fourth environment information, and the start point target represents a target in the fourth environment information but not in the third environment information.
For example, the correlation prediction result may include a correlation probability that the third target and the fourth target are the same target, and if the correlation probability is greater than or equal to a preset threshold, the third target and the fourth target may be considered as the same target; conversely, if the association probability is less than a preset threshold, the third target and the fourth target may be considered not to be the same target.
In one possible implementation, the associated prediction result may include a confidence level of the third target and the fourth target, i.e., a credibility level of the third target and the fourth target. If the confidence is greater than or equal to a preset confidence threshold, the third target or the fourth target can be considered as a real target; otherwise, if the confidence is smaller than the preset confidence threshold, the third target or the fourth target may be considered as a false target, and the target may be screened out from the detection result.
In one possible implementation, the associated prediction result may include a probability that the third target is an end target, which may represent a target in the third environment information but not in the fourth environment information, such as a vehicle leaving the collection area at the fourth time. If the probability is greater than or equal to a preset endpoint probability threshold, the third target may be considered an endpoint target; conversely, if the probability is less than a preset endpoint probability threshold, the third target may be considered to be not an endpoint target.
In one possible implementation, the associated predicted result may include a probability that the fourth target is an origin target, and the origin target may represent a target in the fourth environment information but not in the third environment information, such as a vehicle present in the collection area at the fourth time. If the probability is greater than or equal to a preset starting point probability threshold, the fourth target can be considered as a starting point target; conversely, if the probability is less than a preset starting point probability threshold, the fourth target may be considered not to be a starting point target.
In this way, the relevance between the targets in the environmental information at different moments can be predicted, so that the trajectory tracking of the targets is realized.
In one possible implementation, the training data set includes association information between the third targets of the X pieces of third environmental information and the fourth targets of the X pieces of fourth environmental information. The association information may include assigning the same identification ID to the same target in different environmental information and establishing an adjacency matrix of the same target across the environmental information, so as to indicate that the targets belong to the same tracking trajectory. The present disclosure does not limit the specific manner in which the adjacency matrix is obtained by conversion.
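For illustration, such an adjacency matrix could be derived from the identification IDs as follows; the variable names and the binary encoding are assumptions.

```python
import numpy as np

def adjacency_from_ids(ids_third, ids_fourth):
    """Ground-truth association matrix: entry (i, j) is 1 when target i at the third time
    and target j at the fourth time carry the same identification ID."""
    adj = np.zeros((len(ids_third), len(ids_fourth)), dtype=np.int64)
    for i, a in enumerate(ids_third):
        for j, b in enumerate(ids_fourth):
            adj[i, j] = int(a == b)
    return adj
```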
In a possible implementation manner, the network loss of the neural network may be determined according to a difference between the correlation prediction result and the correlation information of the third target and the fourth target, and then the network parameter of the neural network may be adjusted according to the network loss.
When the relevance prediction result includes the probability that the third target and the fourth target are the same target (referred to as the relevance degree prediction result), the probability that the third target is an end point target (referred to as the end point prediction result), the probability that the fourth target is a start point target (referred to as the start point prediction result), and the confidence of the third target and the fourth target (referred to as the confidence prediction result), the network loss L of the neural network can be expressed as:
L = Llink + αLstart + γLend + βLtrue (1)
In equation (1), Llink may represent the relevance degree prediction loss; Ltrue may represent the confidence prediction loss; Lstart may represent the start point prediction loss; Lend may represent the end point prediction loss; α, γ, and β may represent the weights of the start point prediction loss, the end point prediction loss, and the confidence prediction loss, respectively. The relevance degree prediction loss may adopt a smooth L1 loss or an L2 loss; the start point prediction loss, the end point prediction loss, and the confidence prediction loss may adopt cross-entropy loss functions. The selection of each loss function and the specific values of the weights are not limited in the present disclosure.
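An illustrative composition of this loss in PyTorch is sketched below; the dictionary keys, the use of the binary-with-logits form of cross-entropy, and the default weight values are assumptions.

```python
import torch.nn.functional as F

def network_loss(pred, target, alpha=1.0, gamma=1.0, beta=1.0):
    """L = Llink + alpha*Lstart + gamma*Lend + beta*Ltrue, with a smooth L1 loss for the
    relevance term and cross-entropy (here the binary-with-logits form) for the others."""
    l_link = F.smooth_l1_loss(pred["link"], target["link"])
    l_start = F.binary_cross_entropy_with_logits(pred["start"], target["start"])
    l_end = F.binary_cross_entropy_with_logits(pred["end"], target["end"])
    l_true = F.binary_cross_entropy_with_logits(pred["conf"], target["conf"])
    return l_link + alpha * l_start + gamma * l_end + beta * l_true
```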
By the method, the trained neural network can be obtained, and the target detection method disclosed by the embodiment of the disclosure can be used for detecting the target by applying the neural network trained by the neural network training method, so that the precision and reliability of target detection and tracking are improved.
According to the target detection method of the embodiments of the present disclosure, which is suitable for multi-sensor multi-target detection systems and/or multi-target tracking systems, a depth representation of the point cloud is introduced in the data association process of multi-target tracking, extracting richer information. Moreover, the feature information of each sensor is extracted by an independent neural network branch, so that each sensor can independently play its role and the reliability of the system is ensured. Meanwhile, the information of the multiple sensors is aggregated through a robust fusion module while the information of each single sensor is retained, further improving the accuracy and reliability of the system. In addition, during network training, the modules of the system can be jointly optimized in an end-to-end manner.
The target detection method can be applied to scenarios such as automatic driving systems and driver assistance systems. During driving, an automatic driving system can input multi-sensor information such as point clouds and images into the tracking system to obtain the tracking trajectories of targets, helping the automatic driving decision system decide on its behavior. A driver assistance system can input multi-sensor information such as point clouds and images into the tracking system to obtain the tracking trajectories of targets, helping the driver understand the surrounding environment more comprehensively. After a certain sensor fails, the system can rely on the remaining sensors to continue tracking multiple targets, ensuring that the system still works stably. This scheme can produce results that are more accurate and more reliable than current multi-target tracking systems, helping the automatic driving system or the driver make better decisions.
It is understood that the above-mentioned method embodiments of the present disclosure can be combined with each other to form combined embodiments without departing from the principles and logic; due to space limitations, these combinations are not described in detail in the present disclosure. Those skilled in the art will appreciate that in the methods of the specific embodiments above, the specific order of execution of the steps should be determined by their functions and possible inherent logic.
In addition, the present disclosure also provides a target detection apparatus, an electronic device, a computer-readable storage medium, and a program, all of which can be used to implement any of the target detection methods provided by the present disclosure; for the corresponding technical solutions, reference may be made to the descriptions in the method sections, which are not repeated here.
Fig. 4 shows a block diagram of an object detection apparatus according to an embodiment of the present disclosure, which, as shown in fig. 4, includes:
The information acquisition module 41 is configured to acquire environment information of the intelligent device through N sensors arranged on the intelligent device, where N is a positive integer greater than or equal to 2; an extraction module 42, configured to perform feature extraction on the target respectively for the environmental information acquired by M sensors in the N sensors to obtain target feature information of the environmental information acquired by the M sensors, where M is a positive integer less than or equal to N; a fusion module 43, configured to fuse target feature information of the environmental information acquired by L sensors of the M sensors to obtain a target fusion feature, where L is a positive integer less than or equal to M; and a first result determining module 44, configured to determine a detection result of the target according to the target fusion feature.
In one possible implementation, the apparatus further includes: and the second result determining module is used for determining the detection result of the target according to the target characteristic information of the environmental information acquired by any one of the N sensors.
In one possible implementation, the apparatus further includes: the first screening module is used for screening out the environmental information acquired by the N-M sensors according to a first preset condition; and the second screening module is used for screening target characteristic information of the environmental information acquired by the M-L sensors according to a second preset condition.
In one possible implementation, the first preset condition includes at least one of: the sensor is in an abnormal working state, and the environmental information collected by the sensor does not meet the quality requirement.
In one possible implementation, the fusion module includes: the weight determining submodule is used for determining the feature fusion weight corresponding to each sensor in the L sensors; and the first fusion submodule is used for carrying out feature fusion according to the feature fusion weight corresponding to each sensor in the L sensors and the target feature information to obtain target fusion features.
In one possible implementation, the weight determining submodule is configured to: and determining the feature fusion weight corresponding to each sensor in the L sensors according to the meteorological condition and/or the light condition of the environment where the intelligent equipment is located.
In one possible implementation, the detection result includes a correlation prediction result; the environmental information collected by the L sensors at least includes L first environmental information at a first time and L second environmental information at a second time after the first time, and the first result determination module includes: and the association prediction sub-module is configured to predict, according to the target fusion feature of the first target in the L pieces of first environment information and the target fusion feature of the second target in the L pieces of second environment information, the association between the first target and the second target, so as to obtain an association prediction result of the first target and the second target.
in one possible implementation, the associated prediction result includes at least one of a probability that the first target and the second target are the same target, a probability that the first target is an end target, a probability that the second target is a start target, and a confidence of the first target and the second target, wherein the end target represents a target in the first environment information but not in the second environment information, and the start target represents a target in the second environment information but not in the first environment information.
In one possible implementation, the extraction module includes: the detection submodule is used for respectively carrying out target detection on the environmental information acquired by the M sensors and determining the area information where the target is located in the environmental information of the M sensors; and the extraction submodule is used for respectively extracting the characteristics of the information of each area to obtain the target characteristic information of the environmental information of the M sensors.
In one possible implementation, the M sensors include a laser radar, the environment information collected by the laser radar includes point cloud information, the area information of the point cloud information includes area point cloud information, and the extraction sub-module is configured to: and carrying out depth feature extraction on the regional point cloud information of the point cloud information to obtain the target point cloud depth feature of the point cloud information.
in one possible implementation, the fusion module includes: a second fusion submodule, configured to perform any one of concatenation, addition, and attention-weight-based addition on the target feature information of the environmental information collected by the L sensors to obtain the target fusion feature.
in a possible implementation manner, the smart device includes any one of a smart vehicle, a smart robot, and a smart mechanical arm, and the N sensors include at least one of a camera, a laser radar, and a millimeter-wave radar.
Fig. 5 shows a block diagram of a neural network training device according to an embodiment of the present disclosure, as shown in fig. 5, the neural network training device including:
A data set obtaining module 51, configured to obtain a training data set, where the training data set includes: respectively acquiring environmental information of the intelligent equipment and label information of a target in the environmental information through X sensors arranged on the intelligent equipment, wherein X is a positive integer greater than or equal to 2; the feature extraction module 52 is configured to perform feature extraction on the target respectively according to the environmental information acquired by each sensor, so as to obtain target feature information of each environmental information; a feature fusion module 53, configured to fuse each target feature information to obtain a target fusion feature; the prediction module 54 is configured to perform target detection according to each target feature information and the target fusion feature, respectively, to obtain X +1 detection results of the target; and the parameter adjusting module 55 is configured to adjust a network parameter of the neural network according to a difference between the target X +1 detection result and the labeled information.
In a possible implementation manner, the environmental information acquired by the X sensors at least includes X third environmental information at a third time and X fourth environmental information at a fourth time after the third time, the training data set includes correlation information between a third target of the X third environmental information and a fourth target of the X fourth environmental information, and the prediction module is further configured to perform correlation prediction according to each target feature information and target fusion feature of the X third environmental information and each target feature information and target fusion feature of the X fourth environmental information, so as to obtain a correlation prediction result of the third target and the fourth target; the parameter adjusting module is further configured to adjust a network parameter of the neural network according to a difference between the correlation prediction result and the correlation information of the third target and the fourth target.
In one possible implementation, the associated prediction result of the third target and the fourth target includes at least one of a probability that the third target and the fourth target are the same target, a probability that the third target is an end target, a probability that the fourth target is a start target, and a confidence of the third target and the fourth target, wherein the end target represents a target in the third environment information but not in the fourth environment information, and the start target represents a target in the fourth environment information but not in the third environment information.
In one possible implementation, the feature extraction module includes: the target detection submodule is used for respectively carrying out target detection on the environmental information acquired by the X sensors and determining the area information where the target is located in the environmental information of the X sensors; and the characteristic extraction submodule is used for respectively extracting the characteristics of the information of each area to obtain the target characteristic information of the environmental information of the X sensors.
In one possible implementation, the X sensors include a laser radar, the environment information collected by the laser radar includes point cloud information, the area information of the point cloud information includes area point cloud information, and the feature extraction sub-module is configured to: and carrying out depth feature extraction on the regional point cloud information of the point cloud information to obtain the target point cloud depth feature of the point cloud information.
In one possible implementation, the feature fusion module includes: a feature fusion submodule, configured to perform any one of concatenation, addition, and attention-weight-based addition on the target feature information of the environmental information collected by the X sensors to obtain the target fusion feature.
In a possible implementation manner, the smart device includes any one of a smart vehicle, a smart robot, and a smart mechanical arm, and the X sensors include at least one of a camera, a laser radar, and a millimeter-wave radar.
In some embodiments, functions of or modules included in the apparatus provided in the embodiments of the present disclosure may be used to execute the method described in the above method embodiments, and specific implementation thereof may refer to the description of the above method embodiments, and for brevity, will not be described again here.
Embodiments of the present disclosure also provide a computer-readable storage medium having stored thereon computer program instructions, which when executed by a processor, implement the above-mentioned method. The computer readable storage medium may be a non-volatile computer readable storage medium.
An embodiment of the present disclosure further provides an electronic device, including: a processor; a memory for storing processor-executable instructions; wherein the processor is configured to invoke the memory-stored instructions to perform the above-described method.
The electronic device may be provided as a terminal, server, or other form of device.
Fig. 6 illustrates a block diagram of an electronic device 800 in accordance with an embodiment of the disclosure. For example, the electronic device 800 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, a fitness device, a personal digital assistant, a vehicle-mounted device, or the like.
Referring to fig. 6, electronic device 800 may include one or more of the following components: processing component 802, memory 804, power component 806, multimedia component 808, audio component 810, input/output (I/O) interface 812, sensor component 814, and communication component 816.
The processing component 802 generally controls overall operation of the electronic device 800, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing components 802 may include one or more processors 820 to execute instructions to perform all or a portion of the steps of the methods described above. Further, the processing component 802 can include one or more modules that facilitate interaction between the processing component 802 and other components. For example, the processing component 802 can include a multimedia module to facilitate interaction between the multimedia component 808 and the processing component 802.
the memory 804 is configured to store various types of data to support operations at the electronic device 800. Examples of such data include instructions for any application or method operating on the electronic device 800, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 804 may be implemented by any type or combination of volatile or non-volatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
The power supply component 806 provides power to the various components of the electronic device 800. The power components 806 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the electronic device 800.
The multimedia component 808 includes a screen that provides an output interface between the electronic device 800 and a user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 808 includes a front facing camera and/or a rear facing camera. The front camera and/or the rear camera may receive external multimedia data when the electronic device 800 is in an operation mode, such as a shooting mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have a focal length and optical zoom capability.
The audio component 810 is configured to output and/or input audio signals. For example, the audio component 810 includes a Microphone (MIC) configured to receive external audio signals when the electronic device 800 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may further be stored in the memory 804 or transmitted via the communication component 816. In some embodiments, audio component 810 also includes a speaker for outputting audio signals.
The I/O interface 812 provides an interface between the processing component 802 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor component 814 includes one or more sensors for providing state assessments of various aspects of the electronic device 800. For example, the sensor component 814 may detect an open/closed state of the electronic device 800 and the relative positioning of components, such as the display and keypad of the electronic device 800. The sensor component 814 may also detect a change in the position of the electronic device 800 or of a component of the electronic device 800, the presence or absence of user contact with the electronic device 800, the orientation or acceleration/deceleration of the electronic device 800, and a change in the temperature of the electronic device 800. The sensor component 814 may include a proximity sensor configured to detect the presence of a nearby object without any physical contact. The sensor component 814 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor component 814 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 816 is configured to facilitate wired or wireless communication between the electronic device 800 and other devices. The electronic device 800 may access a wireless network based on a communication standard, such as WiFi, 2G or 3G, or a combination thereof. In an exemplary embodiment, the communication component 816 receives a broadcast signal or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 816 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the electronic device 800 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors or other electronic components for performing the above-described methods.
In an exemplary embodiment, a non-transitory computer-readable storage medium, such as the memory 804, is also provided that includes computer program instructions executable by the processor 820 of the electronic device 800 to perform the above-described methods.
The present disclosure may be systems, methods, and/or computer program products. The computer program product may include a computer-readable storage medium having computer-readable program instructions embodied thereon for causing a processor to implement various aspects of the present disclosure.
The computer-readable storage medium may be a tangible device that can retain and store instructions for use by an instruction execution device. The computer-readable storage medium may be, for example, but is not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. A non-exhaustive list of more specific examples of the computer-readable storage medium includes: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disc (DVD), a memory stick, a floppy disk, a mechanically encoded device such as punch cards or raised structures in a groove having instructions recorded thereon, and any suitable combination of the foregoing. A computer-readable storage medium, as used herein, is not to be construed as being a transitory signal per se, such as a radio wave or other freely propagating electromagnetic wave, an electromagnetic wave propagating through a waveguide or other transmission medium (e.g., a light pulse passing through a fiber-optic cable), or an electrical signal transmitted through a wire.
The computer-readable program instructions described herein may be downloaded to respective computing/processing devices from a computer-readable storage medium, or to an external computer or external storage device via a network, such as the Internet, a local area network, a wide area network, and/or a wireless network. The network may comprise copper transmission cables, optical transmission fibers, wireless transmission, routers, firewalls, switches, gateway computers, and/or edge servers. A network adapter card or network interface in each computing/processing device receives computer-readable program instructions from the network and forwards the computer-readable program instructions for storage in a computer-readable storage medium within the respective computing/processing device.
The computer program instructions for carrying out operations of the present disclosure may be assembler instructions, instruction set architecture (ISA) instructions, machine instructions, machine-dependent instructions, microcode, firmware instructions, state-setting data, or source or object code written in any combination of one or more programming languages, including an object-oriented programming language such as Smalltalk or C++, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, electronic circuitry, such as a programmable logic circuit, a field-programmable gate array (FPGA), or a programmable logic array (PLA), may execute the computer-readable program instructions by utilizing state information of the computer-readable program instructions to personalize the electronic circuitry, in order to implement aspects of the present disclosure.
Various aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.
These computer-readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer-readable program instructions may also be stored in a computer-readable storage medium that can direct a computer, programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer-readable medium storing the instructions comprises an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the blocks may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustrations, and combinations of blocks in the block diagrams and/or flowchart illustrations, can be implemented by special-purpose hardware-based systems that perform the specified functions or acts, or by combinations of special-purpose hardware and computer instructions.
Having described embodiments of the present disclosure, the foregoing description is intended to be exemplary rather than exhaustive, and is not limited to the disclosed embodiments. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein was chosen to best explain the principles of the embodiments, the practical application, or technical improvements over technologies found in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.

Claims (10)

1. A target detection method, comprising:
respectively acquiring environmental information of an intelligent device through N sensors arranged on the intelligent device, wherein N is a positive integer greater than or equal to 2;
respectively extracting features of a target from the environmental information acquired by M sensors among the N sensors to obtain target feature information of the environmental information acquired by the M sensors, wherein M is a positive integer less than or equal to N;
fusing the target feature information of the environmental information acquired by L sensors among the M sensors to obtain a target fusion feature, wherein L is a positive integer less than or equal to M; and
determining a detection result of the target according to the target fusion feature.
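The following Python sketch is a purely illustrative, non-limiting rendering of the pipeline recited in claim 1 (readings from N sensors, feature extraction for M of them, fusion over L of them, and a detection result from the fused feature). The placeholder extract_features and detect functions, and the use of a plain average as the fusion operation, are assumptions made for readability and are not taken from the disclosure.

import numpy as np

def extract_features(env_info):
    # Hypothetical per-sensor feature extractor; a real system would use a
    # sensor-specific neural network branch (e.g. an image or point-cloud backbone).
    return np.asarray(env_info, dtype=np.float32).mean(axis=0)

def detect(feature):
    # Hypothetical detection head: here it simply thresholds the fused feature.
    score = float(feature.mean())
    return {"target_present": score > 0.5, "score": score}

def multi_sensor_detection(env_infos):
    """env_infos: list of readings from N sensors (N >= 2)."""
    assert len(env_infos) >= 2
    # Extract target features from M (<= N) sensors; here M == N for simplicity.
    features = [extract_features(info) for info in env_infos]
    # Fuse features of L (<= M) sensors; a plain average stands in for the
    # learned fusion described in the claims.
    fused = np.mean(np.stack(features, axis=0), axis=0)
    return detect(fused)

# Example usage with two fake sensor readings.
print(multi_sensor_detection([np.random.rand(4, 8), np.random.rand(4, 8)]))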
2. The method of claim 1, further comprising:
determining a detection result of the target according to the target feature information of the environmental information acquired by any one of the N sensors.
3. The method of claim 1 or 2, further comprising:
screening out the environmental information acquired by N-M sensors of the N sensors according to a first preset condition; and
screening out the target feature information of the environmental information acquired by M-L sensors of the M sensors according to a second preset condition.
4. The method of claim 3, wherein the first preset condition comprises at least one of the following: a sensor is in an abnormal working state, or the environmental information collected by a sensor does not meet a quality requirement.
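As an illustration only, the screening of claims 3 and 4 could be sketched as below; the SensorReading fields and the fixed quality threshold are hypothetical stand-ins for whatever working-state flags and quality metrics a concrete system exposes.

from dataclasses import dataclass

@dataclass
class SensorReading:
    sensor_id: int
    working: bool          # False models an abnormal working state
    quality: float         # e.g. a signal-to-noise or completeness score in [0, 1]
    env_info: object       # raw environmental information

def screen_readings(readings, quality_threshold=0.6):
    """First preset condition: drop readings from sensors that are in an
    abnormal working state or whose environmental information fails the
    quality requirement (assumed here to be a simple threshold)."""
    kept, dropped = [], []
    for r in readings:
        if r.working and r.quality >= quality_threshold:
            kept.append(r)
        else:
            dropped.append(r.sensor_id)
    return kept, dropped

readings = [SensorReading(0, True, 0.9, "lidar frame"),
            SensorReading(1, False, 0.8, "camera frame"),
            SensorReading(2, True, 0.3, "radar frame")]
kept, dropped = screen_readings(readings)
print([r.sensor_id for r in kept], "dropped:", dropped)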
5. The method according to any one of claims 1 to 4, wherein fusing the target feature information of the environmental information acquired by the L sensors among the M sensors to obtain the target fusion feature comprises:
determining a feature fusion weight corresponding to each of the L sensors; and
performing feature fusion according to the feature fusion weight corresponding to each of the L sensors and the corresponding target feature information to obtain the target fusion feature.
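A minimal sketch of the weighted fusion in claim 5 follows; normalizing per-sensor scalar weights with a softmax and taking a weighted sum over the sensor axis is only one plausible reading and is not mandated by the claim.

import numpy as np

def fuse_features(features, raw_weights):
    """features: list of L equally-shaped feature arrays;
    raw_weights: one scalar weight per sensor (assumed here to be learnable)."""
    w = np.exp(raw_weights - np.max(raw_weights))
    w = w / w.sum()                       # normalize so the L weights sum to 1
    stacked = np.stack(features, axis=0)  # shape (L, ...)
    # Weighted sum over the sensor axis yields the target fusion feature.
    return np.tensordot(w, stacked, axes=1)

feats = [np.random.rand(16), np.random.rand(16), np.random.rand(16)]
fused = fuse_features(feats, raw_weights=np.array([0.2, 1.0, -0.5]))
print(fused.shape)   # (16,)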
6. A neural network training method, comprising:
acquiring a training data set, the training data set comprising environmental information of an intelligent device respectively acquired through X sensors arranged on the intelligent device and label information of a target in the environmental information, wherein X is a positive integer greater than or equal to 2;
respectively extracting, by a feature extraction module of the neural network, features of the target from the environmental information acquired by each sensor to obtain target feature information of each piece of environmental information;
fusing, by a feature fusion module of the neural network, the pieces of target feature information to obtain a target fusion feature;
performing, by a prediction module of the neural network, target detection according to each piece of target feature information and the target fusion feature, respectively, to obtain X+1 detection results of the target; and
adjusting network parameters of the neural network according to differences between the X+1 detection results of the target and the label information.
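Purely as an illustration of the supervision scheme in claim 6 (X per-sensor detection results plus one result from the fused feature, all compared against the label information), the PyTorch sketch below uses toy linear branches, random data, and a cross-entropy loss; these concrete choices are assumptions rather than details of the disclosure.

import torch
import torch.nn as nn

X, feat_dim, num_classes = 3, 32, 2   # toy sizes: X sensors, binary "target / no target"

extractors = nn.ModuleList([nn.Linear(64, feat_dim) for _ in range(X)])   # per-sensor feature extraction
fusion = nn.Linear(X * feat_dim, feat_dim)                                # feature fusion module
head = nn.Linear(feat_dim, num_classes)                                   # shared prediction module
params = list(extractors.parameters()) + list(fusion.parameters()) + list(head.parameters())
optimizer = torch.optim.SGD(params, lr=1e-2)
criterion = nn.CrossEntropyLoss()

env = [torch.randn(8, 64) for _ in range(X)]      # fake environmental information from X sensors
labels = torch.randint(0, num_classes, (8,))      # fake label information

feats = [ext(e) for ext, e in zip(extractors, env)]           # X pieces of target feature information
fused = fusion(torch.cat(feats, dim=1))                       # target fusion feature
preds = [head(f) for f in feats] + [head(fused)]              # X + 1 detection results

# Network parameters are adjusted from the differences between all X + 1
# detection results and the label information.
loss = sum(criterion(p, labels) for p in preds)
optimizer.zero_grad()
loss.backward()
optimizer.step()
print(float(loss))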
7. A target detection device, comprising:
an information acquisition module configured to respectively acquire environmental information of an intelligent device through N sensors arranged on the intelligent device, wherein N is a positive integer greater than or equal to 2;
an extraction module configured to respectively extract features of a target from the environmental information acquired by M sensors among the N sensors to obtain target feature information of the environmental information acquired by the M sensors, wherein M is a positive integer less than or equal to N;
a fusion module configured to fuse the target feature information of the environmental information acquired by L sensors among the M sensors to obtain a target fusion feature, wherein L is a positive integer less than or equal to M; and
a first result determination module configured to determine a detection result of the target according to the target fusion feature.
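To make the module decomposition of claim 7 concrete, the toy composition below wires an information acquisition module, an extraction module, a fusion module, and a result determination module into one object; every callable here is a hypothetical stand-in for a real sensor driver or network branch.

class TargetDetectionDevice:
    """Toy composition mirroring the module split of claim 7; each module is a
    plain callable rather than an actual neural network component."""

    def __init__(self, sensors, extractor, fuser, decider):
        self.sensors = sensors        # information acquisition module (N callables)
        self.extractor = extractor    # extraction module
        self.fuser = fuser            # fusion module
        self.decider = decider        # first result determination module

    def detect(self):
        env_infos = [read() for read in self.sensors]             # acquire from N sensors
        features = [self.extractor(info) for info in env_infos]   # per-sensor target features
        fused = self.fuser(features)                               # target fusion feature
        return self.decider(fused)                                 # detection result

# Example wiring with trivial stand-ins for the real modules.
device = TargetDetectionDevice(
    sensors=[lambda: [1.0, 2.0], lambda: [3.0, 4.0]],
    extractor=lambda info: sum(info) / len(info),
    fuser=lambda feats: sum(feats) / len(feats),
    decider=lambda fused: {"target_present": fused > 2.0, "score": fused},
)
print(device.detect())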
8. A neural network training device, comprising:
a data set acquisition module configured to acquire a training data set, the training data set comprising environmental information of an intelligent device respectively acquired through X sensors arranged on the intelligent device and label information of a target in the environmental information, wherein X is a positive integer greater than or equal to 2;
a feature extraction module configured to respectively extract features of the target from the environmental information acquired by each sensor to obtain target feature information of each piece of environmental information;
a feature fusion module configured to fuse the pieces of target feature information to obtain a target fusion feature;
a prediction module configured to perform target detection according to each piece of target feature information and the target fusion feature, respectively, to obtain X+1 detection results of the target; and
a parameter adjustment module configured to adjust network parameters of the neural network according to differences between the X+1 detection results of the target and the label information.
9. An electronic device, comprising:
a processor; and
a memory for storing processor-executable instructions;
wherein the processor is configured to invoke the instructions stored in the memory to perform the method of any one of claims 1 to 6.
10. A computer-readable storage medium having computer program instructions stored thereon, wherein the computer program instructions, when executed by a processor, implement the method of any one of claims 1 to 6.
CN201910816348.0A 2019-08-30 2019-08-30 Target detection method and device and neural network training method and device Active CN110543850B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910816348.0A CN110543850B (en) 2019-08-30 2019-08-30 Target detection method and device and neural network training method and device

Publications (2)

Publication Number Publication Date
CN110543850A true CN110543850A (en) 2019-12-06
CN110543850B CN110543850B (en) 2022-07-22

Family

ID=68711072

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910816348.0A Active CN110543850B (en) 2019-08-30 2019-08-30 Target detection method and device and neural network training method and device

Country Status (1)

Country Link
CN (1) CN110543850B (en)

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103557884A (en) * 2013-09-27 2014-02-05 杭州银江智慧城市技术集团有限公司 Multi-sensor data fusion early warning method for monitoring electric transmission line tower
CN106469315A (en) * 2016-09-05 2017-03-01 南京理工大学 Based on the multi-mode complex probe target identification method improving One Class SVM algorithm
CN106840242A (en) * 2017-01-23 2017-06-13 驭势科技(北京)有限公司 The sensor self-checking system and multi-sensor fusion system of a kind of intelligent driving automobile
CN107609522A (en) * 2017-09-19 2018-01-19 东华大学 A kind of information fusion vehicle detecting system based on laser radar and machine vision
CN108665487A (en) * 2017-10-17 2018-10-16 国网河南省电力公司郑州供电公司 Substation's manipulating object and object localization method based on the fusion of infrared and visible light
CN108663677A (en) * 2018-03-29 2018-10-16 上海智瞳通科技有限公司 A kind of method that multisensor depth integration improves target detection capabilities
CN108875674A (en) * 2018-06-29 2018-11-23 东南大学 A kind of driving behavior recognition methods based on multiple row fusion convolutional neural networks
CN109556615A (en) * 2018-10-10 2019-04-02 吉林大学 The driving map generation method of Multi-sensor Fusion cognition based on automatic Pilot
CN109532719A (en) * 2018-11-23 2019-03-29 中汽研(天津)汽车工程研究院有限公司 One kind being based on electric car combined of multi-sensor information
CN109614996A (en) * 2018-11-28 2019-04-12 桂林电子科技大学 The recognition methods merged based on the weakly visible light for generating confrontation network with infrared image
CN110018470A (en) * 2019-03-01 2019-07-16 北京纵目安驰智能科技有限公司 Based on example mask method, model, terminal and the storage medium merged before multisensor

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Valentin Vielzeuf et al.: "Multilevel Sensor Fusion With Deep Learning", IEEE Sensors Letters *

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111179247A (en) * 2019-12-27 2020-05-19 上海商汤智能科技有限公司 Three-dimensional target detection method, training method of model thereof, and related device and equipment
CN113177428A (en) * 2020-01-27 2021-07-27 通用汽车环球科技运作有限责任公司 Real-time active object fusion for object tracking
CN111340873A (en) * 2020-02-28 2020-06-26 广东工业大学 Method for measuring and calculating object minimum outer envelope size of multi-view image
CN111340873B (en) * 2020-02-28 2023-05-23 广东工业大学 Object minimum outer envelope size measuring and calculating method for multi-view image
CN112036653A (en) * 2020-09-07 2020-12-04 江苏金鸽网络科技有限公司 Fire risk early warning method and system based on Bayesian network
CN112098985A (en) * 2020-09-09 2020-12-18 杭州中芯微电子有限公司 UWB positioning method based on millimeter wave detection
CN112098985B (en) * 2020-09-09 2024-04-12 杭州中芯微电子有限公司 UWB positioning method based on millimeter wave detection
CN112733907A (en) * 2020-12-31 2021-04-30 上海商汤临港智能科技有限公司 Data fusion method and device, electronic equipment and storage medium
CN113495009A (en) * 2021-05-24 2021-10-12 柳州龙燊汽车部件有限公司 Quality detection method and system for matching manufacturing of carriage
CN114462536A (en) * 2022-02-09 2022-05-10 国网宁夏电力有限公司吴忠供电公司 Method and system for generating labeled data set in entity scene

Also Published As

Publication number Publication date
CN110543850B (en) 2022-07-22

Similar Documents

Publication Publication Date Title
CN110543850B (en) Target detection method and device and neural network training method and device
CN111339846B (en) Image recognition method and device, electronic equipment and storage medium
CN109829501B (en) Image processing method and device, electronic equipment and storage medium
CN111340766B (en) Target object detection method, device, equipment and storage medium
CN110378976B (en) Image processing method and device, electronic equipment and storage medium
CN110287874B (en) Target tracking method and device, electronic equipment and storage medium
CN112001321A (en) Network training method, pedestrian re-identification method, network training device, pedestrian re-identification device, electronic equipment and storage medium
CN112149740B (en) Target re-identification method and device, storage medium and equipment
CN110443366B (en) Neural network optimization method and device, and target detection method and device
CN111881956A (en) Network training method and device, target detection method and device and electronic equipment
CN111881827B (en) Target detection method and device, electronic equipment and storage medium
CN111104920B (en) Video processing method and device, electronic equipment and storage medium
JP2021512378A (en) Anchor determination method and equipment, electronic devices and storage media
CN113326768A (en) Training method, image feature extraction method, image recognition method and device
CN113841179A (en) Image generation method and device, electronic device and storage medium
US20220383517A1 (en) Method and device for target tracking, and storage medium
CN110543849A (en) detector configuration method and device, electronic equipment and storage medium
CN113066135A (en) Calibration method and device of image acquisition equipment, electronic equipment and storage medium
CN111523555A (en) Image processing method and device, electronic equipment and storage medium
CN113313115B (en) License plate attribute identification method and device, electronic equipment and storage medium
CN112598676B (en) Image segmentation method and device, electronic equipment and storage medium
CN112381858B (en) Target detection method, device, storage medium and equipment
CN111860074B (en) Target object detection method and device, and driving control method and device
CN111832338A (en) Object detection method and device, electronic equipment and storage medium
CN113496237A (en) Domain-adaptive neural network training and traffic environment image processing method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant