WO2023273060A1 - Dangerous action identifying method and apparatus, electronic device, and storage medium


Info

Publication number
WO2023273060A1
Authority
WO
WIPO (PCT)
Prior art keywords
occupant
image
human body
point
coordinates
Prior art date
Application number
PCT/CN2021/126895
Other languages
French (fr)
Chinese (zh)
Inventor
王飞
钱晨
Original Assignee
上海商汤临港智能科技有限公司
Priority date
Filing date
Publication date
Application filed by 上海商汤临港智能科技有限公司
Priority to JP2023544368A (published as JP2024506809A)
Publication of WO2023273060A1

Classifications

    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B60 - VEHICLES IN GENERAL
    • B60Q - ARRANGEMENT OF SIGNALLING OR LIGHTING DEVICES, THE MOUNTING OR SUPPORTING THEREOF OR CIRCUITS THEREFOR, FOR VEHICLES IN GENERAL
    • B60Q1/00Arrangement of optical signalling or lighting devices, the mounting or supporting thereof or circuits therefor
    • B60Q1/26Arrangement of optical signalling or lighting devices, the mounting or supporting thereof or circuits therefor the devices being primarily intended to indicate the vehicle, or parts thereof, or to give signals, to other traffic
    • B60Q1/50Arrangement of optical signalling or lighting devices, the mounting or supporting thereof or circuits therefor the devices being primarily intended to indicate the vehicle, or parts thereof, or to give signals, to other traffic for indicating other intentions or conditions, e.g. request for waiting or overtaking
    • B60Q1/52Arrangement of optical signalling or lighting devices, the mounting or supporting thereof or circuits therefor the devices being primarily intended to indicate the vehicle, or parts thereof, or to give signals, to other traffic for indicating other intentions or conditions, e.g. request for waiting or overtaking for indicating emergencies
    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B60 - VEHICLES IN GENERAL
    • B60Q - ARRANGEMENT OF SIGNALLING OR LIGHTING DEVICES, THE MOUNTING OR SUPPORTING THEREOF OR CIRCUITS THEREFOR, FOR VEHICLES IN GENERAL
    • B60Q9/00Arrangement or adaptation of signal devices not provided for in one of main groups B60Q1/00 - B60Q7/00, e.g. haptic signalling
    • E - FIXED CONSTRUCTIONS
    • E05 - LOCKS; KEYS; WINDOW OR DOOR FITTINGS; SAFES
    • E05F - DEVICES FOR MOVING WINGS INTO OPEN OR CLOSED POSITION; CHECKS FOR WINGS; WING FITTINGS NOT OTHERWISE PROVIDED FOR, CONCERNED WITH THE FUNCTIONING OF THE WING
    • E05F15/00Power-operated mechanisms for wings
    • E05F15/70Power-operated mechanisms for wings with automatic actuation
    • E05F15/73Power-operated mechanisms for wings with automatic actuation responsive to movement or presence of persons or objects
    • E - FIXED CONSTRUCTIONS
    • E05 - LOCKS; KEYS; WINDOW OR DOOR FITTINGS; SAFES
    • E05Y - INDEXING SCHEME ASSOCIATED WITH SUBCLASSES E05D AND E05F, RELATING TO CONSTRUCTION ELEMENTS, ELECTRIC CONTROL, POWER SUPPLY, POWER SIGNAL OR TRANSMISSION, USER INTERFACES, MOUNTING OR COUPLING, DETAILS, ACCESSORIES, AUXILIARY OPERATIONS NOT OTHERWISE PROVIDED FOR, APPLICATION THEREOF
    • E05Y2900/00Application of doors, windows, wings or fittings thereof
    • E05Y2900/50Application of doors, windows, wings or fittings thereof for vehicles
    • E05Y2900/53Type of wing
    • E05Y2900/55Windows

Definitions

  • the present disclosure relates to the technical field of smart car cabins, and in particular to a method and device for identifying dangerous actions, electronic equipment, and a storage medium.
  • Cabin intelligence is an important direction for the current development of the automotive industry.
  • Cabin intelligence covers multi-modal interaction, personalized services, and safety perception.
  • In terms of safety perception, smart vehicles aim to provide occupants with a safe cabin environment, so identifying dangerous actions of people in the cabin is of great significance to the safety of the occupants in the cabin.
  • the present disclosure provides a technical solution for identifying dangerous actions.
  • a method for identifying a dangerous action is provided, including:
  • the dangerous action represents an action of a preset body part sticking out of the vehicle window.
  • performing dangerous action recognition based on the image and the occupant's position information in the occupant detection result to obtain the dangerous action recognition result corresponding to the occupant includes:
  • a dangerous action recognition result corresponding to the occupant is obtained based on the occupant's position information in the occupant detection result and the dangerous action prediction information.
  • the position information of the occupant includes the coordinates of a human body center prediction point in the image, and the dangerous action prediction information includes the probability that the occupant's action belongs to each of N preset dangerous actions, where N is an integer greater than or equal to 1;
  • obtaining a dangerous action recognition result corresponding to the occupant based on the occupant's position information in the occupant detection result and the dangerous action prediction information includes:
  • the dangerous action identification result corresponding to the occupant corresponding to the human body center prediction point is obtained.
  • the occupant's position information includes the coordinates of a human body center prediction point in the image
  • performing occupant detection on the cabin to obtain occupant detection results in the cabin including:
  • the coordinates of the predicted point of the center of the human body in the image are determined.
  • the predicting the probability that a pixel in the image belongs to a human center point based on the first feature map corresponding to the image includes: based on the first feature map corresponding to the image, determining the coordinates of the first candidate point of the human body central point in the image and the probability that the first candidate point belongs to the human body central point;
  • the determining the coordinates of the human body center prediction point in the image based on the probability that the pixel point in the image belongs to the human body center point includes: determining the coordinates of the human body center prediction point in the image based on the coordinates of the first candidate point and the probability that the first candidate point belongs to the human body center point.
  • determining, based on the first feature map corresponding to the image, the coordinates of the first candidate point of the human body center point in the image and the probability that the first candidate point belongs to the human body center point includes:
  • a maximum pooling operation is performed based on the second feature map to obtain the coordinates of the first candidate point of the human body center point in the image and the probability that the first candidate point belongs to the human body center point.
  • the maximum pooling operation is performed based on the second feature map to obtain the coordinates of the first candidate point of the human body center point in the image and the probability that the first candidate point belongs to the human body center point, including:
  • a maximum pooling operation is performed on the third feature map to obtain the coordinates of the first candidate point of the human body center point in the image and the probability that the first candidate point belongs to the human body center point.
  • the maximum pooling operation is performed on the third feature map to obtain the coordinates of the first candidate point of the human body center point in the image and the probability that the first candidate point belongs to the human body center point, including: performing an overlapping maximum pooling operation on the third feature map to obtain the coordinates of the first candidate point of the human body center point in the image and the probability that the first candidate point belongs to the human body center point;
  • the determining the coordinates of the human body center prediction point in the image based on the coordinates of the first candidate point and the probability that the first candidate point belongs to the human body center point includes: merging the first candidate points with the same coordinates to obtain the coordinates of the second candidate point and the probability that the second candidate point belongs to the human body center point; and determining, according to the coordinates of the second candidate point and the probability that the second candidate point belongs to the human body center point, the coordinates of the human body center prediction point in the image.
  • the identifying dangerous actions in the cabin environment based on the image, and obtaining the dangerous action prediction information corresponding to the cabin includes:
  • the method further includes:
  • the preset body parts include at least one of the following: hands, arms, heads, feet, and legs.
  • the method further includes:
  • a prompt message is issued.
  • the sending of prompt information includes at least one of the following:
  • a device for identifying dangerous actions including:
  • the acquisition module is used to acquire the image of the cabin
  • An occupant detection module configured to perform occupant detection on the vehicle cabin based on the image, and obtain an occupant detection result of the vehicle cabin;
  • the first dangerous action identification module is configured to respond to the occupant detection result indicating that an occupant is detected, perform dangerous action identification based on the image and the occupant's position information in the occupant detection result, and obtain the corresponding danger of the occupant Action recognition results, wherein the dangerous action represents an action in which a predetermined body part sticks out of a car window.
  • the first dangerous action recognition module is configured to:
  • a dangerous action recognition result corresponding to the occupant is obtained based on the occupant's position information in the occupant detection result and the dangerous action prediction information.
  • the position information of the occupant includes the coordinates of a human body center prediction point in the image, and the dangerous action prediction information includes the probability that the occupant's action belongs to each of N preset dangerous actions, where N is an integer greater than or equal to 1;
  • the first dangerous action recognition module is used for:
  • the dangerous action identification result corresponding to the occupant corresponding to the human body center prediction point is obtained.
  • the occupant's position information includes the coordinates of a human body center prediction point in the image
  • the occupant detection module is used for:
  • the coordinates of the predicted point of the center of the human body in the image are determined.
  • the occupant detection module is used for:
  • the occupant detection module is used for:
  • a maximum pooling operation is performed based on the second feature map to obtain the coordinates of the first candidate point of the human body center point in the image and the probability that the first candidate point belongs to the human body center point.
  • the occupant detection module is used for:
  • a maximum pooling operation is performed on the third feature map to obtain the coordinates of the first candidate point of the human body center point in the image and the probability that the first candidate point belongs to the human body center point.
  • the occupant detection module is used for:
  • the first dangerous action recognition module is configured to:
  • the device further includes:
  • the second dangerous action recognition module is configured to respond to the occupant detection result indicating that an occupant is detected in the front seat area of the cabin, and perform dangerous actions based on the image and the occupant's position information in the occupant detection result. Action recognition, obtaining the dangerous action recognition result corresponding to the occupant.
  • the preset body parts include at least one of the following: hands, arms, heads, feet, and legs.
  • the device further includes:
  • a prompting module configured to issue prompting information in response to the dangerous action recognition result indicating that the occupant's action includes any preset dangerous action.
  • the prompt module is used for at least one of the following:
  • an electronic device comprising: one or more processors; a memory for storing executable instructions; wherein the one or more processors are configured to call the executable instructions stored in the memory to perform the above method.
  • a computer-readable storage medium on which computer program instructions are stored, and when the computer program instructions are executed by a processor, the above method is implemented.
  • a computer program product including computer-readable codes, or a non-volatile computer-readable storage medium bearing computer-readable codes; when the computer-readable codes run in an electronic device, the processor in the electronic device executes the above method.
  • the occupant detection is performed on the vehicle cabin based on the acquired image to obtain the occupant detection result of the vehicle cabin; in response to the occupant detection result indicating that an occupant is detected, dangerous action recognition is performed based on the image and the occupant's position information in the occupant detection result to obtain the dangerous action recognition result corresponding to the occupant, where the dangerous action represents an action of a preset body part sticking out of the vehicle window. Therefore, based on the position of the occupant, the action of the occupant extending the preset body part out of the vehicle window can be accurately recognized, thereby improving the safety of the occupant in the vehicle cabin.
  • FIG. 1 shows a flow chart of a method for identifying a dangerous action provided by an embodiment of the present disclosure.
  • Fig. 2 shows a schematic diagram of the occupant's head protruding out of the vehicle window in the image of the vehicle cabin in the method for identifying dangerous actions provided by an embodiment of the present disclosure.
  • FIG. 3 shows a schematic diagram of a passenger's hand or arm sticking out of a vehicle window in an image of a vehicle cabin in the method for identifying a dangerous action provided by an embodiment of the present disclosure.
  • Fig. 4 shows a schematic diagram of an application scenario of the method for identifying a dangerous action provided by the present disclosure.
  • Fig. 5 shows a block diagram of an apparatus for identifying a dangerous action provided by an embodiment of the present disclosure.
  • FIG. 6 shows a block diagram of an electronic device 800 provided by an embodiment of the present disclosure.
  • FIG. 7 shows a block diagram of an electronic device 1900 provided by an embodiment of the present disclosure.
  • the driver or other occupants may make dangerous actions of sticking their hands, head or other body parts out of the vehicle window, which may lead to serious accidents.
  • Embodiments of the present disclosure provide a method and device for identifying dangerous actions, electronic equipment, and a storage medium.
  • By acquiring an image of a vehicle cabin, performing occupant detection on the vehicle cabin based on the image to obtain the occupant detection result of the vehicle cabin, and, in response to the occupant detection result indicating that an occupant is detected, performing dangerous action recognition based on the image and the occupant's position information in the occupant detection result to obtain the dangerous action recognition result corresponding to the occupant, where the dangerous action represents an action in which a preset body part sticks out of the vehicle window, the movement of the occupant extending the preset body part out of the vehicle window can be accurately identified based on the position of the occupant, thereby improving the safety of the occupant in the vehicle cabin.
  • FIG. 1 shows a flow chart of a method for identifying a dangerous action provided by an embodiment of the present disclosure.
  • the method for identifying a dangerous action may be executed by a terminal device or a server or other processing device.
  • the terminal device may be a vehicle-mounted device, a user equipment (User Equipment, UE), a mobile device, a user terminal, a terminal, a cellular phone, a cordless phone, a personal digital assistant (Personal Digital Assistant, PDA), a handheld device, a computing device, or a wearable device, etc.
  • the on-board device can be the head unit (car machine), a domain controller, or a processor in the cabin, and can also be a device host used in a DMS (Driver Monitoring System) or an OMS (Occupant Monitoring System) to execute data processing operations on images, etc.
  • the method for identifying a dangerous action may be implemented by calling a computer-readable instruction stored in a memory by a processor. As shown in FIG. 1 , the method for identifying dangerous actions includes steps S11 to S13.
  • step S11 an image of the cabin is acquired.
  • step S12 occupant detection is performed on the vehicle cabin based on the image, and an occupant detection result of the vehicle cabin is obtained.
  • step S13 in response to the detection result of the occupant indicating that an occupant is detected, dangerous action recognition is performed based on the image and the position information of the occupant in the occupant detection result to obtain a dangerous action recognition result corresponding to the occupant,
  • the dangerous action refers to an action in which a predetermined body part sticks out of a vehicle window.
  • Embodiments of the present disclosure can be applied to any type of vehicle, such as passenger cars, taxis, ride-hailing vehicles, shared cars, and the like.
  • the embodiment of the present disclosure also does not limit the type of the vehicle, for example, it may be a compact type, an SUV (Sport Utility Vehicle, sport utility vehicle) or the like.
  • the image of the vehicle cabin can be acquired from the vehicle camera.
  • the vehicle-mounted camera may be any camera installed in the vehicle.
  • the number of on-vehicle cameras can be one or more than two. Vehicle cameras can be installed inside and/or outside the vehicle cabin.
  • the type of the vehicle-mounted camera can be a DMS camera, an OMS camera, an ordinary camera, and the like.
  • the image of the above-mentioned cabin can be an image of the cabin environment taken by a camera such as a DMS camera, an OMS camera, or an ordinary camera installed inside or outside the cabin, and the image at least includes image information of the seating area of the people in the vehicle and of the window area; that is, at least a part of the area where people sit and at least a part of the window area need to be included within the field of view of the above-mentioned camera.
  • human body detection and/or face detection may be performed on the vehicle cabin based on the image to obtain the human body detection result and/or face detection result of the vehicle cabin, and based on the The human body detection result and/or the human face detection result of the vehicle cabin obtains the occupant detection result of the vehicle cabin.
  • the human body detection result and/or face detection result of the vehicle cabin may be used as the occupant detection result of the vehicle cabin.
  • the human body detection result and/or face detection result of the vehicle cabin may be processed to obtain the occupant detection result of the vehicle cabin.
  • the occupant detection result when an occupant is detected, includes position information of the occupant.
  • in the case where one occupant is detected, the occupant detection result includes the occupant's position information; in the case where multiple occupants are detected, the occupant detection result may include the position information of each detected occupant.
  • the occupant's position information may be represented by the coordinates of any one point or multiple points of the occupant, and/or the occupant's position information may be represented by the position information of the occupant's bounding box.
  • the position information of the occupant may include coordinates of a predicted point of the occupant's body center.
  • the human body center prediction point may represent a predicted human body center point.
  • a human body center point may be a point representing a position of a human body, and the number of body center points of any human body may be one.
  • the center point of the human body may be a pixel point where the center of gravity of the human body is located, or may be a pixel point where any key point of the human body is located.
  • the position information of the occupant may include the coordinates of the predicted point of the occupant's body center and the size of the body frame, where the size of the body frame may include the length and width of the body frame.
  • any human body center prediction point may be the geometric center of the body frame to which the human body center prediction point belongs.
  • the position information of the occupant may include position information of a body frame.
  • the position information of the body frame may include the coordinates of any vertex of the body frame and the size of the body frame; for another example, the position information of the body frame may include the coordinates of four vertices of the body frame.
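The position representations described above (a human body center prediction point, optionally together with a body frame size or body frame vertices) can be gathered into a small data structure. The following is a minimal sketch assuming a Python implementation; the class name OccupantPosition and its field names are illustrative only and do not come from the patent text.

```python
# Illustrative sketch only: one possible container for the occupant position
# information described above. Names and types are assumptions, not the patent's.
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class OccupantPosition:
    center_point: Tuple[float, float]                                 # (x, y) of the human body center prediction point
    body_frame_size: Optional[Tuple[float, float]] = None             # (length, width) of the body frame
    body_frame_vertices: Optional[List[Tuple[float, float]]] = None   # e.g. four vertex coordinates of the body frame
```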
  • the position information of the occupant includes the coordinates of the human body center prediction point in the image; performing occupant detection on the cabin based on the image to obtain the occupant detection result of the cabin includes: predicting, based on the first feature map corresponding to the image, the probability that a pixel point in the image belongs to the human body center point; and determining, based on the probability that the pixel point in the image belongs to the human body center point, the coordinates of the human body center prediction point in the image.
  • human body detection can be performed on the vehicle cabin based on the image to obtain the human body detection result of the vehicle cabin, and the occupant detection result of the vehicle cabin can be obtained based on the human body detection result of the vehicle cabin .
  • coordinates of a human body center prediction point in the image may be obtained.
  • the coordinates of the human body center prediction point in the image and the size of the human body frame to which the human body center prediction point belongs can be obtained.
  • the image may be input into a backbone network, and feature extraction is performed on the image through the backbone network to obtain a first feature map corresponding to the image.
  • the backbone network may adopt network structures such as ResNet and MobileNet, which are not limited here.
  • a pre-designed first function may be used to perform feature extraction on the image to obtain the first feature map corresponding to the image.
  • the first feature map may be input into the first prediction sub-network, and the probability that the pixel in the image belongs to the center point of the human body is predicted through the first prediction sub-network.
  • the first feature map may be processed by using a pre-designed second function to obtain the probability that the pixel in the image belongs to the center point of the human body.
  • if the probability that a pixel point belongs to the human body center point is greater than a first threshold, the pixel point can be determined as the human body center prediction point; that is, the coordinates of the pixel point are determined as the coordinates of the human body center prediction point.
  • the first threshold may be 0.5.
  • those skilled in the art can flexibly set the size of the first threshold according to actual application scenario requirements, which is not limited here.
  • if the probability that a pixel point belongs to the human body center point is greater than the first threshold and the pixel point is among the M pixel points with the highest probability of belonging to the human body center point in the image, the pixel point can be determined as the human body center prediction point, that is, the coordinates of the pixel point can be determined as the coordinates of the human body center prediction point, where M is the preset maximum number of human body center prediction points and M is greater than or equal to 1.
  • the dangerous action recognition is performed based on the coordinates of the human body center prediction point in the image obtained in this implementation manner, which helps to improve the accuracy of dangerous action recognition.
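To make the prediction and selection steps above concrete, the following is a minimal sketch assuming a PyTorch implementation; the single-channel 1 × 1 convolution head and the helper names are assumptions for illustration, not the patent's actual first prediction sub-network.

```python
# Minimal sketch (assumed implementation): a head that maps the first feature map
# to a per-pixel probability of being a human body center point, followed by
# selection of at most M points whose probability exceeds the first threshold.
import torch
import torch.nn as nn

class CenterPointHead(nn.Module):
    def __init__(self, in_channels: int):
        super().__init__()
        # 1x1 convolution producing a single-channel center-point response map
        self.conv = nn.Conv2d(in_channels, 1, kernel_size=1)

    def forward(self, first_feature_map: torch.Tensor) -> torch.Tensor:
        # sigmoid converts the response into a per-pixel probability in [0, 1]
        return torch.sigmoid(self.conv(first_feature_map))

def select_center_points(prob_map: torch.Tensor, first_threshold: float = 0.5, max_points: int = 8):
    """Return (y, x) coordinates and probabilities of human body center prediction points."""
    probs = prob_map.squeeze()                               # H x W probability map
    flat = probs.flatten()
    top = torch.topk(flat, k=min(max_points, flat.numel()))  # keep at most M candidates
    keep = top.values > first_threshold                      # apply the first threshold (e.g. 0.5)
    indices = top.indices[keep]
    ys = torch.div(indices, probs.shape[1], rounding_mode="floor")
    xs = indices % probs.shape[1]
    return torch.stack([ys, xs], dim=1), top.values[keep]
```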
  • the predicting the probability that a pixel in the image belongs to the center point of the human body based on the first feature map corresponding to the image includes: determining, based on the first feature map corresponding to the image, the coordinates of the first candidate point of the human body center point in the image and the probability that the first candidate point belongs to the human body center point; the determining the coordinates of the human body center prediction point in the image based on the probability that the pixel point in the image belongs to the human body center point includes: determining the coordinates of the human body center prediction point in the image based on the coordinates of the first candidate point and the probability that the first candidate point belongs to the human body center point.
  • based on the first feature map corresponding to the image, the probability that the pixels of the image belong to the human body center point can be predicted, and the first candidate points can be screened out from the pixel points of the image.
  • a candidate point may represent a pixel point screened out from the image and having a higher probability of belonging to the center point of the human body.
  • each pixel in the image can be used as a first candidate point, or, in another example, human body detection can also be performed based on the first feature map, and the pixel points in the detected human body frame can be used as the first candidate points.
  • in one example, if the probability that any first candidate point belongs to the human body center point is greater than the first threshold, the first candidate point can be determined as the human body center prediction point, that is, the coordinates of the first candidate point can be determined as the coordinates of the human body center prediction point.
  • in another example, if the probability that any first candidate point belongs to the human body center point is greater than the first threshold and the first candidate point is one of the M first candidate points with the highest probability of belonging to the human body center point in the image, the first candidate point can be determined as the human body center prediction point, that is, the coordinates of the first candidate point can be determined as the coordinates of the human body center prediction point.
  • in this way, based on the first feature map corresponding to the image, the coordinates of the first candidate point of the human body center point in the image and the probability that the first candidate point belongs to the human body center point are determined, and the coordinates of the human body center prediction point in the image are determined based on the coordinates of the first candidate point and the probability that the first candidate point belongs to the human body center point, thereby improving the accuracy of the determined human body center prediction point.
  • the determining the coordinates of the first candidate point of the human body center point in the image and the probability that the first candidate point belongs to the human body center point based on the first feature map corresponding to the image includes: performing a convolution operation on the first feature map corresponding to the image to obtain a second feature map corresponding to the image; and performing a maximum pooling operation based on the second feature map to obtain the coordinates of the first candidate point of the human body center point in the image and the probability that the first candidate point belongs to the human body center point.
  • one or more convolution operations may be performed on the first feature map to obtain the second feature map corresponding to the image.
  • a maximum pooling operation can be performed directly on the second feature map to obtain the coordinates of the first candidate point of the human body center point in the image and the probability that the first candidate point belongs to the human body center point, or the second feature map can be processed first and the maximum pooling operation then performed to obtain the coordinates of the first candidate point of the human body center point in the image and the probability that the first candidate point belongs to the human body center point.
  • the first candidate points can be accurately screened out from the pixel points of the image.
  • the maximum pooling operation is performed based on the second feature map to obtain the coordinates of the first candidate point of the human body center point in the image and the probability that the first candidate point belongs to the human body center point, including: performing activation processing on the second feature map to obtain a third feature map corresponding to the image; and performing a maximum pooling operation on the third feature map to obtain the coordinates of the first candidate point of the human body center point in the image and the probability that the first candidate point belongs to the human body center point. For example, sigmoid processing may be performed on the second feature map to convert the pixel values of the second feature map into values between 0 and 1.
  • other activation functions may also be used to activate the second feature map, which is not limited here.
  • the pixel value of the second feature map can be converted into a probability value, which can be used to represent the probability that the pixel point belongs to the center point of the human body.
  • performing the maximum pooling operation on the third feature map to obtain the coordinates of the first candidate point of the human body center point in the image and the probability that the first candidate point belongs to the human body center point includes: performing an overlapping maximum pooling operation on the third feature map to obtain the coordinates of the first candidate point of the human body center point in the image and the probability that the first candidate point belongs to the human body center point; determining the coordinates of the human body center prediction point in the image based on the coordinates of the first candidate point and the probability that the first candidate point belongs to the human body center point includes: merging the first candidate points with the same coordinates to obtain the coordinates of the second candidate point and the probability that the second candidate point belongs to the human body center point; and determining, according to the coordinates of the second candidate point and the probability that the second candidate point belongs to the human body center point, the coordinates of the human body center prediction point in the image.
  • in the overlapping maximum pooling operation, the size of the pooling window is P × P and the step size is Q, with P > Q, where both P and Q are integers greater than or equal to 1; for example, P equals 3 and Q equals 1.
  • the second candidate point can be obtained by merging the first candidate points with the same coordinates, that is, the second candidate point can represent the merging result of the first candidate point.
  • the number of the second candidate points is less than or equal to the number of the first candidate points, the number of the second candidate points is greater than or equal to 1, and the second candidate points do not include candidate points with the same coordinates.
  • the accuracy of human body center point detection can be improved by performing an overlapping maximum pooling operation on the third feature map; by merging the first candidate points with the same coordinates to obtain the coordinates of the second candidate point and the probability that the second candidate point belongs to the human body center point, and determining the coordinates of the human body center prediction point in the image according to the coordinates of the second candidate point and the probability that the second candidate point belongs to the human body center point, the accuracy of human body center point detection can be further improved and the efficiency of subsequent dangerous action recognition can be improved.
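The overlapping max-pooling and merging steps just described could be sketched roughly as follows, assuming a PyTorch implementation. Because the pooling window (P × P) is larger than the stride Q, neighbouring windows overlap and the same pixel can be returned several times as a first candidate point; merging candidates with identical coordinates yields the second candidate points. The function name and tensor layout are assumptions, not the patent's code.

```python
# Hedged sketch of the overlapping maximum pooling operation (window P x P, stride Q,
# P > Q) followed by merging first candidate points that share the same coordinates.
import torch
import torch.nn.functional as F

def second_candidate_points(third_feature_map: torch.Tensor, P: int = 3, Q: int = 1):
    # third_feature_map: 1 x 1 x H x W activated (sigmoid) probability map
    values, indices = F.max_pool2d(third_feature_map, kernel_size=P, stride=Q,
                                   return_indices=True)
    flat_idx = indices.flatten()                 # locations of the first candidate points
    flat_val = values.flatten()                  # their center-point probabilities
    # merge first candidate points with the same coordinates into second candidate points
    unique_idx, inverse = torch.unique(flat_idx, return_inverse=True)
    merged_prob = torch.zeros(unique_idx.shape[0], dtype=flat_val.dtype)
    merged_prob[inverse] = flat_val              # duplicates carry identical probabilities
    width = third_feature_map.shape[-1]
    ys = torch.div(unique_idx, width, rounding_mode="floor")
    xs = unique_idx % width
    return torch.stack([ys, xs], dim=1), merged_prob
```

A second candidate point whose merged probability exceeds the first threshold (for example 0.5) would then be kept as a human body center prediction point, consistent with the selection step described earlier.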
  • the maximum pooling operation is performed based on the second feature map to obtain the coordinates of the first candidate point of the human body center point in the image and the probability that the first candidate point belongs to the human body center point, including: performing a maximum pooling operation on the second feature map to obtain a fourth feature map; and performing activation processing on the fourth feature map to obtain the coordinates of the first candidate point of the human body center point in the image and the probability that the first candidate point belongs to the human body center point.
  • in response to the occupant detection result indicating that one occupant is detected, dangerous action recognition is performed based on the image and the occupant's position information in the occupant detection result, and the dangerous action recognition result corresponding to the occupant is obtained; in response to the occupant detection result indicating that multiple occupants have been detected, dangerous action recognition is performed based on the image and the position information of each occupant in the occupant detection result, and the dangerous action recognition result corresponding to each occupant is obtained.
  • the preset body parts include at least one of the following: hands, arms, heads, feet, and legs.
  • Fig. 2 shows a schematic diagram of the occupant's head protruding out of the vehicle window in the image of the vehicle cabin in the method for identifying dangerous actions provided by an embodiment of the present disclosure.
  • FIG. 3 shows a schematic diagram of a passenger's hand or arm sticking out of a vehicle window in an image of a vehicle cabin in the method for identifying a dangerous action provided by an embodiment of the present disclosure.
  • in this way, dangerous action recognition can be performed on the action of the occupant sticking at least one of the hand, arm, head, foot, and leg out of the vehicle window, and the corresponding dangerous action recognition result of the occupant can be obtained, so that the action of the occupant sticking at least one of the hand, arm, head, foot, and leg out of the vehicle window can be accurately identified based on the occupant's position, thereby improving the safety of the occupants in the cabin.
  • performing dangerous action recognition based on the image and the occupant's position information in the occupant detection result to obtain the dangerous action recognition result corresponding to the occupant includes: identifying dangerous actions in the cabin environment based on the image to obtain the dangerous action prediction information corresponding to the cabin; and, in response to the occupant detection result indicating that an occupant is detected, obtaining the dangerous action recognition result corresponding to the occupant based on the occupant's position information in the occupant detection result and the dangerous action prediction information.
  • the first feature map corresponding to the image can be input into the second prediction sub-network, and the dangerous action prediction information corresponding to the vehicle cabin can be obtained through the second prediction sub-network.
  • a pre-designed third function may be used to process the first feature map corresponding to the image to obtain dangerous action prediction information corresponding to the vehicle cabin.
  • the dangerous action prediction information may include prediction information that the occupant corresponding to at least one position in the image takes a dangerous action.
  • the dangerous action prediction information may include the prediction information of the occupant's dangerous action corresponding to each pixel in the image.
  • the occupant's position information includes the coordinates of the human body center prediction point in the image, and the dangerous action prediction information includes the probability that the occupant's action belongs to each of the N preset dangerous actions, where N is an integer greater than or equal to 1; in response to the occupant detection result indicating that an occupant is detected, obtaining the dangerous action recognition result corresponding to the occupant based on the occupant's position information in the occupant detection result and the dangerous action prediction information includes: in response to the occupant detection result indicating that there is a human body center prediction point, determining that an occupant is detected; obtaining, from the dangerous action prediction information, the probability that the occupant's action corresponding to the human body center prediction point belongs to each of the N preset dangerous actions, based on the coordinates of the human body center prediction point in the occupant detection result; and obtaining the dangerous action recognition result corresponding to the occupant according to the probability that the occupant's action corresponding to the human body center prediction point belongs to each of the N preset dangerous actions.
  • the dangerous action prediction information may include the probability that the occupant's action corresponding to all or part of the pixels in the image belongs to each of the N preset dangerous actions.
  • the dangerous action prediction information may include the probability that the occupant's action corresponding to each pixel in the image belongs to each of the N preset dangerous actions.
  • the dangerous action prediction information may be an H × W × N feature map or a three-dimensional array, where H represents the height of the image, and W represents the width of the image.
  • the probability that the occupant corresponding to the human body center prediction point belongs to each of the N preset dangerous actions can be obtained from the dangerous action prediction information, so that the occupant corresponding to the human body center prediction point can be obtained The corresponding dangerous action recognition results.
  • if the probability that the action of the occupant corresponding to any human body center prediction point belongs to any one of the N preset dangerous actions is greater than a second threshold, it can be determined that the dangerous action recognition result corresponding to the occupant corresponding to the human body center prediction point is that a dangerous action has occurred; if the probability that the action of the occupant corresponding to any human body center prediction point belongs to each of the N preset dangerous actions is less than or equal to the second threshold, it can be determined that the dangerous action recognition result corresponding to the occupant corresponding to the human body center prediction point is that no dangerous action has occurred.
  • the second threshold may be equal to 0.5.
  • the dangerous action recognition result corresponding to the occupant corresponding to each human body center prediction point in the image can be accurately determined.
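As a rough illustration of how the dangerous action recognition result could be read out at each human body center prediction point from an H × W × N prediction map and compared against the second threshold, consider the following sketch; the tensor layout, the helper name, and the result dictionary are assumptions for demonstration.

```python
# Illustrative sketch: for each human body center prediction point, read the N
# per-action probabilities from an H x W x N prediction map and apply the second
# threshold (e.g. 0.5) to decide whether a preset dangerous action has occurred.
import torch

def recognize_dangerous_actions(action_probs: torch.Tensor,    # H x W x N prediction map
                                center_points: torch.Tensor,   # K x 2 tensor of (y, x) coordinates
                                second_threshold: float = 0.5):
    results = []
    for y, x in center_points.tolist():
        per_action = action_probs[int(y), int(x)]               # N probabilities for this occupant
        if bool((per_action > second_threshold).any()):
            # at least one preset dangerous action exceeds the second threshold
            results.append({"dangerous": True, "action_index": int(per_action.argmax())})
        else:
            results.append({"dangerous": False, "action_index": None})
    return results
```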
  • the identifying dangerous actions in the cabin environment based on the image, and obtaining the dangerous action prediction information corresponding to the cabin, includes: performing a convolution operation and a full connection operation on the first feature map corresponding to the image, and then performing a classification operation to obtain the dangerous action prediction information corresponding to the cabin.
  • the convolution operation can be performed on the first feature map to obtain the fifth feature map; the full connection operation can be performed on the fifth feature map to obtain the sixth feature map; the sixth feature map can be classified to obtain Dangerous action prediction information corresponding to the cabin.
  • one or more convolution operations may be performed on the first feature map to obtain the fifth feature map. For example, two convolution operations may be performed on the first feature map to obtain the fifth feature map.
  • One or more than two full connection operations can be performed on the fifth feature map to obtain the sixth feature map.
  • a full connection operation can be performed on the fifth feature map to obtain the sixth feature map.
  • Those skilled in the art can flexibly determine the number of convolution operations and the number of full connection operations according to the requirements of actual application scenarios, which are not limited here.
  • by performing a convolution operation on the first feature map, deeper features of the image can be extracted, and the resulting fifth feature map can more accurately represent the features of dangerous actions in the image; by performing the full connection operation, the fitting ability of the network can be improved, so that the accuracy of the obtained dangerous action prediction information can be improved.
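A minimal sketch of one way the convolution, full connection, and classification operations above could be arranged is given below, assuming PyTorch; all layer sizes are placeholders, and treating the full connection as a per-location 1 × 1 convolution is one plausible reading rather than the patent's stated design.

```python
# Hedged sketch of a second prediction sub-network: convolution operations on the
# first feature map (fifth feature map), a per-location full connection implemented
# as a 1x1 convolution (sixth feature map), and a sigmoid classification layer
# producing an N-channel map of dangerous-action probabilities.
import torch
import torch.nn as nn

class DangerousActionHead(nn.Module):
    def __init__(self, in_channels: int, num_actions: int):
        super().__init__()
        self.conv = nn.Sequential(                  # convolution operations -> fifth feature map
            nn.Conv2d(in_channels, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.fc = nn.Conv2d(64, 64, kernel_size=1)  # full connection per location -> sixth feature map
        self.classifier = nn.Conv2d(64, num_actions, kernel_size=1)  # classification operation

    def forward(self, first_feature_map: torch.Tensor) -> torch.Tensor:
        x = torch.relu(self.fc(self.conv(first_feature_map)))
        # per-location probability of each of the N preset dangerous actions
        return torch.sigmoid(self.classifier(x))
```

The output of this sketch has the spatial size of the first feature map; obtaining an H × W × N map at image resolution, as in the example above, would additionally require upsampling, a detail the text does not spell out.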
  • the human body center point positioning branch obtains the second feature map through a further convolution operation on the first feature map, and performs a maximum pooling operation and activation processing on the basis of the second feature map to obtain the coordinates of the human body center points; the dangerous action branch performs convolution, full connection, and classification operations on the first feature map to obtain the action category information corresponding to each human body center point, so that the dangerous action detection results of each occupant in the cabin can be obtained.
  • human body center point positioning and dangerous action classification share the same feature extraction network, which is conducive to improving the reliability of the results and saving computing resources.
  • the method further includes: in response to the occupant detection result indicating that an occupant is detected in the front seat area of the cabin, performing dangerous action recognition based on the image and the occupant's position information in the occupant detection result, and obtaining the dangerous action recognition result corresponding to the occupant.
  • occupants detected in the front seating area of the vehicle cabin may include the driver and/or co-driver.
  • in this implementation, in response to the occupant detection result indicating that an occupant is detected in the front seat area of the cabin, dangerous action recognition is performed based on the image and the occupant's position information in the occupant detection result, and the corresponding dangerous action recognition result of the occupant is obtained, thereby helping to improve driving safety.
  • the method further includes: in response to the dangerous action recognition result indicating that the occupant's action includes any preset dangerous action, issuing prompt information.
  • in this way, when the dangerous action recognition result indicates that the occupant's action includes any preset dangerous action, a prompt message can be issued to give a safety warning, thereby helping to improve the safety of the occupants in the cabin.
  • the issuing of prompt information includes at least one of the following: controlling the voice interaction device in the vehicle to issue voice prompt information; issuing an instruction to control the raising or lowering of the window corresponding to the occupant; and issuing an instruction to turn on the double flashing (hazard) lights.
  • in response to the dangerous action recognition result indicating that the occupant's action includes any preset dangerous action, the voice interaction device in the vehicle may be controlled to issue a voice prompt.
  • for example, the voice interaction device in the car can be controlled to issue a voice prompt such as "Please do not stick your body out of the window".
  • the occupant can be prompted by voice, so that even if the occupant does not look at the display screen of the vehicle, the occupant can obtain the prompt information.
  • an instruction to control the raising or lowering of the window corresponding to the occupant who has the preset dangerous action may be issued .
  • if the occupant who has the preset dangerous action is the driver, the window corresponding to that occupant can be the window on the left side of the front row; if the occupant who has the preset dangerous action is the co-driver, the corresponding window can be the window on the right side of the front row; if the occupant who has the preset dangerous action is the left rear passenger, the corresponding window may be the window on the left side of the rear row; and if the occupant who has the preset dangerous action is the right rear passenger, the corresponding window can be the window on the right side of the rear row.
  • the effect of the reminder can be strengthened by issuing an instruction to control the raising or lowering of the window corresponding to the occupant who has the preset dangerous action, which helps the occupant to subconsciously Retract body parts sticking out of the car window.
  • an instruction to turn on the double flashing lights can be issued, which can have the effect of alerting nearby vehicles, thereby helping to improve the safety of the occupants in the cabin.
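The three prompt options above could be dispatched along the following lines. This is a purely illustrative sketch: the VehicleControls interface, its method names, and the seat-to-window mapping are hypothetical placeholders rather than a real in-vehicle API, and the left-hand-drive seat layout follows the example given above.

```python
# Purely illustrative: dispatching the prompt actions described above once a preset
# dangerous action is recognized. VehicleControls and its methods are hypothetical.
from typing import Protocol

class VehicleControls(Protocol):
    def play_voice_prompt(self, text: str) -> None: ...
    def raise_window(self, window: str) -> None: ...
    def turn_on_hazard_lights(self) -> None: ...

# assumed mapping from the acting occupant's seat to the corresponding window
SEAT_TO_WINDOW = {"driver": "front_left", "co_driver": "front_right",
                  "rear_left": "rear_left", "rear_right": "rear_right"}

def issue_prompts(controls: VehicleControls, seat: str, dangerous: bool) -> None:
    if not dangerous:
        return
    controls.play_voice_prompt("Please do not stick your body out of the window")
    controls.raise_window(SEAT_TO_WINDOW.get(seat, "front_left"))  # window of the acting occupant
    controls.turn_on_hazard_lights()                               # alert nearby vehicles
```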
  • Fig. 4 shows a schematic diagram of an application scenario of the method for identifying a dangerous action provided by the present disclosure.
  • an image of the cabin of a vehicle may be acquired.
  • the size of an image of a car cabin may be 640 × 480.
  • the image can be input into the backbone network, and feature extraction is performed on the image via the backbone network to obtain the first feature map, where the size of the first feature map can be 80 × 60 × C, C represents the number of channels of the first feature map, and C can be greater than or equal to 3.
  • the first feature map may be convolved through the first prediction sub-network to obtain the second feature map, where the size of the second feature map may be 80 × 60 × 3.
  • the 0th channel of the second feature map can be activated through the sigmoid function to obtain the third feature map.
  • a maximum pooling operation with a pooling window size of 3 × 3 and a step size of 1 can be performed on the third feature map to obtain the coordinates of the first candidate point of the human body center point in the image and the probability that the first candidate point belongs to the human body center point, wherein the number of the first candidate points may be 60 × 80.
  • the first candidate points with the same coordinates may be combined to obtain the coordinates of the second candidate point and the probability that the second candidate point belongs to the center point of the human body.
  • the second candidate point whose probability of belonging to the human body center point is greater than 0.5 may be determined as the body center prediction point.
  • the length of the human body frame corresponding to the human body center prediction point can be obtained from the first channel of the second feature map, and the width of the human body frame corresponding to the human body center prediction point can be obtained from the second channel of the second feature map.
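For the concrete channel layout just described (channel 0: center-point response activated by sigmoid; channel 1: body-frame length; channel 2: body-frame width), a decoding step might look roughly like the sketch below. It assumes PyTorch and a channel-first 3 × 60 × 80 tensor for the 80 × 60 × 3 second feature map; the function name is illustrative.

```python
# Sketch of decoding the second feature map described above: channel 0 holds the
# center-point response (sigmoid gives the probability), channel 1 the body-frame
# length, and channel 2 the body-frame width at each location.
import torch

def decode_second_feature_map(second_feature_map: torch.Tensor,  # 3 x 60 x 80 tensor
                              center_points: torch.Tensor):      # K x 2 tensor of (y, x) coordinates
    occupants = []
    for y, x in center_points.tolist():
        prob = torch.sigmoid(second_feature_map[0, int(y), int(x)])  # center-point probability
        length = second_feature_map[1, int(y), int(x)]               # body-frame length
        width = second_feature_map[2, int(y), int(x)]                # body-frame width
        occupants.append((float(prob), float(length), float(width)))
    return occupants
```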
  • the first feature map can be input into the second prediction sub-network, and the first feature map is subjected to a convolution operation and a full-connection operation through the second prediction sub-network, and then a classification operation is performed to obtain the dangerous action prediction information corresponding to the cabin, wherein the dangerous action prediction information may include the probability that the occupant's action corresponding to each pixel in the image belongs to each of the N preset dangerous actions.
  • the dangerous action prediction information may be a 640 × 480 × N feature map or a three-dimensional array.
  • the probability that the occupant corresponding to the human body center prediction point belongs to each of the N preset dangerous actions can be obtained from the dangerous action prediction information, so that the occupant corresponding to the human body center prediction point can be obtained The corresponding dangerous action recognition results.
  • the present disclosure also provides a dangerous action identification device, electronic equipment, computer-readable storage medium, and program, all of which can be used to implement any of the dangerous action identification methods provided in the present disclosure.
  • for the corresponding technical solutions and technical effects, refer to the corresponding descriptions in the method part; details are not repeated here.
  • Fig. 5 shows a block diagram of an apparatus for identifying a dangerous action provided by an embodiment of the present disclosure.
  • the identification device of the dangerous action includes:
  • Obtaining module 51 for obtaining the image of cabin
  • the occupant detection module 52 is configured to perform occupant detection on the vehicle cabin based on the image, and obtain an occupant detection result of the vehicle cabin;
  • the first dangerous action identification module 53 is configured to respond to the occupant detection result indicating that an occupant is detected, perform dangerous action identification based on the image and the occupant's position information in the occupant detection result, and obtain the occupant's corresponding A dangerous action recognition result, wherein the dangerous action represents an action in which a predetermined body part sticks out of a car window.
  • the first dangerous action recognition module 53 is configured to:
  • a dangerous action recognition result corresponding to the occupant is obtained based on the occupant's position information in the occupant detection result and the dangerous action prediction information.
  • the position information of the occupant includes the coordinates of a human body center prediction point in the image, and the dangerous action prediction information includes the probability that the occupant's action belongs to each of N preset dangerous actions, where N is an integer greater than or equal to 1;
  • the first dangerous action recognition module 53 is used for:
  • the dangerous action identification result corresponding to the occupant corresponding to the human body center prediction point is obtained.
  • the occupant's position information includes the coordinates of a human body center prediction point in the image
  • the occupant detection module 52 is used for:
  • the coordinates of the predicted point of the center of the human body in the image are determined.
  • the occupant detection module 52 is used for:
  • the coordinates of the predicted point of the human body center in the image are determined.
  • the occupant detection module 52 is used for:
  • a maximum pooling operation is performed based on the second feature map to obtain the coordinates of the first candidate point of the human body center point in the image and the probability that the first candidate point belongs to the human body center point.
  • the occupant detection module 52 is used for:
  • a maximum pooling operation is performed on the third feature map to obtain the coordinates of the first candidate point of the human body center point in the image and the probability that the first candidate point belongs to the human body center point.
  • the occupant detection module 52 is used for:
  • the first dangerous action recognition module 53 is configured to:
  • the device further includes:
  • the second dangerous action recognition module is configured to respond to the occupant detection result indicating that an occupant is detected in the front seat area of the cabin, and perform dangerous actions based on the image and the occupant's position information in the occupant detection result. Action recognition, obtaining the dangerous action recognition result corresponding to the occupant.
  • the preset body parts include at least one of the following: hands, arms, heads, feet, and legs.
  • the device further includes:
  • a prompting module configured to issue prompting information in response to the dangerous action recognition result indicating that the occupant's action includes any preset dangerous action.
  • the prompt module is used for at least one of the following:
  • the occupant detection is performed on the vehicle cabin based on the acquired image to obtain the occupant detection result of the vehicle cabin; in response to the occupant detection result indicating that an occupant is detected, dangerous action recognition is performed based on the image and the occupant's position information in the occupant detection result to obtain the dangerous action recognition result corresponding to the occupant, where the dangerous action represents an action of a preset body part sticking out of the vehicle window. Therefore, based on the position of the occupant, the action of the occupant extending the preset body part out of the vehicle window can be accurately recognized, thereby improving the safety of the occupant in the vehicle cabin.
  • the functions or modules included in the device provided by the embodiments of the present disclosure can be used to execute the methods described in the above method embodiments; for specific implementation and technical effects, refer to the descriptions of the above method embodiments, which, for brevity, are not repeated here.
  • An embodiment of the present disclosure also provides a computer-readable storage medium, on which computer program instructions are stored, and when the computer program instructions are executed by a processor, the foregoing method is implemented.
  • the computer-readable storage medium may be a non-volatile computer-readable storage medium, or may be a volatile computer-readable storage medium.
  • An embodiment of the present disclosure also proposes a computer program, including computer-readable codes; when the computer-readable codes run in an electronic device, a processor in the electronic device executes the above method.
  • An embodiment of the present disclosure also provides a computer program product, including computer-readable codes, or a non-volatile computer-readable storage medium carrying computer-readable codes; when the computer-readable codes are run in an electronic device, the processor in the electronic device executes the above method.
  • An embodiment of the present disclosure also provides an electronic device, including: one or more processors; a memory for storing executable instructions; wherein the one or more processors are configured to call the executable instructions stored in the memory to perform the above method.
  • Electronic devices may be provided as terminals, servers, or other forms of devices.
  • FIG. 6 shows a block diagram of an electronic device 800 provided by an embodiment of the present disclosure.
  • the electronic device 800 may be a terminal such as a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, a fitness device, or a personal digital assistant.
  • electronic device 800 may include one or more of the following components: processing component 802, memory 804, power supply component 806, multimedia component 808, audio component 810, input/output (I/O) interface 812, sensor component 814 , and the communication component 816.
  • the processing component 802 generally controls the overall operations of the electronic device 800, such as those associated with display, telephone calls, data communications, camera operations, and recording operations.
  • the processing component 802 may include one or more processors 820 to execute instructions to complete all or part of the steps of the above method. Additionally, processing component 802 may include one or more modules that facilitate interaction between processing component 802 and other components. For example, processing component 802 may include a multimedia module to facilitate interaction between multimedia component 808 and processing component 802 .
  • the memory 804 is configured to store various types of data to support operations at the electronic device 800 . Examples of such data include instructions for any application or method operating on the electronic device 800, contact data, phonebook data, messages, pictures, videos, and the like.
  • the memory 804 can be implemented by any type of volatile or non-volatile storage device or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic disk, or optical disk.
  • the power supply component 806 provides power to various components of the electronic device 800 .
  • Power components 806 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for electronic device 800 .
  • the multimedia component 808 includes a screen providing an output interface between the electronic device 800 and the user.
  • the screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from a user.
  • the touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the touch panel. The touch sensor may not only sense a boundary of a touch or swipe action, but also detect duration and pressure associated with the touch or swipe action.
  • the multimedia component 808 includes a front camera and/or a rear camera. When the electronic device 800 is in an operation mode, such as a shooting mode or a video mode, the front camera and/or the rear camera can receive external multimedia data. Each front camera and rear camera can be a fixed optical lens system or have focal length and optical zoom capability.
  • the audio component 810 is configured to output and/or input audio signals.
  • the audio component 810 includes a microphone (MIC), which is configured to receive external audio signals when the electronic device 800 is in operation modes, such as call mode, recording mode and voice recognition mode. Received audio signals may be further stored in memory 804 or sent via communication component 816 .
  • the audio component 810 also includes a speaker for outputting audio signals.
  • the I/O interface 812 provides an interface between the processing component 802 and a peripheral interface module, which may be a keyboard, a click wheel, a button, and the like. These buttons may include, but are not limited to: a home button, volume buttons, start button, and lock button.
  • Sensor assembly 814 includes one or more sensors for providing status assessments of various aspects of electronic device 800 .
  • the sensor component 814 can detect the on/off state of the electronic device 800 and the relative positioning of components, such as the display and keypad of the electronic device 800; the sensor component 814 can also detect a change in position of the electronic device 800 or a component of the electronic device 800, the presence or absence of user contact with the electronic device 800, the orientation or acceleration/deceleration of the electronic device 800, and temperature changes of the electronic device 800.
  • Sensor assembly 814 may include a proximity sensor configured to detect the presence of nearby objects in the absence of any physical contact.
  • Sensor assembly 814 may also include an optical sensor, such as a complementary metal-oxide-semiconductor (CMOS) or charge-coupled device (CCD) image sensor, for use in imaging applications.
  • the sensor component 814 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor or a temperature sensor.
  • the communication component 816 is configured to facilitate wired or wireless communication between the electronic device 800 and other devices.
  • the electronic device 800 can access wireless networks based on communication standards, such as wireless fidelity (Wi-Fi), second-generation mobile communication technology (2G), third-generation mobile communication technology (3G), fourth-generation mobile communication technology (4G)/long-term evolution (LTE) of universal mobile communication technology, fifth-generation mobile communication technology (5G), or a combination thereof.
  • the communication component 816 receives broadcast signals or broadcast related information from an external broadcast management system via a broadcast channel.
  • the communication component 816 also includes a near field communication (NFC) module to facilitate short-range communication.
  • the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, Infrared Data Association (IrDA) technology, Ultra Wide Band (UWB) technology, Bluetooth (BT) technology and other technologies.
  • electronic device 800 may be implemented by one or more application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field programmable gate arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components, for performing the above methods.
  • a non-volatile computer-readable storage medium such as the memory 804 including computer program instructions, which can be executed by the processor 820 of the electronic device 800 to implement the above method.
  • FIG. 7 shows a block diagram of an electronic device 1900 provided by an embodiment of the present disclosure.
  • electronic device 1900 may be provided as a server.
  • electronic device 1900 includes processing component 1922 , which further includes one or more processors, and a memory resource represented by memory 1932 for storing instructions executable by processing component 1922 , such as application programs.
  • the application programs stored in memory 1932 may include one or more modules each corresponding to a set of instructions.
  • the processing component 1922 is configured to execute instructions to perform the above method.
  • Electronic device 1900 may also include a power supply component 1926 configured to perform power management of electronic device 1900, a wired or wireless network interface 1950 configured to connect electronic device 1900 to a network, and an input-output (I/O) interface 1958 .
  • the electronic device 1900 can operate based on an operating system stored in the memory 1932, such as the Microsoft server operating system (Windows Server™), the graphical user interface-based operating system (Mac OS X™) introduced by Apple Inc., the multi-user, multi-process computer operating system (Unix™), the free and open-source Unix-like operating system (Linux™), the open-source Unix-like operating system (FreeBSD™), or the like.
  • a non-transitory computer-readable storage medium such as the memory 1932 including computer program instructions, which can be executed by the processing component 1922 of the electronic device 1900 to implement the above method.
  • the present disclosure can be a system, method and/or computer program product.
  • a computer program product may include a computer readable storage medium having computer readable program instructions thereon for causing a processor to implement various aspects of the present disclosure.
  • a computer readable storage medium may be a tangible device that can retain and store instructions for use by an instruction execution device.
  • a computer readable storage medium may be, for example, but is not limited to, an electrical storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing.
  • Computer-readable storage media include: portable computer diskettes, hard disks, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), static random access memory (SRAM), compact disc read-only memory (CD-ROM), digital versatile discs (DVD), memory sticks, floppy disks, mechanically encoded devices such as punch cards or raised structures in grooves with instructions stored thereon, and any suitable combination of the above.
  • computer-readable storage media are not to be construed as transient signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through waveguides or other transmission media (e.g., pulses of light through fiber optic cables), or transmitted electrical signals.
  • Computer-readable program instructions described herein may be downloaded from a computer-readable storage medium to a respective computing/processing device, or downloaded to an external computer or external storage device over a network, such as the Internet, a local area network, a wide area network, and/or a wireless network.
  • the network may include copper transmission cables, fiber optic transmission, wireless transmission, routers, firewalls, switches, gateway computers, and/or edge servers.
  • a network adapter card or a network interface in each computing/processing device receives computer-readable program instructions from the network and forwards the computer-readable program instructions for storage in a computer-readable storage medium in each computing/processing device .
  • Computer program instructions for performing the operations of the present disclosure may be assembly instructions, instruction set architecture (ISA) instructions, machine instructions, machine-dependent instructions, microcode, firmware instructions, state-setting data, or source or object code written in any combination of one or more programming languages, including object-oriented programming languages such as Smalltalk and C++, and conventional procedural programming languages such as the "C" language or similar programming languages.
  • Computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server.
  • the remote computer can be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or it can be connected to an external computer (for example, through the Internet using an Internet service provider).
  • an electronic circuit, such as a programmable logic circuit, a field programmable gate array (FPGA), or a programmable logic array (PLA), can be customized by utilizing state information of the computer-readable program instructions, and the electronic circuit can execute the computer-readable program instructions, thereby implementing various aspects of the present disclosure.
  • These computer-readable program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, when executed by the processor of the computer or other programmable data processing apparatus, produce an apparatus for implementing the functions/actions specified in one or more blocks of the flowcharts and/or block diagrams.
  • These computer-readable program instructions can also be stored in a computer-readable storage medium; these instructions cause computers, programmable data processing devices, and/or other devices to work in a specific way, so that the computer-readable medium storing the instructions comprises an article of manufacture that includes instructions for implementing various aspects of the functions/actions specified in one or more blocks of the flowcharts and/or block diagrams.
  • each block in a flowchart or block diagram may represent a module, a program segment, or a portion of instructions that includes one or more executable instructions for implementing the specified logical functions.
  • the functions noted in the block may occur out of the order noted in the figures. For example, two blocks in succession may, in fact, be executed substantially concurrently, or they may sometimes be executed in the reverse order, depending upon the functionality involved.
  • each block of the block diagrams and/or flowchart illustrations, and combinations of blocks in the block diagrams and/or flowchart illustrations can be implemented by a dedicated hardware-based system that performs the specified function or action , or may be implemented by a combination of dedicated hardware and computer instructions.
  • the computer program product can be specifically realized by means of hardware, software or a combination thereof.
  • the computer program product is embodied as a computer storage medium; in another optional embodiment, the computer program product is embodied as a software product, such as a Software Development Kit (SDK), and the like.

Landscapes

  • Engineering & Computer Science (AREA)
  • Mechanical Engineering (AREA)
  • Human Computer Interaction (AREA)
  • Image Analysis (AREA)
  • Fittings On The Vehicle Exterior For Carrying Loads, And Devices For Holding Or Mounting Articles (AREA)

Abstract

The present disclosure relates to a dangerous action identifying method and apparatus, an electronic device, and a storage medium. The method comprises: acquiring an image of a cabin; performing occupant detection on the cabin on the basis of the image to obtain an occupant detection result of the cabin; and in response to the occupant detection result indicating that an occupant is detected, performing dangerous action identification on the basis of the image and location information of the occupant in the occupant detection result to obtain a dangerous action identification result corresponding to the occupant, wherein a dangerous action represents an action that a preset body part extends out of a vehicle window.

Description

Method and device for identifying dangerous actions, electronic equipment and storage medium
This application claims the priority of the Chinese patent application submitted to the China Patent Office on June 30, 2021, with application number 202110735201.6 and entitled "Method and device for identifying dangerous actions, electronic equipment and storage medium", the entire content of which is incorporated into this application by reference.
Technical field
The present disclosure relates to the technical field of smart car cabins, and in particular to a method and device for identifying dangerous actions, electronic equipment, and a storage medium.
Background
At present, the automotive electronics industry is developing rapidly, providing a convenient and comfortable cabin environment for passengers. Cabin intelligence is an important direction of the current development of the automotive industry. Cabin intelligence covers multi-mode interaction, personalized services, safety perception, and other aspects. In terms of safety perception, intelligent vehicles aim to provide occupants with a safe cabin environment. Identifying dangerous actions of people in the cabin is of great significance to the safety of the occupants in the cabin.
Contents of the invention
The present disclosure provides a technical solution for identifying dangerous actions.
According to an aspect of the present disclosure, a method for identifying a dangerous action is provided, including:
acquiring an image of a vehicle cabin;
performing occupant detection on the vehicle cabin based on the image to obtain an occupant detection result of the vehicle cabin;
in response to the occupant detection result indicating that an occupant is detected, performing dangerous action recognition based on the image and the occupant's position information in the occupant detection result to obtain a dangerous action recognition result corresponding to the occupant, wherein the dangerous action represents an action of a preset body part sticking out of a vehicle window.
In a possible implementation manner, the performing dangerous action recognition based on the image and the occupant's position information in the occupant detection result in response to the occupant detection result indicating that an occupant is detected, to obtain the dangerous action recognition result corresponding to the occupant, includes:
identifying dangerous actions in the cabin environment based on the image to obtain dangerous action prediction information corresponding to the vehicle cabin;
in response to the occupant detection result indicating that an occupant is detected, obtaining the dangerous action recognition result corresponding to the occupant based on the occupant's position information in the occupant detection result and the dangerous action prediction information.
In a possible implementation manner, the occupant's position information includes the coordinates of a human body center prediction point in the image, and the dangerous action prediction information includes the probability that the occupant's action belongs to each of N preset dangerous actions, where N is an integer greater than or equal to 1;
the obtaining, in response to the occupant detection result indicating that an occupant is detected, the dangerous action recognition result corresponding to the occupant based on the occupant's position information in the occupant detection result and the dangerous action prediction information includes:
determining that an occupant is detected in response to the occupant detection result indicating that a human body center prediction point exists;
obtaining, from the dangerous action prediction information and based on the coordinates of the human body center prediction point in the occupant detection result, the probability that the action of the occupant corresponding to the human body center prediction point belongs to each of the N preset dangerous actions;
obtaining, according to the probability that the action of the occupant corresponding to the human body center prediction point belongs to each of the N preset dangerous actions, the dangerous action recognition result corresponding to the occupant corresponding to the human body center prediction point.
In a possible implementation manner, the occupant's position information includes the coordinates of a human body center prediction point in the image;
the performing occupant detection on the vehicle cabin based on the image to obtain the occupant detection result of the vehicle cabin includes:
predicting, based on a first feature map corresponding to the image, the probability that a pixel point in the image belongs to a human body center point;
determining the coordinates of the human body center prediction point in the image based on the probability that the pixel point in the image belongs to the human body center point.
In a possible implementation manner, the predicting, based on the first feature map corresponding to the image, the probability that a pixel point in the image belongs to a human body center point includes: determining, based on the first feature map corresponding to the image, the coordinates of a first candidate point of the human body center point in the image and the probability that the first candidate point belongs to the human body center point;
the determining the coordinates of the human body center prediction point in the image based on the probability that the pixel point in the image belongs to the human body center point includes: determining the coordinates of the human body center prediction point in the image based on the coordinates of the first candidate point and the probability that the first candidate point belongs to the human body center point.
In a possible implementation manner, the determining, based on the first feature map corresponding to the image, the coordinates of the first candidate point of the human body center point in the image and the probability that the first candidate point belongs to the human body center point includes:
performing a convolution operation on the first feature map corresponding to the image to obtain a second feature map corresponding to the image;
performing a maximum pooling operation based on the second feature map to obtain the coordinates of the first candidate point of the human body center point in the image and the probability that the first candidate point belongs to the human body center point.
In a possible implementation manner, the performing a maximum pooling operation based on the second feature map to obtain the coordinates of the first candidate point of the human body center point in the image and the probability that the first candidate point belongs to the human body center point includes:
performing activation processing on the second feature map to obtain a third feature map corresponding to the image;
performing a maximum pooling operation on the third feature map to obtain the coordinates of the first candidate point of the human body center point in the image and the probability that the first candidate point belongs to the human body center point.
In a possible implementation manner, the performing a maximum pooling operation on the third feature map to obtain the coordinates of the first candidate point of the human body center point in the image and the probability that the first candidate point belongs to the human body center point includes: performing an overlapping maximum pooling operation on the third feature map to obtain the coordinates of the first candidate point of the human body center point in the image and the probability that the first candidate point belongs to the human body center point;
the determining the coordinates of the human body center prediction point in the image based on the coordinates of the first candidate point and the probability that the first candidate point belongs to the human body center point includes: merging the first candidate points with the same coordinates to obtain the coordinates of a second candidate point and the probability that the second candidate point belongs to the human body center point; and determining the coordinates of the human body center prediction point in the image according to the coordinates of the second candidate point and the probability that the second candidate point belongs to the human body center point.
In a possible implementation manner, the identifying dangerous actions in the cabin environment based on the image to obtain the dangerous action prediction information corresponding to the vehicle cabin includes:
performing a convolution operation and a fully connected operation on the first feature map corresponding to the image, followed by a classification operation, to obtain the dangerous action prediction information corresponding to the vehicle cabin.
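As an illustrative sketch only, the convolution, fully connected and classification sequence could be arranged as in the following PyTorch code; the layer sizes, the feature map size, the number of preset dangerous actions (three), and the multi-label sigmoid classification are assumptions for the example, not details taken from the disclosure:

```python
import torch
import torch.nn as nn

class DangerousActionHead(nn.Module):
    """Hypothetical head: convolution -> fully connected -> classification."""
    def __init__(self, in_channels=256, feat_hw=(20, 20), num_actions=3):
        super().__init__()
        self.conv = nn.Conv2d(in_channels, 64, kernel_size=3, padding=1)
        self.fc = nn.Linear(64 * feat_hw[0] * feat_hw[1], num_actions)

    def forward(self, first_feature_map):
        x = torch.relu(self.conv(first_feature_map))   # convolution operation
        x = x.flatten(start_dim=1)                     # (B, 64*H*W)
        logits = self.fc(x)                            # fully connected operation
        return torch.sigmoid(logits)                   # classification: probability per preset dangerous action

# Example: a 256-channel first feature map of spatial size 20x20
head = DangerousActionHead()
probs = head(torch.randn(1, 256, 20, 20))
```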
In a possible implementation manner, the method further includes:
in response to the occupant detection result indicating that an occupant is detected in the front seat area of the vehicle cabin, performing dangerous action recognition based on the image and the occupant's position information in the occupant detection result to obtain the dangerous action recognition result corresponding to the occupant.
In a possible implementation manner, the preset body parts include at least one of the following: hands, arms, head, feet, and legs.
In a possible implementation manner, after the obtaining the dangerous action recognition result corresponding to the occupant, the method further includes:
issuing prompt information in response to the dangerous action recognition result indicating that the occupant's action includes any preset dangerous action.
In a possible implementation manner, the issuing of prompt information includes at least one of the following:
controlling a voice interaction device in the vehicle to issue voice prompt information;
issuing an instruction to control the raising or lowering of the vehicle window corresponding to the occupant performing the preset dangerous action;
issuing an instruction to turn on the double-flash (hazard) lights.
According to an aspect of the present disclosure, a device for identifying dangerous actions is provided, including:
an acquisition module, configured to acquire an image of a vehicle cabin;
an occupant detection module, configured to perform occupant detection on the vehicle cabin based on the image to obtain an occupant detection result of the vehicle cabin;
a first dangerous action recognition module, configured to, in response to the occupant detection result indicating that an occupant is detected, perform dangerous action recognition based on the image and the occupant's position information in the occupant detection result to obtain a dangerous action recognition result corresponding to the occupant, wherein the dangerous action represents an action of a preset body part sticking out of a vehicle window.
In a possible implementation manner, the first dangerous action recognition module is configured to:
identify dangerous actions in the cabin environment based on the image to obtain dangerous action prediction information corresponding to the vehicle cabin;
in response to the occupant detection result indicating that an occupant is detected, obtain the dangerous action recognition result corresponding to the occupant based on the occupant's position information in the occupant detection result and the dangerous action prediction information.
In a possible implementation manner, the occupant's position information includes the coordinates of a human body center prediction point in the image, and the dangerous action prediction information includes the probability that the occupant's action belongs to each of N preset dangerous actions, where N is an integer greater than or equal to 1;
the first dangerous action recognition module is configured to:
determine that an occupant is detected in response to the occupant detection result indicating that a human body center prediction point exists;
obtain, from the dangerous action prediction information and based on the coordinates of the human body center prediction point in the occupant detection result, the probability that the action of the occupant corresponding to the human body center prediction point belongs to each of the N preset dangerous actions;
obtain, according to the probability that the action of the occupant corresponding to the human body center prediction point belongs to each of the N preset dangerous actions, the dangerous action recognition result corresponding to the occupant corresponding to the human body center prediction point.
In a possible implementation manner, the occupant's position information includes the coordinates of a human body center prediction point in the image;
the occupant detection module is configured to:
predict, based on the first feature map corresponding to the image, the probability that a pixel point in the image belongs to a human body center point;
determine the coordinates of the human body center prediction point in the image based on the probability that the pixel point in the image belongs to the human body center point.
In a possible implementation manner, the occupant detection module is configured to:
determine, based on the first feature map corresponding to the image, the coordinates of a first candidate point of the human body center point in the image and the probability that the first candidate point belongs to the human body center point;
determine the coordinates of the human body center prediction point in the image based on the coordinates of the first candidate point and the probability that the first candidate point belongs to the human body center point.
In a possible implementation manner, the occupant detection module is configured to:
perform a convolution operation on the first feature map corresponding to the image to obtain a second feature map corresponding to the image;
perform a maximum pooling operation based on the second feature map to obtain the coordinates of the first candidate point of the human body center point in the image and the probability that the first candidate point belongs to the human body center point.
In a possible implementation manner, the occupant detection module is configured to:
perform activation processing on the second feature map to obtain a third feature map corresponding to the image;
perform a maximum pooling operation on the third feature map to obtain the coordinates of the first candidate point of the human body center point in the image and the probability that the first candidate point belongs to the human body center point.
In a possible implementation manner, the occupant detection module is configured to:
perform an overlapping maximum pooling operation on the third feature map to obtain the coordinates of the first candidate point of the human body center point in the image and the probability that the first candidate point belongs to the human body center point;
merge the first candidate points with the same coordinates to obtain the coordinates of a second candidate point and the probability that the second candidate point belongs to the human body center point;
determine the coordinates of the human body center prediction point in the image according to the coordinates of the second candidate point and the probability that the second candidate point belongs to the human body center point.
In a possible implementation manner, the first dangerous action recognition module is configured to:
perform a convolution operation and a fully connected operation on the first feature map corresponding to the image, followed by a classification operation, to obtain the dangerous action prediction information corresponding to the vehicle cabin.
In a possible implementation manner, the device further includes:
a second dangerous action recognition module, configured to, in response to the occupant detection result indicating that an occupant is detected in the front seat area of the vehicle cabin, perform dangerous action recognition based on the image and the occupant's position information in the occupant detection result to obtain the dangerous action recognition result corresponding to the occupant.
In a possible implementation manner, the preset body parts include at least one of the following: hands, arms, head, feet, and legs.
In a possible implementation manner, the device further includes:
a prompting module, configured to issue prompt information in response to the dangerous action recognition result indicating that the occupant's action includes any preset dangerous action.
In a possible implementation manner, the prompting module is configured to perform at least one of the following:
controlling a voice interaction device in the vehicle to issue voice prompt information;
issuing an instruction to control the raising or lowering of the vehicle window corresponding to the occupant performing the preset dangerous action;
issuing an instruction to turn on the double-flash (hazard) lights.
According to an aspect of the present disclosure, an electronic device is provided, including: one or more processors; and a memory for storing executable instructions; wherein the one or more processors are configured to call the executable instructions stored in the memory to perform the above method.
According to an aspect of the present disclosure, a computer-readable storage medium is provided, on which computer program instructions are stored; when the computer program instructions are executed by a processor, the above method is implemented.
According to an aspect of the present disclosure, a computer program product is provided, including computer-readable codes, or a non-volatile computer-readable storage medium carrying computer-readable codes; when the computer-readable codes run in an electronic device, a processor in the electronic device executes the above method.
In the embodiments of the present disclosure, an image of a vehicle cabin is acquired; occupant detection is performed on the vehicle cabin based on the image to obtain an occupant detection result of the vehicle cabin; and, in response to the occupant detection result indicating that an occupant is detected, dangerous action recognition is performed based on the image and the occupant's position information in the occupant detection result to obtain a dangerous action recognition result corresponding to the occupant, wherein the dangerous action represents an action of a preset body part sticking out of a vehicle window. In this way, the occupant's action of extending a preset body part out of the vehicle window can be accurately recognized based on the occupant's position, thereby improving the safety of occupants in the vehicle cabin.
It should be understood that the above general description and the following detailed description are only exemplary and explanatory, and do not limit the present disclosure.
Other features and aspects of the present disclosure will become apparent from the following detailed description of exemplary embodiments with reference to the accompanying drawings.
Description of drawings
The accompanying drawings are incorporated into and constitute a part of this specification. These drawings show embodiments consistent with the present disclosure, and are used together with the specification to explain the technical solution of the present disclosure.
FIG. 1 shows a flowchart of a method for identifying a dangerous action provided by an embodiment of the present disclosure.
FIG. 2 shows a schematic diagram, in an image of a vehicle cabin, of an occupant's head sticking out of a vehicle window in the method for identifying dangerous actions provided by an embodiment of the present disclosure.
FIG. 3 shows a schematic diagram, in an image of a vehicle cabin, of an occupant's hand or arm sticking out of a vehicle window in the method for identifying dangerous actions provided by an embodiment of the present disclosure.
FIG. 4 shows a schematic diagram of an application scenario of the method for identifying dangerous actions provided by the present disclosure.
FIG. 5 shows a block diagram of a device for identifying dangerous actions provided by an embodiment of the present disclosure.
FIG. 6 shows a block diagram of an electronic device 800 provided by an embodiment of the present disclosure.
FIG. 7 shows a block diagram of an electronic device 1900 provided by an embodiment of the present disclosure.
Detailed description
Various exemplary embodiments, features, and aspects of the present disclosure will be described in detail below with reference to the accompanying drawings. The same reference numerals in the drawings indicate elements with the same or similar functions. Although various aspects of the embodiments are shown in the drawings, the drawings are not necessarily drawn to scale unless specifically indicated.
The word "exemplary" is used exclusively herein to mean "serving as an example, embodiment, or illustration". Any embodiment described herein as "exemplary" is not necessarily to be construed as superior to or better than other embodiments.
The term "and/or" herein merely describes an association relationship between associated objects, indicating that three relationships may exist; for example, A and/or B may indicate three cases: A exists alone, both A and B exist, and B exists alone. In addition, the term "at least one" herein indicates any one of multiple items or any combination of at least two of multiple items; for example, including at least one of A, B, and C may indicate including any one or more elements selected from the set consisting of A, B, and C.
In addition, in order to better illustrate the present disclosure, numerous specific details are given in the following detailed description. Those skilled in the art should understand that the present disclosure can also be implemented without certain specific details. In some instances, methods, means, elements, and circuits that are well known to those skilled in the art have not been described in detail, so as to highlight the gist of the present disclosure.
While a vehicle is driving, the driver or another occupant may perform the dangerous action of sticking a hand, head, or other body part out of a vehicle window, which may lead to a serious accident.
Embodiments of the present disclosure provide a method and device for identifying dangerous actions, electronic equipment, and a storage medium. An image of a vehicle cabin is acquired; occupant detection is performed on the vehicle cabin based on the image to obtain an occupant detection result of the vehicle cabin; and, in response to the occupant detection result indicating that an occupant is detected, dangerous action recognition is performed based on the image and the occupant's position information in the occupant detection result to obtain a dangerous action recognition result corresponding to the occupant, wherein the dangerous action represents an action of a preset body part sticking out of a vehicle window. In this way, the occupant's action of extending a preset body part out of the vehicle window can be accurately recognized based on the occupant's position, thereby improving the safety of occupants in the vehicle cabin.
The method for identifying dangerous actions provided by the embodiments of the present disclosure will be described in detail below with reference to the accompanying drawings.
FIG. 1 shows a flowchart of the method for identifying a dangerous action provided by an embodiment of the present disclosure. In a possible implementation manner, the method for identifying a dangerous action may be executed by a terminal device, a server, or another processing device. The terminal device may be a vehicle-mounted device, user equipment (UE), a mobile device, a user terminal, a terminal, a cellular phone, a cordless phone, a personal digital assistant (PDA), a handheld device, a computing device, a wearable device, or the like. The vehicle-mounted device may be a head unit, a domain controller, or a processor in the vehicle cabin, and may also be a device host used to perform data processing operations such as image processing in a DMS (Driver Monitor System) or an OMS (Occupant Monitoring System). In some possible implementation manners, the method for identifying a dangerous action may be implemented by a processor calling computer-readable instructions stored in a memory. As shown in FIG. 1, the method for identifying a dangerous action includes steps S11 to S13.
In step S11, an image of the vehicle cabin is acquired.
In step S12, occupant detection is performed on the vehicle cabin based on the image to obtain an occupant detection result of the vehicle cabin.
In step S13, in response to the occupant detection result indicating that an occupant is detected, dangerous action recognition is performed based on the image and the occupant's position information in the occupant detection result to obtain a dangerous action recognition result corresponding to the occupant, wherein the dangerous action represents an action of a preset body part sticking out of a vehicle window.
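Purely for illustration (not part of the disclosure), the following Python sketch shows one way steps S11 to S13 could be wired together; the detector and recognizer callables, the result structure, and the action labels are hypothetical placeholders:

```python
import numpy as np

PRESET_DANGEROUS_ACTIONS = ["hand_out_of_window", "head_out_of_window", "leg_out_of_window"]  # assumed labels

def identify_dangerous_actions(image: np.ndarray, detect_occupants, recognize_actions):
    """Sketch of steps S11-S13: occupant detection, then dangerous action recognition per occupant."""
    # S12: occupant detection on the cabin image
    occupant_result = detect_occupants(image)          # e.g. list of {"center": (x, y), ...}
    if not occupant_result:                             # no occupant detected
        return []
    # S13: dangerous action recognition using the image and each occupant's position information
    results = []
    for occupant in occupant_result:
        probs = recognize_actions(image, occupant["center"])
        results.append({"center": occupant["center"],
                        "action_probs": dict(zip(PRESET_DANGEROUS_ACTIONS, probs))})
    return results
```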
The embodiments of the present disclosure can be applied to any type of vehicle, such as passenger cars, taxis, ride-hailing cars, shared cars, and the like. The embodiments of the present disclosure also do not limit the model of the vehicle; for example, it may be a compact car, an SUV (Sport Utility Vehicle), and so on. In the embodiments of the present disclosure, the image of the vehicle cabin may be acquired from a vehicle-mounted camera. The vehicle-mounted camera may be any camera installed on the vehicle. The number of vehicle-mounted cameras may be one, or two or more. The vehicle-mounted camera may be installed inside and/or outside the vehicle cabin. The type of the vehicle-mounted camera may be a DMS camera, an OMS camera, an ordinary camera, and the like.
The above image of the vehicle cabin may be an image of the cabin environment captured by a camera installed inside or outside the cabin, such as a DMS camera, an OMS camera, or an ordinary camera. The image contains at least the image information of the seating area of people in the vehicle and of the window area; that is, the field of view of the above camera needs to cover at least a part of the seating area and at least a part of the window area.
In the embodiments of the present disclosure, human body detection and/or face detection may be performed on the vehicle cabin based on the image to obtain a human body detection result and/or a face detection result of the vehicle cabin, and the occupant detection result of the vehicle cabin may be obtained based on the human body detection result and/or the face detection result of the vehicle cabin. For example, the human body detection result and/or the face detection result of the vehicle cabin may be used as the occupant detection result of the vehicle cabin. As another example, the human body detection result and/or the face detection result of the vehicle cabin may be processed to obtain the occupant detection result of the vehicle cabin.
In the embodiments of the present disclosure, when an occupant is detected, the occupant detection result includes the occupant's position information. For example, when one occupant is detected, the occupant detection result includes the position information of that occupant; when multiple occupants are detected, the occupant detection result may include the position information of each detected occupant.
In the embodiments of the present disclosure, the occupant's position information may be represented by the coordinates of any one point or any multiple points of the occupant, and/or the occupant's position information may be represented by the position information of the occupant's bounding box. In a possible implementation manner, the occupant's position information may include the coordinates of the occupant's human body center prediction point, where the human body center prediction point may represent a predicted human body center point. A human body center point may be a point used to represent the position of a human body, and the number of human body center points of any human body may be one. For example, the human body center point may be the pixel point where the center of gravity of the human body is located, or may be the pixel point where any key point of the human body is located. In another possible implementation manner, the occupant's position information may include the coordinates of the occupant's human body center prediction point and the size of the human body frame, where the size of the human body frame may include the length and width of the human body frame. In this implementation manner, any human body center prediction point may be the geometric center of the human body frame to which it belongs. In another possible implementation manner, the occupant's position information may include the position information of the human body frame. For example, the position information of the human body frame may include the coordinates of any vertex of the human body frame and the size of the human body frame; as another example, the position information of the human body frame may include the coordinates of the four vertices of the human body frame.
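For illustration, the alternative position representations described above could be carried in a structure such as the following Python sketch; the field names are assumptions, not terms from the disclosure:

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class OccupantPosition:
    """Possible ways to carry an occupant's position, per the alternatives above."""
    center_point: Tuple[float, float]                               # human body center prediction point (x, y)
    box_size: Optional[Tuple[float, float]] = None                  # optional human-body-frame (width, height)
    box_corners: Optional[Tuple[Tuple[float, float], ...]] = None   # or explicit corner coordinates

# An occupant described only by a predicted human body center point:
p = OccupantPosition(center_point=(412.0, 288.5))
```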
In a possible implementation manner, the occupant's position information includes the coordinates of a human body center prediction point in the image; the performing occupant detection on the vehicle cabin based on the image to obtain the occupant detection result of the vehicle cabin includes: predicting, based on a first feature map corresponding to the image, the probability that a pixel point in the image belongs to a human body center point; and determining the coordinates of the human body center prediction point in the image based on the probability that the pixel point in the image belongs to the human body center point.
In this implementation manner, human body detection may be performed on the vehicle cabin based on the image to obtain a human body detection result of the vehicle cabin, and the occupant detection result of the vehicle cabin may be obtained based on the human body detection result of the vehicle cabin. For example, by performing human body detection on the vehicle cabin based on the image, the coordinates of the human body center prediction point in the image may be obtained. As another example, by performing human body detection on the vehicle cabin based on the image, the coordinates of the human body center prediction point in the image and the size of the human body frame to which the human body center prediction point belongs may be obtained.
As an example of this implementation manner, the image may be input into a backbone network, and feature extraction may be performed on the image via the backbone network to obtain the first feature map corresponding to the image. The backbone network may adopt a network structure such as ResNet or MobileNet, which is not limited here. As another example of this implementation manner, a pre-designed first function may be used to perform feature extraction on the image to obtain the first feature map corresponding to the image.
As an example of this implementation manner, the first feature map may be input into a first prediction sub-network, and the probability that a pixel point in the image belongs to a human body center point is predicted via the first prediction sub-network. As another example of this implementation manner, a pre-designed second function may be used to process the first feature map to obtain the probability that a pixel point in the image belongs to a human body center point.
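As a rough, non-authoritative sketch of this example, the following PyTorch code pairs a tiny stand-in backbone (in place of ResNet or MobileNet) with a first prediction sub-network that outputs a per-pixel human-body-center probability map; all layer sizes, strides, and the sigmoid output are assumptions:

```python
import torch
import torch.nn as nn

class CenterHeatmapNet(nn.Module):
    """Tiny stand-in for backbone + first prediction sub-network."""
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(                 # stand-in for a ResNet / MobileNet backbone
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.center_head = nn.Sequential(              # first prediction sub-network
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 1, 1),
        )

    def forward(self, image):
        first_feature_map = self.backbone(image)       # "first feature map" of the cabin image
        center_logits = self.center_head(first_feature_map)
        return torch.sigmoid(center_logits)            # per-pixel probability of being a human body center point

heatmap = CenterHeatmapNet()(torch.randn(1, 3, 256, 448))   # (1, 1, 64, 112)
```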
As one example of this implementation, if the probability that any pixel in the image belongs to a human body center point is greater than a first threshold, that pixel may be determined as a human body center prediction point, i.e., the coordinates of that pixel may be determined as the coordinates of a human body center prediction point. For example, the first threshold may be 0.5. Of course, those skilled in the art can flexibly set the first threshold according to the requirements of the actual application scenario, which is not limited here. As another example of this implementation, if the probability that any pixel in the image belongs to a human body center point is greater than the first threshold and that pixel is one of the M pixels with the highest probability of belonging to a human body center point in the image, that pixel may be determined as a human body center prediction point, i.e., its coordinates may be determined as the coordinates of a human body center prediction point. Here, M is a preset maximum number of human body center prediction points, and M is greater than or equal to 1.
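The selection rule above can be made concrete with a small sketch; the function name, the NumPy usage, and the default values of the first threshold and M are assumptions for illustration.

```python
# Minimal sketch (assumed names): selecting human body center prediction points as
# the pixels whose predicted probability exceeds the first threshold, keeping at
# most the M most probable ones.
import numpy as np

def select_center_points(prob_map: np.ndarray, first_threshold: float = 0.5, M: int = 10):
    # prob_map: (H, W), probability of each pixel being a human body center point
    ys, xs = np.where(prob_map > first_threshold)
    order = np.argsort(prob_map[ys, xs])[::-1][:M]     # indices of the M most probable pixels
    return [(int(xs[i]), int(ys[i])) for i in order]   # (x, y) coordinates of prediction points
```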
Performing dangerous action recognition based on the coordinates of the human body center prediction point in the image obtained in this implementation helps to improve the accuracy of dangerous action recognition.
As one example of this implementation, predicting, based on the first feature map corresponding to the image, the probability that a pixel in the image belongs to a human body center point includes: determining, based on the first feature map corresponding to the image, the coordinates of first candidate points for the human body center point in the image and the probability that the first candidate points belong to a human body center point. Determining the coordinates of the human body center prediction point in the image based on the probability that pixels in the image belong to a human body center point includes: determining the coordinates of the human body center prediction point in the image based on the coordinates of the first candidate points and the probability that the first candidate points belong to a human body center point. In this example, the probability that pixels of the image belong to a human body center point may be predicted based on the first feature map corresponding to the image, and the first candidate points for the human body center point may be screened out from the pixels of the image. A first candidate point represents a pixel screened out from the image that has a relatively high probability of belonging to a human body center point, and there may be multiple first candidate points. In one example, every pixel in the image may be taken as a first candidate point; in another example, human body detection may be performed based on the first feature map and the pixels inside a detected human body bounding box taken as first candidate points.
In one example, if the probability that any first candidate point belongs to a human body center point is greater than the first threshold, that first candidate point may be determined as a human body center prediction point, i.e., its coordinates may be determined as the coordinates of a human body center prediction point. In another example, if the probability that any first candidate point belongs to a human body center point is greater than the first threshold and that first candidate point is one of the M first candidate points with the highest such probability in the image, it may be determined as a human body center prediction point, i.e., its coordinates may be determined as the coordinates of a human body center prediction point. In this example, by determining, based on the first feature map corresponding to the image, the coordinates of the first candidate points of the human body center point in the image and the probability that the first candidate points belong to a human body center point, and determining the coordinates of the human body center prediction point in the image based on those coordinates and probabilities, the accuracy of the determined human body center prediction point can be improved.
In one example, determining, based on the first feature map corresponding to the image, the coordinates of the first candidate points of the human body center point in the image and the probability that the first candidate points belong to a human body center point includes: performing a convolution operation on the first feature map corresponding to the image to obtain a second feature map corresponding to the image; and performing a maximum pooling operation based on the second feature map to obtain the coordinates of the first candidate points and their probabilities of belonging to a human body center point. In this example, one or more convolution operations may be performed on the first feature map to obtain the second feature map. The maximum pooling operation may be performed directly on the second feature map, or on the second feature map after further processing, to obtain the coordinates of the first candidate points and their probabilities. In this example, by performing a convolution operation on the first feature map corresponding to the image, deeper features of the image can be extracted, so that the resulting second feature map can more accurately characterize the position information of human bodies in the image. By performing a maximum pooling operation based on the second feature map, the first candidate points can be accurately screened out from the pixels of the image.
In one example, performing the maximum pooling operation based on the second feature map to obtain the coordinates of the first candidate points of the human body center point in the image and their probabilities of belonging to a human body center point includes: performing activation processing on the second feature map to obtain a third feature map corresponding to the image; and performing a maximum pooling operation on the third feature map to obtain the coordinates of the first candidate points and their probabilities. For example, sigmoid processing may be performed on the second feature map to convert the pixel values of the second feature map into values between 0 and 1. Of course, other activation functions may also be used to activate the second feature map, which is not limited here. In this example, by performing activation processing on the second feature map, the pixel values of the second feature map can be converted into probability values, which can then be used to represent the probability that a pixel belongs to a human body center point.
For example, performing the maximum pooling operation on the third feature map to obtain the coordinates of the first candidate points of the human body center point in the image and their probabilities of belonging to a human body center point includes: performing an overlapping maximum pooling operation on the third feature map to obtain the coordinates of the first candidate points and their probabilities. Determining the coordinates of the human body center prediction point in the image based on the coordinates of the first candidate points and their probabilities includes: merging first candidate points with the same coordinates to obtain the coordinates of second candidate points and the probability that the second candidate points belong to a human body center point; and determining the coordinates of the human body center prediction point in the image according to the coordinates and probabilities of the second candidate points. For example, the pooling window size is P×P and the stride is Q, with P>Q, where both P and Q are integers greater than or equal to 1; for instance, P equals 3 and Q equals 1. In this example, merging first candidate points with the same coordinates yields the second candidate points, i.e., a second candidate point represents the merging result of first candidate points. The number of second candidate points is less than or equal to the number of first candidate points, is greater than or equal to 1, and no two second candidate points share the same coordinates. In this example, performing an overlapping maximum pooling operation on the third feature map can improve the accuracy of human body center point detection; merging first candidate points with the same coordinates to obtain the second candidate points and their probabilities, and determining the coordinates of the human body center prediction point in the image from the second candidate points, can further improve the accuracy of human body center point detection and improve the efficiency of subsequent dangerous action recognition.
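For illustration only, the following sketch extracts candidate center points from the activated heatmap with an overlapping 3×3, stride-1 maximum pooling and then merges candidates that land on the same pixel. The 3×3 window and stride of 1 follow the example in the text; the function name, the use of PyTorch's max_pool2d with returned indices, and the dictionary-based merging are assumptions.

```python
# Minimal sketch (assumed names): overlapping max pooling over the third feature
# map, followed by merging first candidate points with identical coordinates.
import torch
import torch.nn.functional as F

def extract_center_candidates(heatmap: torch.Tensor):
    # heatmap: (1, 1, H, W), values in [0, 1] after sigmoid (the third feature map)
    pooled, idx = F.max_pool2d(heatmap, kernel_size=3, stride=1, padding=1,
                               return_indices=True)
    w = heatmap.shape[-1]
    # Each pooling window contributes one "first candidate point": the location
    # of its maximum and that maximum's probability (H*W candidates in total).
    flat_idx = idx.view(-1)
    flat_prob = pooled.view(-1)
    ys = torch.div(flat_idx, w, rounding_mode="floor")
    xs = flat_idx % w
    # Merge first candidate points with identical coordinates into second
    # candidate points; duplicates carry the same probability, so keep one.
    merged = {}
    for x, y, p in zip(xs.tolist(), ys.tolist(), flat_prob.tolist()):
        merged[(x, y)] = p
    return merged          # {(x, y): probability of being a human body center point}
```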
In another example, performing the maximum pooling operation based on the second feature map to obtain the coordinates of the first candidate points of the human body center point in the image and their probabilities of belonging to a human body center point includes: performing a maximum pooling operation on the second feature map to obtain a fourth feature map; and performing activation processing on the fourth feature map to obtain the coordinates of the first candidate points and their probabilities of belonging to a human body center point.
In the embodiments of the present disclosure, in response to the occupant detection result indicating that one occupant is detected, dangerous action recognition may be performed based on the image and that occupant's position information in the occupant detection result to obtain the dangerous action recognition result corresponding to that occupant; in response to the occupant detection result indicating that multiple occupants are detected, dangerous action recognition may be performed based on the image and the position information of each occupant in the occupant detection result to obtain the dangerous action recognition result corresponding to each occupant.
In one possible implementation, the preset body part includes at least one of the following: a hand, an arm, a head, a foot, a leg. Fig. 2 shows a schematic diagram of an occupant's head extending out of a vehicle window in an image of the vehicle cabin in the dangerous action identifying method provided by an embodiment of the present disclosure. Fig. 3 shows a schematic diagram of an occupant's hand or arm extending out of a vehicle window in an image of the vehicle cabin in the dangerous action identifying method provided by an embodiment of the present disclosure. In this implementation, in response to the occupant detection result indicating that an occupant is detected, the action of the occupant extending at least one of the hand, arm, head, foot, or leg out of the vehicle window is identified based on the image and the occupant's position information in the occupant detection result, and the dangerous action recognition result corresponding to the occupant is obtained. In this way, the action of the occupant extending at least one of the hand, arm, head, foot, or leg out of the vehicle window can be accurately identified based on the occupant's position, thereby improving the safety of occupants in the vehicle cabin.
In one possible implementation, in response to the occupant detection result indicating that an occupant is detected, performing dangerous action recognition based on the image and the occupant's position information in the occupant detection result to obtain the dangerous action recognition result corresponding to the occupant includes: identifying dangerous actions in the cabin environment based on the image to obtain dangerous action prediction information corresponding to the cabin; and, in response to the occupant detection result indicating that an occupant is detected, obtaining the dangerous action recognition result corresponding to the occupant based on the occupant's position information in the occupant detection result and the dangerous action prediction information. As one example of this implementation, the first feature map corresponding to the image may be input into a second prediction sub-network, and the dangerous action prediction information corresponding to the cabin is obtained via the second prediction sub-network. As another example of this implementation, a pre-designed third function may be used to process the first feature map corresponding to the image to obtain the dangerous action prediction information corresponding to the cabin. The dangerous action prediction information may include prediction information of a dangerous action by the occupant corresponding to at least one position in the image; for example, it may include prediction information of a dangerous action by the occupant corresponding to each pixel in the image. In this implementation, by identifying dangerous actions in the cabin environment based on the image to obtain the dangerous action prediction information corresponding to the cabin, and, in response to the occupant detection result indicating that an occupant is detected, obtaining the dangerous action recognition result corresponding to the occupant based on the occupant's position information in the occupant detection result and the dangerous action prediction information, the action of an occupant extending a preset body part out of the vehicle window can be accurately identified by combining the occupant's position with the cabin's dangerous action prediction information, thereby improving the safety of occupants in the vehicle cabin.
As one example of this implementation, the occupant's position information includes the coordinates of the human body center prediction point in the image, and the dangerous action prediction information includes the probability that the occupant's action belongs to each of N preset dangerous actions, where N is an integer greater than or equal to 1. In response to the occupant detection result indicating that an occupant is detected, obtaining the dangerous action recognition result corresponding to the occupant based on the occupant's position information in the occupant detection result and the dangerous action prediction information includes: determining that an occupant is detected in response to the occupant detection result indicating that a human body center prediction point exists; obtaining, from the dangerous action prediction information and based on the coordinates of the human body center prediction point in the occupant detection result, the probability that the action of the occupant corresponding to the human body center prediction point belongs to each of the N preset dangerous actions; and obtaining the dangerous action recognition result corresponding to the occupant corresponding to the human body center prediction point according to those probabilities. In this example, the dangerous action prediction information may include the probability that the occupant's action corresponding to all or part of the pixels in the image belongs to each of the N preset dangerous actions; for example, it may include such probabilities for each pixel in the image. For example, the dangerous action prediction information may be an H×W×N feature map or three-dimensional array, where H represents the height of the image and W represents the width of the image. According to the coordinates of the human body center prediction point, the probability that the occupant corresponding to the human body center prediction point performs each of the N preset dangerous actions can be obtained from the dangerous action prediction information, so that the dangerous action recognition result corresponding to that occupant can be obtained.
In one example, if the probability that the action of the occupant corresponding to any human body center prediction point belongs to any one of the N preset dangerous actions is greater than a second threshold, it may be determined that the dangerous action recognition result corresponding to that occupant is that a dangerous action has occurred; if the probability that the action of the occupant corresponding to any human body center prediction point belongs to each of the N preset dangerous actions is less than or equal to the second threshold, it may be determined that the dangerous action recognition result corresponding to that occupant is that no dangerous action has occurred. For example, the second threshold may be equal to 0.5. Of course, those skilled in the art can flexibly set the second threshold according to the requirements of the actual application scenario, which is not limited here. According to this example, the dangerous action recognition result corresponding to the occupant corresponding to each human body center prediction point in the image can be accurately determined.
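For illustration only, the following sketch reads the per-class danger probabilities at each human body center prediction point from an H×W×N prediction map and applies the second threshold. The (H, W, N) layout and the second threshold of 0.5 follow the example in the text; the function name, the NumPy usage, and the assumption that the center coordinates are expressed in the prediction map's resolution are illustrative choices.

```python
# Minimal sketch (assumed names): looking up danger probabilities at predicted
# center points and deciding whether each occupant performs a dangerous action.
import numpy as np

def classify_occupants(danger_map: np.ndarray, centers, second_threshold: float = 0.5):
    # danger_map: (H, W, N) probabilities; centers: list of (x, y) prediction points
    results = {}
    for (x, y) in centers:
        probs = danger_map[y, x]                    # probabilities of the N preset actions
        if probs.max() > second_threshold:
            results[(x, y)] = int(probs.argmax())   # index of the recognized dangerous action
        else:
            results[(x, y)] = None                  # no dangerous action for this occupant
    return results
```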
As one example of this implementation, identifying dangerous actions in the cabin environment based on the image to obtain the dangerous action prediction information corresponding to the cabin includes: performing a convolution operation and a fully connected operation on the first feature map corresponding to the image, followed by a classification operation, to obtain the dangerous action prediction information corresponding to the cabin. In this example, a convolution operation may be performed on the first feature map to obtain a fifth feature map; a fully connected operation may be performed on the fifth feature map to obtain a sixth feature map; and a classification operation may be performed on the sixth feature map to obtain the dangerous action prediction information corresponding to the cabin. One or more convolution operations may be performed on the first feature map to obtain the fifth feature map; for example, two convolution operations may be performed. One or more fully connected operations may be performed on the fifth feature map to obtain the sixth feature map; for example, one fully connected operation may be performed. Those skilled in the art can flexibly determine the numbers of convolution operations and fully connected operations according to the requirements of the actual application scenario, which are not limited here. In this example, by performing convolution operations on the first feature map, deeper features of the image can be extracted, so that the resulting fifth feature map can more accurately represent the features of dangerous actions in the image; by using fully connected layers, the fitting capacity of the network can be improved, thereby improving the accuracy of the obtained dangerous action prediction information.
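The following is a minimal sketch of such a dangerous-action branch, assuming PyTorch. The two convolutions and one fully connected stage follow the example above, but implementing the fully connected stage as a 1×1 convolution (so that a per-position prediction map is produced) and using a sigmoid as the classification operation are assumptions not fixed by this disclosure.

```python
# Minimal sketch (assumed architecture details): a dangerous-action head applied
# to the first feature map, producing per-position probabilities for N actions.
import torch

class DangerousActionHead(torch.nn.Module):
    def __init__(self, in_channels: int, num_actions: int):
        super().__init__()
        self.conv = torch.nn.Sequential(                           # -> fifth feature map
            torch.nn.Conv2d(in_channels, in_channels, 3, padding=1),
            torch.nn.ReLU(inplace=True),
            torch.nn.Conv2d(in_channels, in_channels, 3, padding=1),
            torch.nn.ReLU(inplace=True),
        )
        self.fc = torch.nn.Conv2d(in_channels, in_channels, 1)     # -> sixth feature map
        self.cls = torch.nn.Conv2d(in_channels, num_actions, 1)    # N-way classification

    def forward(self, first_feature_map: torch.Tensor) -> torch.Tensor:
        x = self.fc(self.conv(first_feature_map))
        # (B, N, H, W): per-position probabilities of the N preset dangerous actions
        return torch.sigmoid(self.cls(x))
```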
In one possible implementation, after the first feature map corresponding to the image is obtained, the localization of human body center points and the classification of dangerous actions are performed by two separate branches. The human body center point localization branch obtains the second feature map by performing a further convolution operation on the first feature map, and obtains the coordinates of the human body center points by performing maximum pooling, activation processing, and the like on the basis of the second feature map. The dangerous action branch obtains the action category information corresponding to each human body center point by performing convolution operations, fully connected operations, and classification on the first feature map, so that the dangerous action detection result of each occupant in the cabin can be obtained. Moreover, the human body center point localization and the dangerous action classification share the same feature extraction network, which helps to improve the reliability of the results and save computing resources.
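A minimal composition of the two branches might look as follows; it reuses the hypothetical modules sketched above, and the head structures and channel counts are assumptions for illustration only.

```python
# Minimal sketch (assumed composition): the center-point branch and the
# dangerous-action branch share one feature extraction network, so the image is
# encoded once and both predictions are read from the same first feature map.
import torch

class CabinDangerousActionModel(torch.nn.Module):
    def __init__(self, num_actions: int, channels: int = 64):
        super().__init__()
        self.backbone = BackboneFeatureExtractor(out_channels=channels)
        self.center_head = torch.nn.Conv2d(channels, 3, 1)   # heatmap + box length/width
        self.action_head = DangerousActionHead(channels, num_actions)

    def forward(self, image: torch.Tensor):
        feat = self.backbone(image)              # first feature map, computed once
        center_map = self.center_head(feat)      # second feature map (3 channels)
        danger_map = self.action_head(feat)      # per-position danger probabilities
        return center_map, danger_map
```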
In one possible implementation, the method further includes: in response to the occupant detection result indicating that an occupant is detected in the front seat area of the cabin, performing dangerous action recognition based on the image and the occupant's position information in the occupant detection result to obtain the dangerous action recognition result corresponding to the occupant. In this implementation, the occupant detected in the front seat area of the cabin may include the driver and/or the front-seat passenger. Since both the driver's dangerous actions and the front-seat passenger's dangerous actions have a considerable influence on the driver's driving, this implementation, by responding to the occupant detection result indicating that an occupant is detected in the front seat area of the cabin and performing dangerous action recognition based on the image and the occupant's position information in the occupant detection result to obtain the corresponding dangerous action recognition result, helps to improve the driver's driving safety.
In one possible implementation, after obtaining the dangerous action recognition result corresponding to the occupant, the method further includes: issuing prompt information in response to the dangerous action recognition result indicating that the occupant's action includes any one of the preset dangerous actions. In this implementation, by issuing prompt information in response to the dangerous action recognition result indicating that the occupant's action includes any one of the preset dangerous actions, a safety warning can be realized, which helps to improve the safety of occupants in the cabin.
As one example of this implementation, issuing the prompt information includes at least one of the following: controlling a voice interaction apparatus in the vehicle to issue voice prompt information; issuing an instruction to raise or lower the vehicle window corresponding to the occupant who performed the preset dangerous action; and issuing an instruction to turn on the hazard lights. In one example, in response to the dangerous action recognition result indicating that the occupant's action includes any preset dangerous action, the voice interaction apparatus in the vehicle may be controlled to issue a voice prompt; for example, the voice prompt message "please do not lean out of the window" may be issued. According to this example, the occupant can be prompted by voice, so that the prompt information can be obtained even if the occupant is not looking at the in-vehicle display screen. In one example, in response to the dangerous action recognition result indicating that the occupant's action includes any preset dangerous action, an instruction may be issued to raise or lower the vehicle window corresponding to the occupant who performed the preset dangerous action. For example, if the occupant who performed the preset dangerous action is the driver, the corresponding window may be the front-left window; if that occupant is the front-seat passenger, the corresponding window may be the front-right window; if that occupant is the rear-left occupant, the corresponding window may be the rear-left window; and if that occupant is the rear-right occupant, the corresponding window may be the rear-right window. In this example, issuing the instruction to raise or lower the window corresponding to the occupant who performed the preset dangerous action can strengthen the effect of the prompt, helping the occupant instinctively withdraw the body part extended out of the window at the moment the window is raised or lowered. In one example, in response to the dangerous action recognition result indicating that the occupant's action includes any preset dangerous action, an instruction to turn on the hazard lights may be issued, which can serve to alert nearby vehicles and thereby improve the safety of occupants in the cabin.
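For illustration only, the prompt actions above can be dispatched along the following lines; all vehicle-side interfaces (the speaker, window, and light controls, and the seat-to-window mapping) are hypothetical and stand in for whatever in-vehicle APIs are actually available.

```python
# Minimal sketch (hypothetical vehicle interfaces): dispatching prompt actions
# once an occupant's dangerous action has been recognized.
SEAT_TO_WINDOW = {
    "driver": "front_left",
    "front_passenger": "front_right",
    "rear_left": "rear_left",
    "rear_right": "rear_right",
}

def issue_prompts(seat: str, vehicle) -> None:
    # `vehicle` is an assumed facade exposing speaker, window and light controls.
    vehicle.speaker.say("Please do not lean out of the window")
    window = SEAT_TO_WINDOW.get(seat)
    if window is not None:
        vehicle.windows[window].raise_fully()   # or lower, depending on the chosen policy
    vehicle.lights.enable_hazard()
```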
The dangerous action identifying method provided by the embodiments of the present disclosure is described below through a specific application scenario. Fig. 4 shows a schematic diagram of an application scenario of the dangerous action identifying method provided by the present disclosure. In the example shown in Fig. 4, an image of the vehicle cabin may be acquired; for example, the size of the image may be 640×480. The image may be input into a backbone network, and feature extraction is performed on the image via the backbone network to obtain a first feature map, where the size of the first feature map may be 80×60×C, C representing the number of channels of the first feature map, and C may be greater than or equal to 3.
A convolution operation may be performed on the first feature map through the first prediction sub-network to obtain a second feature map, where the size of the second feature map may be 80×60×3. Channel 0 of the second feature map may be activated through a sigmoid function to obtain a third feature map. A maximum pooling operation with a pooling window size of 3×3 and a stride of 1 may be performed on the third feature map to obtain the coordinates of the first candidate points of the human body center point in the image and the probabilities that the first candidate points belong to a human body center point, where the number of first candidate points may be 60×80. The first candidate points with the same coordinates may be merged to obtain the coordinates of the second candidate points and the probabilities that the second candidate points belong to a human body center point. Among the M second candidate points with the highest probability of belonging to a human body center point, the second candidate points whose probability is greater than 0.5 may be determined as human body center prediction points. The length of the human body bounding box corresponding to a human body center prediction point may be obtained from channel 1 of the second feature map, and its width may be obtained from channel 2 of the second feature map.
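For illustration only, the decoding of the 80×60×3 second feature map described above might look as follows; it reuses the hypothetical extract_center_candidates helper sketched earlier, and the tensor layout, function name, and default values of M and the threshold are assumptions.

```python
# Minimal sketch (assumed tensor layout): decoding the second feature map into
# human body center prediction points and the corresponding box sizes.
import torch

def decode_centers(second_feature_map: torch.Tensor, M: int = 10, threshold: float = 0.5):
    # second_feature_map: (1, 3, 60, 80) -> channel 0: center heatmap,
    # channel 1: body box length, channel 2: body box width
    heatmap = torch.sigmoid(second_feature_map[:, 0:1])        # third feature map
    candidates = extract_center_candidates(heatmap)            # {(x, y): probability}
    top = sorted(candidates.items(), key=lambda kv: kv[1], reverse=True)[:M]
    detections = []
    for (x, y), prob in top:
        if prob > threshold:                                   # keep confident centers only
            length = second_feature_map[0, 1, y, x].item()
            width = second_feature_map[0, 2, y, x].item()
            detections.append({"center": (x, y), "prob": prob,
                               "box_size": (length, width)})
    return detections
```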
The first feature map may be input into the second prediction sub-network, and a convolution operation and a fully connected operation, followed by a classification operation, are performed on the first feature map via the second prediction sub-network to obtain the dangerous action prediction information corresponding to the cabin, where the dangerous action prediction information may include the probability that the occupant's action corresponding to each pixel in the image belongs to each of the N preset dangerous actions. For example, the dangerous action prediction information may be a 640×480×N feature map or three-dimensional array. According to the coordinates of a human body center prediction point, the probability that the occupant corresponding to the human body center prediction point performs each of the N preset dangerous actions can be obtained from the dangerous action prediction information, so that the dangerous action recognition result corresponding to that occupant can be obtained.
It can be understood that the above method embodiments mentioned in the present disclosure may be combined with one another to form combined embodiments without violating principles and logic; due to space limitations, details are not repeated in the present disclosure. Those skilled in the art can understand that, in the above methods of the specific implementations, the specific execution order of the steps should be determined according to their functions and possible internal logic.
In addition, the present disclosure further provides a dangerous action identifying apparatus, an electronic device, a computer-readable storage medium, and a program, all of which can be used to implement any of the dangerous action identifying methods provided by the present disclosure. For the corresponding technical solutions and technical effects, reference may be made to the corresponding descriptions in the method section, and details are not repeated here.
Fig. 5 shows a block diagram of a dangerous action identifying apparatus provided by an embodiment of the present disclosure. As shown in Fig. 5, the dangerous action identifying apparatus includes:
an acquiring module 51, configured to acquire an image of a vehicle cabin;
an occupant detection module 52, configured to perform occupant detection on the vehicle cabin based on the image to obtain an occupant detection result of the vehicle cabin;
a first dangerous action identifying module 53, configured to, in response to the occupant detection result indicating that an occupant is detected, perform dangerous action recognition based on the image and the occupant's position information in the occupant detection result to obtain a dangerous action recognition result corresponding to the occupant, where the dangerous action represents an action of extending a preset body part out of a vehicle window.
In one possible implementation, the first dangerous action identifying module 53 is configured to:
identify dangerous actions in the cabin environment based on the image to obtain dangerous action prediction information corresponding to the cabin; and
in response to the occupant detection result indicating that an occupant is detected, obtain the dangerous action recognition result corresponding to the occupant based on the occupant's position information in the occupant detection result and the dangerous action prediction information.
In one possible implementation, the occupant's position information includes the coordinates of a human body center prediction point in the image, and the dangerous action prediction information includes the probability that the occupant's action belongs to each of N preset dangerous actions, where N is an integer greater than or equal to 1;
the first dangerous action identifying module 53 is configured to:
determine that an occupant is detected in response to the occupant detection result indicating that a human body center prediction point exists;
obtain, from the dangerous action prediction information and based on the coordinates of the human body center prediction point in the occupant detection result, the probability that the action of the occupant corresponding to the human body center prediction point belongs to each of the N preset dangerous actions; and
obtain the dangerous action recognition result corresponding to the occupant corresponding to the human body center prediction point according to the probability that the action of that occupant belongs to each of the N preset dangerous actions.
In one possible implementation, the occupant's position information includes the coordinates of a human body center prediction point in the image;
the occupant detection module 52 is configured to:
predict, based on a first feature map corresponding to the image, the probability that a pixel in the image belongs to a human body center point; and
determine the coordinates of the human body center prediction point in the image based on the probability that the pixel in the image belongs to a human body center point.
In one possible implementation, the occupant detection module 52 is configured to:
determine, based on the first feature map corresponding to the image, the coordinates of first candidate points of the human body center point in the image and the probability that the first candidate points belong to a human body center point; and
determine the coordinates of the human body center prediction point in the image based on the coordinates of the first candidate points and the probability that the first candidate points belong to a human body center point.
In one possible implementation, the occupant detection module 52 is configured to:
perform a convolution operation on the first feature map corresponding to the image to obtain a second feature map corresponding to the image; and
perform a maximum pooling operation based on the second feature map to obtain the coordinates of the first candidate points of the human body center point in the image and the probability that the first candidate points belong to a human body center point.
In one possible implementation, the occupant detection module 52 is configured to:
perform activation processing on the second feature map to obtain a third feature map corresponding to the image; and
perform a maximum pooling operation on the third feature map to obtain the coordinates of the first candidate points of the human body center point in the image and the probability that the first candidate points belong to a human body center point.
In one possible implementation, the occupant detection module 52 is configured to:
perform an overlapping maximum pooling operation on the third feature map to obtain the coordinates of the first candidate points of the human body center point in the image and the probability that the first candidate points belong to a human body center point;
merge the first candidate points with the same coordinates to obtain the coordinates of second candidate points and the probability that the second candidate points belong to a human body center point; and
determine the coordinates of the human body center prediction point in the image according to the coordinates of the second candidate points and the probability that the second candidate points belong to a human body center point.
In one possible implementation, the first dangerous action identifying module 53 is configured to:
perform a convolution operation and a fully connected operation on the first feature map corresponding to the image, followed by a classification operation, to obtain the dangerous action prediction information corresponding to the cabin.
In one possible implementation, the apparatus further includes:
a second dangerous action identifying module, configured to, in response to the occupant detection result indicating that an occupant is detected in the front seat area of the cabin, perform dangerous action recognition based on the image and the occupant's position information in the occupant detection result to obtain the dangerous action recognition result corresponding to the occupant.
In one possible implementation, the preset body part includes at least one of the following: a hand, an arm, a head, a foot, a leg.
In one possible implementation, the apparatus further includes:
a prompt module, configured to issue prompt information in response to the dangerous action recognition result indicating that the occupant's action includes any one of the preset dangerous actions.
In one possible implementation, the prompt module is configured to perform at least one of the following:
control a voice interaction apparatus in the vehicle to issue voice prompt information;
issue an instruction to raise or lower the vehicle window corresponding to the occupant who performed the preset dangerous action;
issue an instruction to turn on the hazard lights.
In the embodiments of the present disclosure, an image of a vehicle cabin is acquired; occupant detection is performed on the cabin based on the image to obtain an occupant detection result of the cabin; and, in response to the occupant detection result indicating that an occupant is detected, dangerous action recognition is performed based on the image and the occupant's position information in the occupant detection result to obtain a dangerous action recognition result corresponding to the occupant, where the dangerous action represents an action of extending a preset body part out of a vehicle window. In this way, the action of an occupant extending a preset body part out of the vehicle window can be accurately identified based on the occupant's position, thereby improving the safety of occupants in the vehicle cabin.
In some embodiments, the functions or modules of the apparatus provided by the embodiments of the present disclosure may be used to execute the methods described in the above method embodiments. For specific implementations and technical effects, reference may be made to the descriptions of the above method embodiments, which are not repeated here for brevity.
An embodiment of the present disclosure further provides a computer-readable storage medium having computer program instructions stored thereon, where the computer program instructions, when executed by a processor, implement the above method. The computer-readable storage medium may be a non-volatile computer-readable storage medium, or may be a volatile computer-readable storage medium.
An embodiment of the present disclosure further provides a computer program, including computer-readable code, where, when the computer-readable code runs in an electronic device, a processor in the electronic device executes the above method.
An embodiment of the present disclosure further provides a computer program product, including computer-readable code or a non-volatile computer-readable storage medium carrying computer-readable code, where, when the computer-readable code runs in an electronic device, a processor in the electronic device executes the above method.
An embodiment of the present disclosure further provides an electronic device, including: one or more processors; and a memory for storing executable instructions, where the one or more processors are configured to invoke the executable instructions stored in the memory to execute the above method.
The electronic device may be provided as a terminal, a server, or a device in another form.
Fig. 6 shows a block diagram of an electronic device 800 provided by an embodiment of the present disclosure. For example, the electronic device 800 may be a terminal such as a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, a fitness device, or a personal digital assistant.
Referring to Fig. 6, the electronic device 800 may include one or more of the following components: a processing component 802, a memory 804, a power supply component 806, a multimedia component 808, an audio component 810, an input/output (I/O) interface 812, a sensor component 814, and a communication component 816.
The processing component 802 generally controls the overall operations of the electronic device 800, such as operations associated with display, telephone calls, data communication, camera operations, and recording operations. The processing component 802 may include one or more processors 820 to execute instructions to complete all or part of the steps of the above method. In addition, the processing component 802 may include one or more modules to facilitate interaction between the processing component 802 and other components. For example, the processing component 802 may include a multimedia module to facilitate interaction between the multimedia component 808 and the processing component 802.
The memory 804 is configured to store various types of data to support operations at the electronic device 800. Examples of such data include instructions for any application or method operating on the electronic device 800, contact data, phonebook data, messages, pictures, videos, and the like. The memory 804 may be implemented by any type of volatile or non-volatile storage device or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, a magnetic disk, or an optical disk.
The power supply component 806 provides power to the various components of the electronic device 800. The power supply component 806 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the electronic device 800.
The multimedia component 808 includes a screen providing an output interface between the electronic device 800 and the user. In some embodiments, the screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or swipe action, but also detect the duration and pressure associated with the touch or swipe operation. In some embodiments, the multimedia component 808 includes a front camera and/or a rear camera. When the electronic device 800 is in an operation mode, such as a shooting mode or a video mode, the front camera and/or the rear camera may receive external multimedia data. Each front camera and rear camera may be a fixed optical lens system or have focal length and optical zoom capability.
The audio component 810 is configured to output and/or input audio signals. For example, the audio component 810 includes a microphone (MIC), which is configured to receive external audio signals when the electronic device 800 is in an operation mode, such as a call mode, a recording mode, or a voice recognition mode. The received audio signals may be further stored in the memory 804 or sent via the communication component 816. In some embodiments, the audio component 810 further includes a speaker for outputting audio signals.
The I/O interface 812 provides an interface between the processing component 802 and peripheral interface modules, which may be a keyboard, a click wheel, buttons, or the like. These buttons may include, but are not limited to: a home button, volume buttons, a start button, and a lock button.
The sensor component 814 includes one or more sensors for providing status assessments of various aspects of the electronic device 800. For example, the sensor component 814 may detect the open/closed state of the electronic device 800 and the relative positioning of components, for example, the display and keypad of the electronic device 800; the sensor component 814 may also detect a change in position of the electronic device 800 or a component of the electronic device 800, the presence or absence of user contact with the electronic device 800, the orientation or acceleration/deceleration of the electronic device 800, and a change in temperature of the electronic device 800. The sensor component 814 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact. The sensor component 814 may also include a light sensor, such as a complementary metal-oxide-semiconductor (CMOS) or charge-coupled device (CCD) image sensor, for use in imaging applications. In some embodiments, the sensor component 814 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 816 is configured to facilitate wired or wireless communication between the electronic device 800 and other devices. The electronic device 800 may access a wireless network based on a communication standard, such as a wireless network (Wi-Fi), second-generation mobile communication technology (2G), third-generation mobile communication technology (3G), fourth-generation mobile communication technology (4G)/Long-Term Evolution (LTE), fifth-generation mobile communication technology (5G), or a combination thereof. In an exemplary embodiment, the communication component 816 receives broadcast signals or broadcast-related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 816 further includes a near field communication (NFC) module to facilitate short-range communication. For example, the NFC module may be implemented based on radio frequency identification (RFID) technology, Infrared Data Association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the electronic device 800 may be implemented by one or more application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components for performing the above methods.
In an exemplary embodiment, a non-volatile computer-readable storage medium is also provided, for example, the memory 804 including computer program instructions, which can be executed by the processor 820 of the electronic device 800 to implement the above methods.
FIG. 7 shows a block diagram of an electronic device 1900 provided by an embodiment of the present disclosure. For example, the electronic device 1900 may be provided as a server. Referring to FIG. 7, the electronic device 1900 includes a processing component 1922, which further includes one or more processors, and memory resources represented by a memory 1932 for storing instructions executable by the processing component 1922, such as application programs. The application programs stored in the memory 1932 may include one or more modules, each corresponding to a set of instructions. In addition, the processing component 1922 is configured to execute the instructions to perform the above methods.
The electronic device 1900 may also include a power supply component 1926 configured to perform power management of the electronic device 1900, a wired or wireless network interface 1950 configured to connect the electronic device 1900 to a network, and an input/output (I/O) interface 1958. The electronic device 1900 may operate based on an operating system stored in the memory 1932, such as the Microsoft server operating system (Windows Server TM), the graphical user interface-based operating system (Mac OS X TM) introduced by Apple, the multi-user multi-process computer operating system (Unix TM), the free and open-source Unix-like operating system (Linux TM), the open-source Unix-like operating system (FreeBSD TM), or the like.
In an exemplary embodiment, a non-volatile computer-readable storage medium is also provided, for example, the memory 1932 including computer program instructions, which can be executed by the processing component 1922 of the electronic device 1900 to implement the above methods.
The present disclosure may be a system, a method, and/or a computer program product. The computer program product may include a computer-readable storage medium having computer-readable program instructions thereon for causing a processor to implement various aspects of the present disclosure.
The computer-readable storage medium may be a tangible device that can retain and store instructions for use by an instruction execution device. The computer-readable storage medium may be, for example, but is not limited to, an electrical storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer-readable storage medium include: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disc (DVD), a memory stick, a floppy disk, a mechanically encoded device such as a punch card or a raised structure in a groove having instructions recorded thereon, and any suitable combination of the foregoing. A computer-readable storage medium, as used herein, is not to be construed as being a transitory signal per se, such as a radio wave or another freely propagating electromagnetic wave, an electromagnetic wave propagating through a waveguide or another transmission medium (for example, a light pulse passing through a fiber-optic cable), or an electrical signal transmitted through a wire.
The computer-readable program instructions described herein may be downloaded from a computer-readable storage medium to respective computing/processing devices, or downloaded to an external computer or external storage device via a network, such as the Internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, optical fiber transmission, wireless transmission, routers, firewalls, switches, gateway computers, and/or edge servers. A network adapter card or network interface in each computing/processing device receives the computer-readable program instructions from the network and forwards the computer-readable program instructions for storage in a computer-readable storage medium within the respective computing/processing device.
The computer program instructions for performing the operations of the present disclosure may be assembly instructions, instruction set architecture (ISA) instructions, machine instructions, machine-dependent instructions, microcode, firmware instructions, state-setting data, or source code or object code written in any combination of one or more programming languages, including object-oriented programming languages such as Smalltalk and C++, and conventional procedural programming languages such as the "C" language or similar programming languages. The computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case involving a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, an electronic circuit, such as a programmable logic circuit, a field-programmable gate array (FPGA), or a programmable logic array (PLA), is customized by utilizing state information of the computer-readable program instructions, and the electronic circuit may execute the computer-readable program instructions, thereby implementing various aspects of the present disclosure.
Aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatuses (systems), and computer program products according to embodiments of the present disclosure. It should be understood that each block of the flowcharts and/or block diagrams, and combinations of blocks in the flowcharts and/or block diagrams, can be implemented by computer-readable program instructions.
These computer-readable program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, or another programmable data processing apparatus to produce a machine, such that the instructions, when executed by the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/actions specified in one or more blocks of the flowcharts and/or block diagrams. These computer-readable program instructions may also be stored in a computer-readable storage medium, and these instructions cause a computer, a programmable data processing apparatus, and/or other devices to work in a specific manner, such that the computer-readable medium storing the instructions includes an article of manufacture including instructions that implement various aspects of the functions/actions specified in one or more blocks of the flowcharts and/or block diagrams.
The computer-readable program instructions may also be loaded onto a computer, another programmable data processing apparatus, or another device, so that a series of operational steps are performed on the computer, the other programmable data processing apparatus, or the other device to produce a computer-implemented process, such that the instructions executed on the computer, the other programmable data processing apparatus, or the other device implement the functions/actions specified in one or more blocks of the flowcharts and/or block diagrams.
The flowcharts and block diagrams in the accompanying drawings illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowcharts or block diagrams may represent a module, a program segment, or a portion of instructions, which contains one or more executable instructions for implementing the specified logical functions. In some alternative implementations, the functions noted in the blocks may occur out of the order noted in the figures. For example, two consecutive blocks may, in fact, be executed substantially in parallel, or they may sometimes be executed in the reverse order, depending upon the functionality involved. It should also be noted that each block of the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, can be implemented by a dedicated hardware-based system that performs the specified functions or actions, or can be implemented by a combination of dedicated hardware and computer instructions.
The computer program product may be implemented by hardware, software, or a combination thereof. In an optional embodiment, the computer program product is embodied as a computer storage medium; in another optional embodiment, the computer program product is embodied as a software product, such as a software development kit (SDK).
The embodiments of the present disclosure have been described above. The foregoing description is exemplary rather than exhaustive, and is not limited to the disclosed embodiments. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein is chosen to best explain the principles of the embodiments, the practical application, or the improvement over technologies in the market, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.

Claims (17)

  1. A method for identifying dangerous actions, characterized by comprising:
    obtaining an image of a vehicle cabin;
    performing occupant detection on the vehicle cabin based on the image to obtain an occupant detection result of the vehicle cabin;
    in response to the occupant detection result indicating that an occupant is detected, performing dangerous action recognition based on the image and position information of the occupant in the occupant detection result, to obtain a dangerous action recognition result corresponding to the occupant, wherein the dangerous action represents an action of a preset body part extending out of a vehicle window.
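Purely as a non-limiting illustration of the flow recited in claim 1 (image acquisition, occupant detection, then dangerous action recognition), the sketch below assumes hypothetical `occupant_detector` and `action_recognizer` callables; it is not the claimed method itself, and the interfaces are assumptions made only for readability.

```python
def identify_dangerous_actions(cabin_image, occupant_detector, action_recognizer):
    """Illustrative sketch of the claim 1 flow under assumed interfaces."""
    # Occupant detection on the cabin image.
    occupants = occupant_detector(cabin_image)   # e.g. list of (x, y) body-center points
    if not occupants:
        return []                                # no occupant detected, nothing to recognize
    # Dangerous action recognition based on the image and each occupant's position;
    # here a "dangerous action" means a preset body part extending out of a window.
    return [action_recognizer(cabin_image, position) for position in occupants]
```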
  2. The method according to claim 1, characterized in that the performing dangerous action recognition based on the image and the position information of the occupant in the occupant detection result in response to the occupant detection result indicating that an occupant is detected, to obtain the dangerous action recognition result corresponding to the occupant, comprises:
    identifying dangerous actions in the cabin environment based on the image to obtain dangerous action prediction information corresponding to the vehicle cabin;
    in response to the occupant detection result indicating that an occupant is detected, obtaining the dangerous action recognition result corresponding to the occupant based on the position information of the occupant in the occupant detection result and the dangerous action prediction information.
  3. The method according to claim 2, characterized in that the position information of the occupant includes coordinates of a human body center prediction point in the image, and the dangerous action prediction information includes a probability that an action of the occupant belongs to each of N preset dangerous actions, where N is an integer greater than or equal to 1;
    the obtaining the dangerous action recognition result corresponding to the occupant based on the position information of the occupant in the occupant detection result and the dangerous action prediction information in response to the occupant detection result indicating that an occupant is detected comprises:
    in response to the occupant detection result indicating that a human body center prediction point exists, determining that an occupant is detected;
    based on the coordinates of the human body center prediction point in the occupant detection result, obtaining, from the dangerous action prediction information, the probability that the action of the occupant corresponding to the human body center prediction point belongs to each of the N preset dangerous actions;
    obtaining the dangerous action recognition result corresponding to the occupant corresponding to the human body center prediction point according to the probability that the action of the occupant corresponding to the human body center prediction point belongs to each of the N preset dangerous actions.
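As a hedged illustration of the indexing step in claim 3, the sketch below assumes the dangerous action prediction information is a dense map of shape (N, H, W) holding, at every location, the probability of each of the N preset dangerous actions, and that the human body center prediction points are integer coordinates on the same grid; these layout choices are assumptions, not part of the claim.

```python
import torch

def gather_action_probs(action_prob_map: torch.Tensor, center_points):
    """Illustrative only: read the N dangerous-action probabilities at each
    predicted human body center point.

    action_prob_map: (N, H, W) tensor of per-location action probabilities (assumed layout).
    center_points:   iterable of (x, y) integer coordinates on the same grid.
    """
    per_occupant = []
    for x, y in center_points:
        probs = action_prob_map[:, y, x]   # N probabilities for this occupant
        per_occupant.append(probs)
    return per_occupant
```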
  4. The method according to any one of claims 1 to 3, characterized in that the position information of the occupant includes coordinates of a human body center prediction point in the image;
    the performing occupant detection on the vehicle cabin based on the image to obtain the occupant detection result of the vehicle cabin comprises:
    predicting, based on a first feature map corresponding to the image, a probability that a pixel point in the image belongs to a human body center point;
    determining the coordinates of the human body center prediction point in the image based on the probability that the pixel point in the image belongs to the human body center point.
  5. The method according to claim 4, characterized in that,
    the predicting, based on the first feature map corresponding to the image, the probability that the pixel point in the image belongs to the human body center point comprises: determining, based on the first feature map corresponding to the image, coordinates of first candidate points of the human body center point in the image and probabilities that the first candidate points belong to the human body center point;
    the determining the coordinates of the human body center prediction point in the image based on the probability that the pixel point in the image belongs to the human body center point comprises: determining the coordinates of the human body center prediction point in the image based on the coordinates of the first candidate points and the probabilities that the first candidate points belong to the human body center point.
  6. The method according to claim 5, characterized in that the determining, based on the first feature map corresponding to the image, the coordinates of the first candidate points of the human body center point in the image and the probabilities that the first candidate points belong to the human body center point comprises:
    performing a convolution operation on the first feature map corresponding to the image to obtain a second feature map corresponding to the image;
    performing a max pooling operation based on the second feature map to obtain the coordinates of the first candidate points of the human body center point in the image and the probabilities that the first candidate points belong to the human body center point.
  7. The method according to claim 6, characterized in that the performing the max pooling operation based on the second feature map to obtain the coordinates of the first candidate points of the human body center point in the image and the probabilities that the first candidate points belong to the human body center point comprises:
    performing activation processing on the second feature map to obtain a third feature map corresponding to the image;
    performing a max pooling operation on the third feature map to obtain the coordinates of the first candidate points of the human body center point in the image and the probabilities that the first candidate points belong to the human body center point.
  8. The method according to claim 7, characterized in that,
    the performing the max pooling operation on the third feature map to obtain the coordinates of the first candidate points of the human body center point in the image and the probabilities that the first candidate points belong to the human body center point comprises: performing an overlapping max pooling operation on the third feature map to obtain the coordinates of the first candidate points of the human body center point in the image and the probabilities that the first candidate points belong to the human body center point;
    the determining the coordinates of the human body center prediction point in the image based on the coordinates of the first candidate points and the probabilities that the first candidate points belong to the human body center point comprises: merging first candidate points having identical coordinates to obtain coordinates of second candidate points and probabilities that the second candidate points belong to the human body center point; and determining the coordinates of the human body center prediction point in the image according to the coordinates of the second candidate points and the probabilities that the second candidate points belong to the human body center point.
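The following PyTorch-style sketch is offered only to illustrate how the operations named in claims 6 to 8 could fit together: a convolution producing the second feature map, a sigmoid as the activation processing producing the third feature map, and an overlapping (stride-1) max pooling whose output is compared with the third feature map so that candidate points sharing the same coordinates collapse to a single local maximum. The kernel size, score threshold, and equality-based merging are assumptions chosen for the example, not a definitive implementation of the claims.

```python
import torch
import torch.nn.functional as F

def predict_center_points(first_feature_map: torch.Tensor,
                          center_conv: torch.nn.Conv2d,
                          score_thresh: float = 0.3):
    """Illustrative sketch of claims 6-8 under assumed shapes: the feature map is
    (1, C, H, W) and center_conv is a 1-channel convolution head."""
    # Claim 6: convolution on the first feature map -> second feature map.
    second = center_conv(first_feature_map)                    # (1, 1, H, W)
    # Claim 7: activation processing -> third feature map of probabilities.
    third = torch.sigmoid(second)
    # Claim 8: overlapping max pooling (kernel 3, stride 1) over the third feature map.
    pooled = F.max_pool2d(third, kernel_size=3, stride=1, padding=1)
    # Keeping only locations equal to their pooled value merges candidate points
    # that share the same coordinates into a single local maximum.
    peaks = third * (pooled == third).float()
    ys, xs = torch.nonzero(peaks[0, 0] > score_thresh, as_tuple=True)
    probs = peaks[0, 0, ys, xs]
    # (x, y) human body center prediction points and their probabilities.
    return list(zip(xs.tolist(), ys.tolist())), probs.tolist()
```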
  9. The method according to claim 2 or 3, characterized in that the identifying dangerous actions in the cabin environment based on the image to obtain the dangerous action prediction information corresponding to the vehicle cabin comprises:
    performing a convolution operation and a fully connected operation on the first feature map corresponding to the image, followed by a classification operation, to obtain the dangerous action prediction information corresponding to the vehicle cabin.
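A minimal sketch of the classification branch described in claim 9, assuming a PyTorch-style module with an arbitrary 64-channel convolution, global average pooling before the fully connected layer, and a sigmoid as the classification operation (multi-label over the N preset dangerous actions); every one of these sizes and choices is an assumption for illustration only.

```python
import torch
import torch.nn as nn

class DangerousActionHead(nn.Module):
    """Illustrative only: convolution + fully connected + classification over
    N preset dangerous actions, applied to the first feature map."""

    def __init__(self, in_channels: int, num_actions: int):
        super().__init__()
        self.conv = nn.Conv2d(in_channels, 64, kernel_size=3, padding=1)
        self.pool = nn.AdaptiveAvgPool2d(1)      # assumed; not stated in the claim
        self.fc = nn.Linear(64, num_actions)

    def forward(self, first_feature_map: torch.Tensor) -> torch.Tensor:
        x = self.conv(first_feature_map)         # convolution operation
        x = self.pool(x).flatten(1)
        logits = self.fc(x)                      # fully connected operation
        return torch.sigmoid(logits)             # classification operation (assumed multi-label)
```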
  10. The method according to any one of claims 1 to 9, characterized in that the method further comprises:
    in response to the occupant detection result indicating that an occupant is detected in a front seat area of the vehicle cabin, performing dangerous action recognition based on the image and the position information of the occupant in the occupant detection result to obtain the dangerous action recognition result corresponding to the occupant.
  11. The method according to any one of claims 1 to 10, characterized in that the preset body part includes at least one of the following: a hand, an arm, a head, a foot, and a leg.
  12. The method according to any one of claims 1 to 11, characterized in that, after the obtaining the dangerous action recognition result corresponding to the occupant, the method further comprises:
    issuing prompt information in response to the dangerous action recognition result indicating that the action of the occupant includes any one of the preset dangerous actions.
  13. The method according to claim 12, characterized in that the issuing prompt information includes at least one of the following:
    controlling a voice interaction apparatus in the vehicle to issue a voice prompt;
    issuing an instruction to raise or lower the vehicle window corresponding to the occupant performing the preset dangerous action;
    issuing an instruction to turn on the double-flash (hazard) lights.
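For illustration only, the prompt measures listed in claim 13 could be dispatched as in the sketch below; the `vehicle.voice`, `vehicle.window`, and `vehicle.hazard_lights` interfaces are hypothetical placeholders and do not correspond to any real vehicle API.

```python
def issue_prompts(is_dangerous: bool, vehicle, occupant_seat):
    """Illustrative sketch of claim 13 using a hypothetical vehicle facade object."""
    if not is_dangerous:
        return
    # Voice prompt through the in-cabin voice interaction apparatus.
    vehicle.voice.say("Please keep your body inside the vehicle.")
    # Raise (or lower) the window corresponding to the occupant performing the action.
    vehicle.window.raise_window(seat=occupant_seat)
    # Turn on the double-flash (hazard) lights.
    vehicle.hazard_lights.turn_on()
```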
  14. An apparatus for identifying dangerous actions, characterized by comprising:
    an acquisition module, configured to acquire an image of a vehicle cabin;
    an occupant detection module, configured to perform occupant detection on the vehicle cabin based on the image to obtain an occupant detection result of the vehicle cabin;
    a first dangerous action recognition module, configured to, in response to the occupant detection result indicating that an occupant is detected, perform dangerous action recognition based on the image and position information of the occupant in the occupant detection result to obtain a dangerous action recognition result corresponding to the occupant, wherein the dangerous action represents an action of a preset body part extending out of a vehicle window.
  15. An electronic device, characterized by comprising:
    one or more processors;
    a memory for storing executable instructions;
    wherein the one or more processors are configured to invoke the executable instructions stored in the memory to perform the method according to any one of claims 1 to 13.
  16. A computer-readable storage medium having computer program instructions stored thereon, characterized in that, when the computer program instructions are executed by a processor, the method according to any one of claims 1 to 13 is implemented.
  17. A computer program product, characterized by comprising computer-readable code, or a non-volatile computer-readable storage medium carrying computer-readable code, wherein when the computer-readable code runs in an electronic device, a processor in the electronic device performs the method according to any one of claims 1 to 13.
PCT/CN2021/126895 2021-06-30 2021-10-28 Dangerous action identifying method and apparatus, electronic device, and storage medium WO2023273060A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2023544368A JP2024506809A (en) 2021-06-30 2021-10-28 Methods and devices for identifying dangerous acts, electronic devices, and storage media

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110735201.6 2021-06-30
CN202110735201.6A CN113486759B (en) 2021-06-30 2021-06-30 Dangerous action recognition method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
WO2023273060A1 (en)

Family

ID=77936973

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/126895 WO2023273060A1 (en) 2021-06-30 2021-10-28 Dangerous action identifying method and apparatus, electronic device, and storage medium

Country Status (3)

Country Link
JP (1) JP2024506809A (en)
CN (1) CN113486759B (en)
WO (1) WO2023273060A1 (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113486759B (en) * 2021-06-30 2023-04-28 上海商汤临港智能科技有限公司 Dangerous action recognition method and device, electronic equipment and storage medium
CN113955594B (en) * 2021-10-18 2024-02-27 日立楼宇技术(广州)有限公司 Elevator control method and device, computer equipment and storage medium
CN116039554A (en) * 2023-01-17 2023-05-02 江铃汽车股份有限公司 In-vehicle rear-row safety monitoring method and device, readable storage medium and electronic equipment

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170243067A1 (en) * 2016-02-22 2017-08-24 Xerox Corporation Side window detection through use of spatial probability maps
CN110399767A (en) * 2017-08-10 2019-11-01 北京市商汤科技开发有限公司 Occupant's dangerous play recognition methods and device, electronic equipment, storage medium
CN110969130A (en) * 2019-12-03 2020-04-07 厦门瑞为信息技术有限公司 Driver dangerous action identification method and system based on YOLOV3
CN111931639A (en) * 2020-08-07 2020-11-13 上海商汤临港智能科技有限公司 Driver behavior detection method and device, electronic equipment and storage medium
CN113486759A (en) * 2021-06-30 2021-10-08 上海商汤临港智能科技有限公司 Dangerous action recognition method and device, electronic equipment and storage medium

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107679490B (en) * 2017-09-29 2019-06-28 百度在线网络技术(北京)有限公司 Method and apparatus for detection image quality
CN111301280A (en) * 2018-12-11 2020-06-19 北京嘀嘀无限科技发展有限公司 Dangerous state identification method and device
CN110838125B (en) * 2019-11-08 2024-03-19 腾讯医疗健康(深圳)有限公司 Target detection method, device, equipment and storage medium for medical image
CN112001348A (en) * 2020-08-31 2020-11-27 上海商汤临港智能科技有限公司 Method and device for detecting passenger in vehicle cabin, electronic device and storage medium
CN112906617B (en) * 2021-03-08 2023-05-16 济南中凌电子科技有限公司 Method and system for identifying abnormal behavior of driver based on hand detection
CN112926510A (en) * 2021-03-25 2021-06-08 深圳市商汤科技有限公司 Abnormal driving behavior recognition method and device, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN113486759A (en) 2021-10-08
CN113486759B (en) 2023-04-28
JP2024506809A (en) 2024-02-15


Legal Events

Date Code Title Description
WWE Wipo information: entry into national phase

Ref document number: 2023544368

Country of ref document: JP

NENP Non-entry into the national phase

Ref country code: DE