CN113743290A - Method and device for sending information to emergency call center for vehicle - Google Patents

Info

Publication number
CN113743290A
Authority
CN
China
Prior art keywords
detection
bleeding
cabin
image information
occupant
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111016361.1A
Other languages
Chinese (zh)
Inventor
邵昌旭
许亮
李轲
王飞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Sensetime Lingang Intelligent Technology Co Ltd
Original Assignee
Shanghai Sensetime Lingang Intelligent Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Sensetime Lingang Intelligent Technology Co Ltd
Priority to CN202111016361.1A
Publication of CN113743290A
Priority to KR1020247009195A
Priority to PCT/CN2022/078010 (WO2023029407A1)


Classifications

    • G08B25/10 — Alarm systems in which the location of the alarm condition is signalled to a central station, using wireless transmission systems
    • A61B5/021 — Measuring pressure in heart or blood vessels
    • A61B5/024 — Detecting, measuring or recording pulse rate or heart rate
    • A61B5/0816 — Measuring devices for examining respiratory frequency
    • A61B5/1116 — Determining posture transitions
    • A61B5/1128 — Measuring movement of the body or parts thereof using image analysis
    • A61B5/4504 — Evaluating the musculoskeletal system; bones
    • A61B5/7275 — Determining trends in physiological measurement data; predicting development of a medical condition
    • A61B5/747 — Arrangements for interactive communication between patient and care services in case of emergency, i.e. alerting emergency services
    • G06T7/11 — Image analysis; region-based segmentation
    • G06T7/62 — Analysis of geometric attributes of area, perimeter, diameter or volume
    • G06T7/90 — Determination of colour characteristics
    • G06V10/422 — Global feature extraction by analysis of the whole pattern, for representing the structure or shape of an object
    • G06V10/56 — Extraction of image or video features relating to colour
    • G06V20/59 — Context or environment of the image inside of a vehicle, e.g. relating to seat occupancy, driver state or inner lighting conditions
    • G06V40/10 — Human or animal bodies, e.g. vehicle occupants or pedestrians; body parts, e.g. hands
    • G06V40/161 — Human faces: detection; localisation; normalisation
    • G08B21/02 — Alarms for ensuring the safety of persons
    • H04W4/40 — Services specially adapted for vehicles, e.g. vehicle-to-pedestrians [V2P]
    • H04W4/90 — Services for handling of emergency or hazardous situations, e.g. earthquake and tsunami warning systems [ETWS]
    • G06T2207/30104 — Indexing scheme: vascular flow; blood flow; perfusion

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Public Health (AREA)
  • General Health & Medical Sciences (AREA)
  • Veterinary Medicine (AREA)
  • Biomedical Technology (AREA)
  • Pathology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Medical Informatics (AREA)
  • Molecular Biology (AREA)
  • Surgery (AREA)
  • Animal Behavior & Ethology (AREA)
  • Biophysics (AREA)
  • Physiology (AREA)
  • Emergency Management (AREA)
  • Business, Economics & Management (AREA)
  • Multimedia (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Cardiology (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Dentistry (AREA)
  • Geometry (AREA)
  • Pulmonology (AREA)
  • Human Computer Interaction (AREA)
  • Environmental & Geological Engineering (AREA)
  • Artificial Intelligence (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Rheumatology (AREA)
  • Orthopedic Medicine & Surgery (AREA)
  • Vascular Medicine (AREA)
  • Psychiatry (AREA)
  • Nursing (AREA)
  • Emergency Medicine (AREA)

Abstract

The present disclosure relates to a method and apparatus for a vehicle to send information to an emergency call center, an electronic device, and a storage medium. The method includes: acquiring image information of an occupant in the cabin in response to an emergency call being triggered; detecting a bleeding condition of the occupant in the cabin based on the image information; and, in response to detecting a bleeding condition, sending the bleeding condition to an emergency call center.

Description

Method and device for sending information to emergency call center for vehicle
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to a method and an apparatus for sending information to an emergency call center for a vehicle, an electronic device, and a storage medium.
Background
During road transportation, automobiles may be involved in traffic accidents. If rescue workers can obtain accident information promptly and carry out a rescue, the parties to the accident can be rescued in time, reducing property loss and casualties.
To enable rescue workers to learn of accident information in time, a vehicle-mounted emergency call (eCall) system can be integrated into the automobile; the eCall system is a typical application of the Internet of Vehicles. Based on technologies such as automotive sensing, mobile communication, and satellite positioning, the system contacts a public rescue center immediately after an accident occurs and automatically sends the vehicle's position and vehicle information to the rescue center, which rescues the accident victims after confirming the accident.
However, with the conventional emergency call function it is difficult to determine the occupants' injuries after an accident occurs; the emergency call alone cannot confirm the injury condition of the occupants in the vehicle.
Disclosure of Invention
The present disclosure provides a technical solution for information transmission.
According to an aspect of the present disclosure, there is provided a method for a vehicle to transmit information to an emergency call center, including:
acquiring image information of passengers in the cabin in response to the triggering of the emergency call;
detecting the bleeding condition of the passenger in the cabin based on the image information;
in response to detecting a bleeding condition, sending the bleeding condition to an emergency call center.
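The three claimed steps form a simple trigger-acquire-detect-send pipeline. The sketch below illustrates that control flow; the callable names (`acquire_image`, `detect_bleeding`, `send_to_center`) and the `BleedingReport` structure are hypothetical stand-ins for illustration, not part of the disclosure.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class BleedingReport:
    detected: bool
    severity: float  # 0.0 (none) .. 1.0 (severe)

def handle_emergency_call(
    acquire_image: Callable[[], object],
    detect_bleeding: Callable[[object], BleedingReport],
    send_to_center: Callable[[BleedingReport], None],
) -> Optional[BleedingReport]:
    """Run the three claimed steps: acquire, detect, conditionally send."""
    image = acquire_image()          # step 1: cabin image on eCall trigger
    report = detect_bleeding(image)  # step 2: bleeding detection from the image
    if report.detected:              # step 3: send only when bleeding is found
        send_to_center(report)
        return report
    return None
```

Note that the report is transmitted only in response to a positive detection, matching the conditional third step of the claim.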
In one possible implementation manner, the detecting a bleeding condition of an occupant in a cabin based on the image information includes:
carrying out face detection and/or human body detection on the image information to determine passengers in the cabin;
and carrying out blood detection on the face and/or the body surface of the passenger to determine the bleeding condition of the passenger in the cabin.
In one possible implementation manner, the detecting a bleeding condition of an occupant in a cabin based on the image information includes:
based on the color information of blood and the shape information of blood flow, whether the occupant bleeds is detected based on the image information.
In one possible implementation manner, the detecting a bleeding condition of an occupant in a cabin based on the image information includes:
detecting a body surface area of the passenger in the cabin based on the image information;
dividing a body surface area of the occupant into a plurality of detection areas;
detecting blood information in each detection area to obtain an area detection result of each detection area;
determining a bleeding situation of the occupant based on a region detection result of each of the detection regions.
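The region-wise scheme above can be sketched as a grid split of the body-surface mask followed by per-region detection and aggregation. The grid size, the per-region statistic (blood-pixel fraction), and the aggregation rule are assumptions for illustration only.

```python
import numpy as np

def split_regions(mask: np.ndarray, rows: int = 3, cols: int = 3):
    """Divide the body-surface mask into a rows x cols grid of detection regions."""
    h, w = mask.shape
    ys = np.linspace(0, h, rows + 1, dtype=int)
    xs = np.linspace(0, w, cols + 1, dtype=int)
    return {(i, j): mask[ys[i]:ys[i + 1], xs[j]:xs[j + 1]]
            for i in range(rows) for j in range(cols)}

def region_results(blood: np.ndarray, rows: int = 3, cols: int = 3):
    """Per-region detection result: fraction of blood-coloured pixels in the region."""
    return {k: float(v.mean()) for k, v in split_regions(blood, rows, cols).items()}

def occupant_bleeding(results, per_region_thresh: float = 0.05) -> bool:
    """Aggregate: the occupant is flagged if any region exceeds the threshold."""
    return any(f >= per_region_thresh for f in results.values())
```

Splitting into regions localizes the evidence, which also supports the per-region confidence scheme described below in the disclosure.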
In one possible implementation, the detecting a body surface area of the occupant in the cabin based on the image information includes:
detecting a face surface area of an occupant in the cabin based on the image information;
the dividing of the body surface area of the occupant into a plurality of detection areas includes:
dividing the face surface area of the occupant into a plurality of detection regions.
In a possible implementation manner, the detecting blood information in each detection area to obtain an area detection result of each detection area includes:
determining a first confidence level that a bleeding condition exists in each detection region based on the shape and area of the blood flow in each detection region;
determining whether connected blood flow exists between every two adjacent detection areas;
in response to determining that blood flow in a first detection region is connected, across the shared border, to blood flow in an adjacent second detection region, raising the confidence levels of both the first detection region and the second detection region to a second confidence level;
the determining a bleeding situation of the occupant based on the region detection results of the respective detection regions includes:
determining that the occupant is bleeding if the first confidence or the second confidence exceeds a confidence threshold;
determining a severity of bleeding based on the area of blood flow in each of the detection regions, the severity of bleeding being positively correlated with the sum of the areas of blood flow in the respective detection regions.
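The confidence scheme can be sketched as follows: a first confidence grows with the blood-flow area in a region; when connected flow crosses a border between adjacent regions, both regions are raised to a higher second confidence; the occupant is judged to be bleeding when any confidence exceeds a threshold, and severity is made positively correlated with the total blood-flow area. All numeric constants here are illustrative assumptions.

```python
def first_confidence(area: float, region_area: float) -> float:
    """First confidence: grows with the blood-flow area inside a region (sketch)."""
    return min(1.0, area / (0.2 * region_area))

def assess(regions, adjacency, region_area: float = 100.0,
           second_confidence: float = 0.9, threshold: float = 0.8):
    """regions: {region id: blood-flow area}; adjacency: pairs of region ids whose
    blood flows are connected across the shared border."""
    conf = {rid: first_confidence(a, region_area) for rid, a in regions.items()}
    for a, b in adjacency:  # connected flow across a border raises both regions
        conf[a] = max(conf[a], second_confidence)
        conf[b] = max(conf[b], second_confidence)
    bleeding = any(c > threshold for c in conf.values())
    total_area = sum(regions.values())
    # severity is positively correlated with the summed blood-flow area
    severity = min(1.0, total_area / (len(regions) * 0.5 * region_area))
    return bleeding, severity, conf
```

The border check rewards evidence that is spatially consistent: a thin streak crossing two regions is stronger evidence of real blood flow than two unrelated red patches.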
In one possible implementation manner, the detecting a bleeding condition of an occupant in a cabin based on the image information includes:
in response to detecting bleeding of an occupant within the cabin based on the image information, determining a body part of the bleeding and a direction of the blood flow;
based on the body part from which blood flows and the direction of blood flow, the body part at which the starting end of blood flow is located is taken as the bleeding part.
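Because blood flows downward under gravity in a seated occupant's image, the body part at the upstream (starting) end of the flow can be taken as the source. The sketch below assumes a hypothetical top-to-bottom ordering of body parts; the part list and the direction handling are illustrative.

```python
# Body parts ordered top-to-bottom in the image (illustrative ordering).
BODY_ORDER = ["head", "face", "neck", "torso", "arm", "leg"]

def bleeding_source(parts_with_blood, flow_direction: str = "down"):
    """Return the body part at the starting end of the blood flow: for a downward
    flow, the highest part containing blood; for an upward flow, the lowest."""
    ordered = BODY_ORDER if flow_direction == "down" else list(reversed(BODY_ORDER))
    for part in ordered:
        if part in parts_with_blood:
            return part
    return None
```

For example, blood detected on both torso and leg with a downward flow points to a torso wound rather than a leg wound.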
In one possible implementation, the method further includes:
determining the body posture of the passenger in the cabin according to the image information;
and, in a case where the body posture is a preset abnormal body posture and the duration of the abnormal body posture exceeds a set duration, determining that the occupant in the cabin is in an abnormal body posture.
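The duration requirement can be sketched as a small stateful monitor: a posture is flagged as abnormal only after it persists beyond a hold time, which filters out transient poses such as briefly bending down. The posture labels and the 5-second hold are illustrative assumptions.

```python
class PostureMonitor:
    """Flag an abnormal posture only after it persists beyond `hold_s` seconds."""

    def __init__(self, abnormal_postures, hold_s: float = 5.0):
        self.abnormal = set(abnormal_postures)
        self.hold_s = hold_s
        self._since = None  # timestamp at which the abnormal posture began

    def update(self, posture: str, t: float) -> bool:
        """Feed one classified posture with its timestamp; return True once the
        abnormal posture has been held for at least `hold_s` seconds."""
        if posture not in self.abnormal:
            self._since = None  # posture recovered: reset the timer
            return False
        if self._since is None:
            self._since = t
        return (t - self._since) >= self.hold_s
```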
In one possible implementation, the method further includes:
and under the condition that the body posture of the passenger in the cabin is determined to be the preset fracture posture, determining that the fracture condition exists in the passenger in the cabin.
In one possible implementation, the method further includes:
determining vital sign indicators of the occupant based on the image information, the vital sign indicators including at least one of:
respiratory rate, blood pressure, heart rate;
and sending the vital sign indexes to an emergency call center.
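The disclosure does not specify how vital signs are derived from the image; one common camera-based approach for heart rate is remote photoplethysmography (rPPG), in which the pulse modulates the mean skin colour of the face region. The following is only a simplified sketch of that idea: the dominant frequency of the mean green-channel signal within a plausible cardiac band is converted to beats per minute.

```python
import numpy as np

def heart_rate_bpm(green_means: np.ndarray, fps: float) -> float:
    """Estimate heart rate from the mean green-channel intensity of the face
    region over time (simplified rPPG): pick the dominant frequency in the
    0.7-3.0 Hz band (about 42-180 bpm)."""
    x = green_means - green_means.mean()          # remove the DC component
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fps)  # frequency axis in Hz
    spectrum = np.abs(np.fft.rfft(x))
    band = (freqs >= 0.7) & (freqs <= 3.0)
    peak = freqs[band][np.argmax(spectrum[band])]
    return 60.0 * peak
```

A real system would need motion compensation and illumination normalization; this sketch only shows the spectral-peak step.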
In one possible implementation, the method further includes:
determining the injury severity level of the passenger in the cabin based on at least one of the determined bleeding condition, abnormal body posture and vital sign indexes of the passenger;
transmitting the injury severity level to an emergency call center.
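Fusing the available cues into a coarse injury severity level can be sketched as a weighted score with cut-offs. The weights, the normal ranges for heart and breathing rate, and the four-level scale are illustrative assumptions, not values from the disclosure.

```python
def injury_severity_level(bleeding_severity: float = 0.0,
                          abnormal_posture: bool = False,
                          heart_rate: float = None,
                          breathing_rate: float = None) -> int:
    """Fuse bleeding, posture, and vital-sign cues into a level 0 (none) .. 3
    (critical). Missing vital signs simply contribute nothing."""
    score = 2.0 * bleeding_severity  # bleeding weighted most heavily
    if abnormal_posture:
        score += 1.0
    if heart_rate is not None and not (50 <= heart_rate <= 120):
        score += 1.0
    if breathing_rate is not None and not (8 <= breathing_rate <= 25):
        score += 1.0
    if score >= 3.0:
        return 3
    if score >= 2.0:
        return 2
    return 1 if score > 0.0 else 0
```

Sending a single coarse level keeps the message compact for the call center while still supporting triage-style dispatch.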
According to an aspect of the present disclosure, there is provided an apparatus for transmitting information to an emergency call center for a vehicle, including:
the image information acquisition unit is used for responding to the triggering of the emergency call and acquiring the image information of the passengers in the cabin;
the bleeding condition detection unit is used for detecting the bleeding condition of passengers in the cabin based on the image information;
and the bleeding condition sending unit is used for responding to the detected bleeding condition and sending the bleeding condition to the emergency call center.
In one possible implementation, the bleeding condition detecting unit includes:
the passenger detection subunit is used for carrying out face detection and/or human body detection on the image information to determine passengers in the cabin;
and the first bleeding condition determining subunit is used for detecting blood on the face and/or body surface of the passenger and determining the bleeding condition of the passenger in the cabin.
In one possible implementation manner, the bleeding condition detecting unit is configured to detect whether the occupant bleeds based on the image information based on color information of blood and shape information of blood flow.
In one possible implementation, the bleeding condition detecting unit includes:
a body surface region detection subunit, configured to detect a body surface region of the passenger in the cabin based on the image information;
a detection region dividing unit for dividing a body surface region of the occupant into a plurality of detection regions;
the area detection result determining subunit is used for detecting blood information in each detection area to obtain an area detection result of each detection area;
a second bleeding condition determining subunit configured to determine a bleeding condition of the occupant based on the region detection results of the respective detection regions.
In a possible implementation manner, the body surface region detection subunit is configured to detect a face surface region of an occupant in the cabin based on the image information;
the detection region dividing unit is used for dividing the human face surface region of the passenger into a plurality of detection regions.
In a possible implementation manner, the region detection result determining subunit is configured to determine, based on the shape and the area of the blood flow in each detection region, a first confidence level that a bleeding condition exists in each detection region; determine whether connected blood flow exists between every two adjacent detection regions; and, in response to determining that blood flow in a first detection region is connected, across the shared border, to blood flow in an adjacent second detection region, raise the confidence levels of both the first detection region and the second detection region to a second confidence level;
the second bleeding condition determining subunit is configured to determine that the occupant bleeds if the first confidence level or the second confidence level exceeds a confidence level threshold; determining a severity of bleeding based on the area of blood flow in each of the detection regions, the severity of bleeding being positively correlated with the sum of the areas of blood flow in the respective detection regions.
In one possible implementation, the bleeding condition detecting unit includes:
a blood flow detecting subunit, configured to determine the bleeding body part and the direction of the blood flow in response to detecting, based on the image information, that an occupant in the cabin is bleeding;
and a bleeding part determining subunit, configured to take the body part where the starting end of the blood flow is located as the bleeding part, based on the bleeding body part and the direction of the blood flow.
In one possible implementation, the apparatus further includes:
the body posture determining unit is used for determining the body posture of the passenger in the cabin according to the image information;
and the abnormal body posture determining unit is used for determining the body posture of the passenger in the cabin as the abnormal body posture under the condition that the body posture is the preset abnormal body posture and the duration of the abnormal body posture exceeds the set duration.
In one possible implementation, the apparatus further includes:
and the fracture condition detection unit is used for determining that the fracture condition exists in the passenger in the cabin under the condition that the body posture of the passenger in the cabin is determined to be the preset fracture posture.
In one possible implementation, the apparatus further includes:
a vital sign indicator determination unit configured to determine a vital sign indicator of the occupant based on the image information, the vital sign indicator including at least one of:
respiratory rate, blood pressure, heart rate;
and the vital sign index sending unit is used for sending the vital sign indexes to an emergency call center.
In one possible implementation, the apparatus further includes:
an injury severity level determination unit for determining an injury severity level of an occupant in the cabin based on at least one of the determined bleeding condition, abnormal body posture, vital sign indicators of the occupant;
an injury severity level transmitting unit for transmitting the injury severity level to an emergency call center.
According to an aspect of the present disclosure, there is provided an electronic device including: a processor; a memory for storing processor-executable instructions; wherein the processor is configured to invoke the memory-stored instructions to perform the above-described method.
According to an aspect of the present disclosure, there is provided a computer readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement the above-described method.
In the disclosed embodiments, in response to an emergency call being triggered, image information of an occupant in the cabin is acquired; a bleeding condition of the occupant is detected based on the image information; and, in response to detecting a bleeding condition, the bleeding condition is sent to an emergency call center. In this way, the injury condition of the occupants in the accident can be determined, so that the emergency call center can reasonably dispatch rescue forces according to the occupants' injuries, and can shorten or omit the call-center operator's inquiry process in scenarios with severe bleeding, racing against the clock to carry out the rescue.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure. Other features and aspects of the present disclosure will become apparent from the following detailed description of exemplary embodiments, which proceeds with reference to the accompanying drawings.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the disclosure.
Fig. 1 illustrates a flow chart of an information transmission method according to an embodiment of the present disclosure;
fig. 2 shows a block diagram of an information transmitting apparatus according to an embodiment of the present disclosure;
FIG. 3 shows a block diagram of an electronic device in accordance with an embodiment of the disclosure;
fig. 4 shows a block diagram of an electronic device in accordance with an embodiment of the disclosure.
Detailed Description
Various exemplary embodiments, features and aspects of the present disclosure will be described in detail below with reference to the accompanying drawings. In the drawings, like reference numbers can indicate functionally identical or similar elements. While the various aspects of the embodiments are presented in drawings, the drawings are not necessarily drawn to scale unless specifically indicated.
The word "exemplary" is used herein to mean "serving as an example, instance, or illustration." Any embodiment described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments.
The term "and/or" herein merely describes an association between associated objects, indicating that three relationships may exist; for example, "A and/or B" may mean: A exists alone, A and B exist simultaneously, or B exists alone. In addition, the term "at least one" herein means any one of a plurality, or any combination of at least two of a plurality; for example, "including at least one of A, B, and C" may mean including any one or more elements selected from the set consisting of A, B, and C.
Furthermore, in the following detailed description, numerous specific details are set forth in order to provide a better understanding of the present disclosure. It will be understood by those skilled in the art that the present disclosure may be practiced without some of these specific details. In some instances, methods, means, elements and circuits that are well known to those skilled in the art have not been described in detail so as not to obscure the present disclosure.
As described in the background, the emergency call service can reduce rescue time and reduce the mortality of people rescued from a vehicle accident. However, in the related art, it is difficult to determine the degree of damage after an accident occurs, let alone the casualty situation of the people in the vehicle, so the rescue center cannot reasonably dispatch rescue forces even when emergency calls are numerous.
In the disclosed embodiments, in response to an emergency call being triggered, image information of an occupant in the cabin is acquired; a bleeding condition of the occupant is detected based on the image information; and, in response to detecting a bleeding condition, the bleeding condition is sent to an emergency call center. In this way, the injury condition of the occupants in the accident can be determined, so that the emergency call center can reasonably dispatch rescue forces according to the occupants' injuries, and can shorten or omit the call-center operator's inquiry process in scenarios with severe bleeding, racing against the clock to carry out the rescue.
In one possible implementation, the execution subject of the method may be an intelligent driving control device installed on a vehicle. In one possible implementation, the method may be performed by a terminal device or a server or other processing device. The terminal device may be a vehicle-mounted device, a User Equipment (UE), a mobile device, a User terminal, a cellular phone, a cordless phone, a Personal Digital Assistant (PDA), a handheld device, a computing device, or a wearable device. The vehicle-mounted device may be a vehicle or a domain controller in a vehicle cabin, or may be a device host used for executing an information sending method in an ADAS (Advanced Driving Assistance System), an OMS (Occupant Monitoring System), or a DMS (Driver Monitoring System). In some possible implementations, the information sending method may be implemented by a processor calling computer readable instructions stored in a memory.
For convenience of description, in one or more embodiments of the present specification, an execution subject of the method for sending information to an emergency call center for a vehicle may be an in-vehicle device in the vehicle, and hereinafter, an embodiment of the method will be described by taking the execution subject as an in-vehicle device as an example. It is understood that the method is carried out by the vehicle-mounted device only for illustrative purposes, and is not to be construed as limiting the method. Fig. 1 shows a flowchart of a method for a vehicle to transmit information to an emergency call center according to an embodiment of the present disclosure, and as shown in fig. 1, the method for the vehicle to transmit information to the emergency call center includes:
in step S11, in response to the emergency call being triggered, acquiring image information of the occupant in the cabin;
the image information here is image information of occupants in a vehicle cabin, and the vehicle here may be at least one of private cars, shared cars, ride-hailing cars, taxis, trucks and other types of vehicles; the present disclosure does not limit the specific type of vehicle.
The image information here may be image information of an area where an occupant is located in the cabin, and the image information may be acquired by an on-board image acquisition device provided in or outside the cabin of the vehicle, and the on-board image acquisition device may be an on-board camera or an image acquisition device provided with a camera. The camera can be a camera for collecting image information inside the vehicle or a camera for collecting image information outside the vehicle.
For example, the camera may include a camera in the DMS and/or a camera in the OMS, etc., which may be used to capture image information of the interior of the vehicle; the camera may also include a camera in the ADAS, which may be used to collect image information outside the vehicle. Of course, the vehicle-mounted image capturing device may also be a camera in other systems, or may also be a separately configured camera, and the embodiment of the present disclosure does not limit the specific vehicle-mounted image capturing device.
The carrier of the image information can be a two-dimensional image or video, for example a visible light image/video or an infrared image/video; it may also be a three-dimensional image formed from a point cloud scanned by a radar, and the like, which may be determined according to the actual application scenario, and the present disclosure does not limit this.
The image information collected by the vehicle-mounted image capturing device can be acquired through a communication connection established with the vehicle-mounted image capturing device. In one example, the vehicle-mounted image capturing device may transmit the captured image information to the vehicle-mounted controller or a remote server through a bus or a wireless communication channel in real time, and the vehicle-mounted controller or the remote server may receive the real-time image information through the bus or the wireless communication channel.
In step S12, a bleeding condition of the passenger in the cabin is detected based on the image information;
the presence of blood on the occupant may be detected based on image processing techniques to determine whether a bleeding condition exists in the occupant. In an example, the bleeding condition of the passenger may be detected through a neural network, or the blood in the image may also be detected through a target detection technology such as threshold segmentation, so as to detect the bleeding condition of the passenger in the cabin.
In step S13, in response to detecting a bleeding condition, the bleeding condition is sent to an emergency call center.
The bleeding condition sent to the emergency call center may take various specific forms, for example, whether a bleeding condition exists for the occupant in the cabin; alternatively, where bleeding exists, more specific bleeding information may be sent, e.g., the specific location where bleeding occurs, the severity of bleeding, etc.
Therefore, by sending the bleeding situation to the emergency call center, the emergency call center can determine whether the bleeding situation exists on the emergency call initiator, and provide targeted rescue measures such as carrying corresponding hemostatic supplies, transfusion supplies, and assigning doctors for treating the bleeding situation when the bleeding situation is determined to exist.
In the disclosed embodiment, in response to an emergency call being triggered, image information of an occupant in the cabin is acquired, a bleeding condition of the occupant in the cabin is detected based on the image information, and then, in response to detecting the bleeding condition, the bleeding condition is sent to an emergency call center. In this way, the injury condition of the occupants in an accident can be determined, so that the emergency call center can reasonably schedule rescue forces according to the injury condition of the occupants, shorten or omit the inquiry process of the call center personnel in scenes where the bleeding condition is serious, and race against time in the rescue.
In one possible implementation manner, the detecting a bleeding condition of an occupant in a cabin based on the image information includes: carrying out face detection and/or human body detection on the image information to determine passengers in the cabin; and carrying out blood detection on the face and/or the body surface of the passenger to determine the bleeding condition of the passenger in the cabin.
After the image information in the cabin is obtained, human body detection and/or human face detection can be performed on the cabin based on the image information to obtain human body detection results and/or human face detection results in the cabin, and passenger detection results in the cabin can be obtained based on the human body detection results and/or the human face detection results in the cabin. For example, the human body detection result and/or the human face detection result in the cabin may be used as the passenger detection result in the cabin. For another example, the human body detection result and/or the human face detection result in the cabin may be processed to obtain the passenger detection result in the cabin.
In the embodiment of the disclosure, human body detection and/or human face detection is performed on the cabin to obtain a human body detection result and/or a human face detection result in the cabin, wherein the detection result comprises position information of a human body and/or a human face. For example, in the case where one occupant is detected, the occupant detection result includes position information of the occupant; in the case where a plurality of occupants are detected, the occupant detection result may include position information of the detected individual occupants.
The position information of the occupant may be represented using position information of a bounding box of the occupant. Then, blood is detected for the occupant in the image framed by the bounding box. Alternatively, the position information of the occupant may be represented by position information of a boundary contour of the occupant, and then blood of the occupant may be detected in an image surrounded by the boundary contour.
Based on the position of the face of the passenger, which can be obtained by face detection, under the condition that the face is detected, the face of the passenger can be subjected to blood detection to determine the bleeding condition of the passenger in the cabin; the position of the human body of the passenger can be obtained based on human body detection, and when the human body is detected, blood detection can be carried out on the body surface of the passenger to determine the bleeding condition of the passenger in the cabin.
In the embodiment of the disclosure, the occupant in the cabin is determined by performing face detection and/or human body detection on the image information, and then blood detection is performed on the face and/or body surface of the occupant to determine the bleeding condition of the occupant in the cabin. Performing face detection and/or human body detection first and restricting blood detection to the occupant's face and/or body surface narrows the image range for the subsequent blood detection, which improves detection efficiency, reduces the interference of areas other than the occupant's body on the blood detection, and improves the accuracy of the bleeding condition detection.
In particular, the blood detection may be performed based on color information of blood and shape information of blood flow, and then, in a possible implementation, the detecting the bleeding of the passenger in the cabin based on the image information includes: based on the color information of blood and the shape information of blood flow, whether the occupant bleeds is detected based on the image information.
After an occupant bleeds, the color of the blood is often bright red. In a computer, color is expressed by defining color parameters in a color space; a common color space is the red-green-blue (RGB) color space, and in addition, color spaces such as HSL, LMS, CMYK, CIE YUV, HSB (HSV) and YCbCr exist in the related art based on standards established by the Commission Internationale de l'Éclairage (CIE) standard colorimetry system. Color spaces take various forms, different color spaces can have different characteristics, and the color parameters of different color spaces can be converted into one another.
According to the definition of the color parameters in different color spaces, corresponding color parameters can be parsed from the image information. When an image is stored in a computer, the default color space of most images is the RGB color space, which is divided into three color components of red (R), green (G) and blue (B), with each color component taking a value in the range 0-255. When the computer reads the image information from a storage medium through digital image processing techniques, the values of the three color components of each pixel point in the default color space can be obtained, i.e., the color parameters of the image information in the default color space.
Each pixel point has a color parameter, and different color parameters represent different colors, so the color of a pixel point can be determined based on its color parameter. For the color of blood, the color parameter may be a range; in an example, taking the RGB color space, the range of the color parameter of blood may be represented as (150,0,0)-(250,50,50). When the color parameter of a pixel point falls within this range, the pixel point may be regarded as having the color of blood, and verification is then performed according to the shape of the blood.
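By way of illustration only (not part of the disclosed method), the per-pixel range check described above can be sketched in Python as follows; the function name is an assumption, and the range values are the example values (150,0,0)-(250,50,50) given above:

```python
import numpy as np

# Hypothetical per-channel bounds for blood-colored pixels,
# taken from the example RGB range (150,0,0)-(250,50,50).
BLOOD_LOW = np.array([150, 0, 0])
BLOOD_HIGH = np.array([250, 50, 50])

def blood_color_mask(image_rgb: np.ndarray) -> np.ndarray:
    """Return a boolean mask marking pixels whose RGB values fall
    inside the blood-color range (per-channel inclusive comparison)."""
    return np.all((image_rgb >= BLOOD_LOW) & (image_rgb <= BLOOD_HIGH), axis=-1)
```

A library routine such as OpenCV's `cv2.inRange` performs the same per-channel comparison; the mask would then be verified against the shape of blood flow as described below.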
In addition, detection of the bleeding condition can be implemented based on neural network technology. In image processing through a neural network, image features are often extracted through image convolution operations over the pixel values of the image; the color information of blood is thereby represented as high-dimensional features, and the neural network can determine which high-dimensional features belong to blood colors so as to detect the bleeding condition.
Because the color of attachments such as the occupant's clothes and accessories can be similar to the color of blood, whether the occupant bleeds can be further determined according to the shape of the blood flow, where the expected shape of blood flow can be learned from the shape of real blood flow in images.
Blood detection can be implemented with a deep neural network; the neural network for performing blood detection may be, for example, an attention-based Seq2Seq model, a TensorFlow model, or the like. The neural network may be an already trained network, or it may be trained, according to the characteristics of image information of occupant bleeding, on an image data set whose image content contains occupant bleeding conditions; by annotating the bleeding areas in the image data set and training the neural network on them, the neural network achieves higher accuracy when detecting blood and can accurately detect blood on the face and/or body surface of the occupant.
In the embodiment of the present disclosure, by detecting whether the passenger bleeds based on the image information based on the color information of blood and the shape information of blood flow, the bleeding situation of the passenger in the cabin can be accurately detected.
In one possible implementation manner, the detecting a bleeding condition of an occupant in a cabin based on the image information includes: detecting a body surface area of the passenger in the cabin based on the image information; dividing a body surface area of the occupant into a plurality of detection areas; detecting blood information in each detection area to obtain an area detection result of each detection area; determining a bleeding situation of the occupant based on a region detection result of each of the detection regions.
Based on the image information, a body surface area of the in-cabin occupant can be detected, which can be represented based on coordinates in the image, and for which a plurality of detection areas can be divided. The specific way of dividing the detection region may be various, for example, the body surface region may be divided by a mesh, and the division way may divide the exposed part of the body surface (for example, a human face, a neck, etc.) into meshes with the same area; for another example, the body surface region may be divided according to a human body part in the body surface, arms of the body surface may be divided into a detection region, a chest may be divided into a detection region, and an abdomen may be divided into a detection region, and so on. In addition, the dividing method of the human body surface area can be various, and the disclosure does not limit the dividing method.
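As a minimal sketch of the equal-area mesh division described above, assuming the detected body surface region is given as a bounding box in image coordinates; the function name and the row/column counts are hypothetical:

```python
def divide_into_grid(x0, y0, x1, y1, rows, cols):
    """Split a body-surface bounding box into rows*cols equal-area
    rectangular detection regions, returned as (x0, y0, x1, y1) tuples
    in row-major order."""
    w = (x1 - x0) / cols
    h = (y1 - y0) / rows
    return [
        (x0 + c * w, y0 + r * h, x0 + (c + 1) * w, y0 + (r + 1) * h)
        for r in range(rows)
        for c in range(cols)
    ]
```

Division by body part (arm, chest, abdomen) would instead derive each region's box from the detected human body key points.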
After obtaining a plurality of detection areas, blood information may be detected in each detection area, and a specific way of detecting blood information may refer to possible implementation manners provided by the present disclosure, for example, blood may be detected according to a blood color and a blood flow shape, or blood may be detected by a trained neural network, which is not described herein again.
After blood information is detected in each detection area, the area detection result of each detection area can be obtained; the area detection result may be that bleeding exists or does not exist, and for the detection areas where bleeding exists, the specific position information of the blood flow within the detection area may be further detected.
Then, based on the region detection results of all detection regions, the region detection results of all detection regions can be subjected to weighted fusion to determine the bleeding condition of the passenger; or, the detection results of each region may be integrated to determine the confidence level of bleeding of the passenger and the severity of bleeding, which may be specifically referred to a possible implementation manner provided by the present disclosure, and details are not described here.
In the disclosed embodiment, the body surface area of the passenger in the cabin is detected based on the image information; dividing a body surface area of an occupant into a plurality of detection areas; by detecting blood information in each detection area and obtaining area detection results of the detection areas, the bleeding state of the occupant is specified based on the area detection results of the detection areas, and the accuracy of the specified bleeding state can be improved.
In one possible implementation, the detecting a body surface area of the occupant in the cabin based on the image information includes: detecting a face surface area of an occupant in the cabin based on the image information; the dividing of the body surface area of the occupant into a plurality of detection areas includes: dividing the face surface area of the occupant into a plurality of detection regions.
Based on the image information, a face surface area of an occupant in the cabin can be detected, which may be represented based on coordinates in the image, and the face surface area may be divided into a plurality of detection regions. The specific way of dividing the face surface region may be various; for example, the face surface region may be divided by a mesh of cells with equal area; for another example, the face surface region may be divided according to positions in the face, e.g., the forehead may be divided into one detection region, the two sides of the nose into one detection region each, the mouth and the parts below it into one detection region, and so on. The dividing mode of the face surface area can be various, and the present disclosure does not limit it.
In the embodiment of the present disclosure, the accuracy of the determined bleeding situation can be improved by detecting the face surface area of the passenger in the cabin based on the image information and then dividing the face surface area of the passenger into a plurality of detection areas, thereby determining the bleeding situation of the face of the passenger according to the area detection result of each detection area.
In a possible implementation manner, the detecting blood information in each detection area to obtain an area detection result of each detection area includes: determining a first confidence level that a bleeding condition exists in each detection region based on the shape and area of the blood flow in each detection region; determining whether connected blood flow exists between every two adjacent detection areas; in response to determining that connected blood flow exists between a first detection region and an adjacent second detection region in the detection regions, raising the confidence levels of the first detection region and the second detection region to a second confidence level; the determining a bleeding situation of the occupant based on the region detection results of the respective detection regions includes: determining that the occupant is bleeding if the first confidence or the second confidence exceeds a confidence threshold; determining a severity of bleeding based on the area of blood flow in each of the detection regions, the severity of bleeding being positively correlated with the sum of the areas of blood flow in the respective detection regions.
When blood flow is detected in the image information based on image processing techniques, the problem can be treated, based on image segmentation techniques in deep learning, as a binary classification of whether each pixel point in the image belongs to blood flow; in this way the positions of the pixel points belonging to blood flow in the detection area can be obtained, and the pixel points classified as blood flow are connected to form a blood flow region, i.e., a blood flow shape. The area of the blood flow may be the area occupied in the image by the pixel points classified as blood flow, or may be converted into the real area on the occupant's body surface.
Based on the shape and area of the blood flow in each detection region, a first confidence level of the presence of a bleeding condition in each detection region can be determined, for example based on a neural network, the first confidence level characterizing the degree of confidence that a bleeding condition is present in a single detection region. The first confidence may be positively correlated with the area of blood flow in the region, i.e., the larger the area, the higher the first confidence; and the closer the shape of the blood flow in the detection region is to the shape of real blood flow, the higher the first confidence.
After the first confidence of each detection region is obtained, whether a connected blood flow exists between the detection regions may be further determined, and specifically, the determination may be performed according to the positions of the blood determined in the detection regions, where the positions of the blood in the detection regions may be positions in the image information, and if the positions of the blood are adjacent in the image information, the presence of a connected blood flow between the detection regions may be determined.
Since there is a contiguous blood flow between the detection regions, indicating that the bleeding area of the occupant is larger than the blood flow area in a single detection region, the confidence level that the occupant has bleeding should be further increased, and therefore, in response to determining that there is a contiguous blood flow between a first detection region and an adjacent second detection region in the detection regions, the confidence levels of the plurality of first detection regions and the second detection region are increased to a second confidence level to increase the confidence level that there is a bleeding condition in the first detection region and the second detection region.
For example, the first confidence degrees of the first detection region and the second detection region are 0.6 and 0.7, respectively, and in the case where it is determined that there is a contiguous blood flow in the first detection region and the second detection region, the confidence degrees of the first detection region and the second detection region may be raised to 0.7 and 0.8, respectively, to obtain the second confidence degrees. It should be noted that, the specific magnitude of the first confidence increase may be determined according to actual requirements, and the disclosure does not specifically limit this.
Since the confidence level can represent the confidence level that the passenger has the bleeding situation, a confidence level threshold value can be preset, and the passenger bleeding is determined when the first confidence level or the second confidence level exceeds the confidence level threshold value. The specific setting of the confidence threshold can be determined according to actual requirements, and the disclosure does not specifically limit this.
Further, since a larger area of blood flow indicates a more severe degree of bleeding, in one possible implementation, the severity of bleeding may be determined based on the area of blood flow in each detection region, the severity of bleeding being positively correlated with the sum of the areas of blood flow in the respective detection regions.
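The fusion logic of this implementation, using the worked numbers from the example above (first confidences 0.6 and 0.7 raised when a contiguous blood flow joins the two regions), might be sketched as follows; the fixed boost of 0.1, the threshold of 0.75, and all names are assumptions for illustration:

```python
def fuse_region_results(confidences, areas, connected_pairs,
                        boost=0.1, threshold=0.75):
    """Fuse per-region detection results: regions joined by a
    contiguous blood flow get their confidence raised (here by a
    hypothetical fixed boost, capped at 1.0); bleeding is reported
    if any confidence exceeds the threshold, and severity grows
    with the total blood-flow area."""
    fused = list(confidences)
    for i, j in connected_pairs:          # adjacent regions sharing blood flow
        fused[i] = min(1.0, fused[i] + boost)
        fused[j] = min(1.0, fused[j] + boost)
    is_bleeding = any(c > threshold for c in fused)
    severity = sum(areas)                 # positively correlated with total area
    return is_bleeding, severity, fused
```

With the example inputs, the two confidences rise from 0.6 and 0.7 to 0.7 and 0.8, and since 0.8 exceeds the assumed threshold the occupant is reported as bleeding.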
In the embodiment of the disclosure, a first confidence coefficient of the bleeding condition of each detection area is determined based on the shape and the area of the blood flow in each detection area; determining whether connected blood flow exists between every two adjacent detection areas; in response to determining that there is blood flow bordering a first detection region of the detection regions to an adjacent second detection region, raising the confidence levels of the plurality of first detection regions and the second detection region to a second confidence level; determining that the occupant is bleeding if the first confidence or the second confidence exceeds a confidence threshold. This can improve the accuracy of the determined bleeding. In addition, the severity of bleeding is determined based on the area of the blood flow in each detection area, and the severity of bleeding is sent to the emergency call center, so that the emergency call center can know the severity of bleeding of a calling party in time, reasonably schedule rescue force and provide targeted rescue.
In one possible implementation manner, the detecting a bleeding condition of an occupant in a cabin based on the image information includes: in response to detecting bleeding of an occupant within the cabin based on the image information, determining a body part of the bleeding and a direction of the blood flow; based on the body part from which blood flows and the direction of blood flow, the body part at which the starting end of blood flow is located is taken as the bleeding part.
The body part on the occupant's surface can be determined based on human body key point detection, so that the body part where the blood is located can be determined. Because of the fluidity of blood, after a certain part bleeds the blood may span multiple body parts; the direction of the blood flow can therefore be further determined, and based on the direction of the blood flow, the body part where the starting end of the blood flow is located is taken as the bleeding part.
In the process of determining the blood flow direction, blood detection may be performed in a plurality of video frames of the image information, and the direction in which the blood flow gradually increases across these frames can be determined and used as the blood flow direction.
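A minimal sketch of estimating the blood-flow direction from per-frame blood masks, under the assumption that the centroid of the detected blood pixels drifts in the flow direction as the blood spreads across video frames; the function names are hypothetical:

```python
import numpy as np

def blood_flow_direction(masks):
    """Estimate the blood-flow direction from a time-ordered sequence
    of per-frame boolean blood masks: as blood spreads, the centroid of
    blood pixels moves in the flow direction, so the unit vector from
    the first frame's centroid to the last frame's approximates it."""
    def centroid(mask):
        ys, xs = np.nonzero(mask)
        return np.array([xs.mean(), ys.mean()])  # (x, y) order
    vec = centroid(masks[-1]) - centroid(masks[0])
    norm = np.linalg.norm(vec)
    return vec / norm if norm > 0 else vec
```

The bleeding part would then be the body part at the starting end of the flow, i.e. opposite this direction.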
In the disclosed embodiment, in response to detecting bleeding of an occupant within the cabin based on the image information, determining a body part of the bleeding and a direction of the blood flow; based on the body part from which blood flows and the direction of blood flow, the body part at which the starting end of blood flow is located is taken as the bleeding part. Therefore, the bleeding part can be accurately determined and sent to the emergency call center as the bleeding condition, so that the emergency call center can determine the bleeding part of a calling party conveniently, and rescue force can be reasonably scheduled according to the bleeding part and the bleeding severity, so that targeted rescue can be provided.
In one possible implementation, the method further includes: determining the body posture of the passenger in the cabin according to the image information; and under the condition that the body posture is a preset abnormal body posture and the duration of the abnormal body posture exceeds a set duration, determining the body posture of the passenger in the cabin as the abnormal body posture.
In the disclosed embodiment, the body posture of the occupant in the cabin can be determined based on the image information, for example by an image recognition technique. The body posture detection may be based on human body key point detection. As an example of this implementation, a plurality of human body key points to be detected may be preset; for example, 17 key points of the human skeleton may be set to respectively indicate the parts of a human body. By detecting these 17 key points, the positional relationship between the parts of the human body can be obtained from the positional relationship between the key points, and the positional relationship between the parts of the human body is a concrete representation of the body posture.
As an example of this implementation, the image information may be input into a backbone network, feature extraction may be performed on the image information via the backbone network to obtain a feature map, and then the positions of key points of the human body may be detected based on the feature map to obtain the posture of the human body. The backbone network may adopt network structures such as ResNet, MobileNet, and the like, which is not limited herein.
After the body posture of the occupant in the cabin is determined, whether the body posture is a preset abnormal body posture can be judged, and in a case where the body posture is determined to be a preset abnormal body posture, the health condition of the occupant in the cabin is determined to be abnormal. The preset abnormal body posture includes at least one of the following: the body inclined toward one side, the head inclined downward, and the face turned upward. Since the posture of the occupant can reflect the health condition of the occupant's body to some extent, when the occupant is injured after an accident occurs, the occupant may not keep an upright posture and may assume an abnormal posture, such as a posture in which the body is inclined to one side, the head is inclined downward, or the face is turned upward. These body postures can accurately indicate that the occupant's current health state is abnormal.
Therefore, the abnormal body postures can be preset, after the body posture of the passenger in the cabin is determined, whether the body state of the passenger in the cabin is the preset abnormal body posture or not can be judged, and under the condition that the body posture is determined to be the preset abnormal body posture, the health condition of the passenger in the cabin is determined to be the abnormal condition.
In addition, in order to improve the accuracy of abnormal situation detection, the body posture of the occupant in the cabin may be determined to be an abnormal body posture in the case where the body posture is a preset abnormal body posture and the duration of the abnormal body posture exceeds a set duration.
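The posture-plus-duration rule above can be sketched as follows, assuming posture labels have already been produced per frame by the key point detection; the label names, log format, and function name are assumptions:

```python
# Hypothetical labels for the preset abnormal postures named in the text.
ABNORMAL_POSTURES = {"lean_to_side", "head_down", "face_up"}

def is_abnormal(posture_log, min_duration):
    """posture_log: list of (timestamp, posture_label) sorted by time.
    Report an abnormal body posture only if a preset abnormal posture
    persists for at least min_duration seconds."""
    run_start = None
    for t, label in posture_log:
        if label in ABNORMAL_POSTURES:
            if run_start is None:
                run_start = t
            if t - run_start >= min_duration:
                return True
        else:
            run_start = None     # posture recovered; reset the timer
    return False
```

Requiring the posture to persist filters out momentary postures (e.g. bending to pick something up) that would otherwise trigger false alarms.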
In the embodiment of the disclosure, the body posture of the passenger in the cabin is determined according to the image information; and under the condition that the body posture is a preset abnormal body posture and the duration of the abnormal body posture exceeds a set duration, determining the body posture of the passenger in the cabin as the abnormal body posture. Therefore, the abnormal conditions of the passengers can be accurately determined, so that the injury severity level of the passengers can be accurately determined in the subsequent process, and the injury severity level of the passengers is sent to the emergency call center, so that the emergency call center can rescue the seriously injured passengers preferentially.
In one possible implementation, the method further includes: and under the condition that the body posture of the passenger in the cabin is determined to be the preset fracture posture, determining that the fracture condition exists in the passenger in the cabin.
In the case of a fracture, the body posture of the occupant differs significantly from the normal body posture; for example, a bend at a non-joint part of the occupant's body can confirm that a fracture exists. For instance, if there is a bend at a non-joint of the arm, it can be determined that the occupant's arm is fractured; likewise, if there is a bend at a non-joint of the leg, it can be determined that the occupant's leg is fractured.
In one possible implementation, the method further includes: determining vital sign indicators of the occupant based on the image information, the vital sign indicators including at least one of: respiratory rate, blood pressure, heart rate; and sending the vital sign indexes to an emergency call center.
The vital sign indicator may be determined based on physiological characteristic sensing information acquired by a physiological characteristic sensor. For example, taking a millimeter-wave radar as the physiological characteristic sensor, the principle of monitoring the respiratory rate and heart rate by millimeter-wave radar is to emit electromagnetic waves and then analyze the echo signal; in this example, the physiological characteristic sensing information is the echo signal of the millimeter-wave radar. The millimeter-wave radar can detect minute vibrations and displacements of the human body by measuring changes in the phase of the echo signal; in one example, the frequencies of heartbeat and respiration can be determined based on detection of the amplitude of chest vibration.
After the vital sign indicators are determined, they can be sent to the emergency call center, so that the center can schedule rescue resources appropriately and provide targeted rescue to the passengers.
In one possible implementation, the method further includes: determining an injury severity level of the passenger in the cabin based on at least one of the determined bleeding condition, abnormal body posture, and vital sign indicators of the passenger; and transmitting the injury severity level to the emergency call center.
The injury severity level characterizes how severely the occupant is injured. For example, it may be expressed on a scale of 0-10, where a higher level indicates a more severe injury and a level of 0 indicates that the occupant is uninjured.
There may be one or more injury severity levels. For example, a single injury level may comprehensively characterize the severity of the occupant's bleeding, abnormal body posture, vital sign indicators, and so on; alternatively, multiple severity levels may characterize different aspects of the injury separately, such as a preset bleeding severity level, fracture severity level, and vital sign weakness level.
The bleeding severity level may be determined from the bleeding area and the bleeding part: the level is positively correlated with the bleeding area, and is higher when the bleeding part is a critical part such as the head or abdomen. The fracture severity level is positively correlated with the degree of bending of the bones and with the number of fractured bones, and is higher when the fracture is at a critical part such as the head. The vital sign weakness level is inversely related to the vital sign indicators: the lower the respiratory rate, blood pressure, and heart rate, the higher the vital sign weakness level.
When a single injury severity level comprehensively characterizes the severity of the passengers' bleeding, abnormal body posture, vital sign indicators, and so on, the per-aspect severity levels can be combined by a weighted average to obtain a composite injury severity level; alternatively, the injury severity level may be set to the most severe of the passenger's bleeding, abnormal-posture, and vital-sign severity levels.
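The two combination strategies above (weighted average, or most severe aspect wins) can be sketched as follows; the weight values and the 0-10 scale are assumptions carried over from the examples in this description.

```python
def composite_severity(bleeding=0, fracture=0, vital_weakness=0,
                       weights=(0.4, 0.3, 0.3), mode="weighted"):
    """Combine per-aspect severity levels on a 0-10 scale into one level.
    mode="weighted": weighted average of the aspects;
    mode="max": the most severe aspect determines the overall level."""
    levels = (bleeding, fracture, vital_weakness)
    if mode == "max":
        return max(levels)
    return round(sum(w * l for w, l in zip(weights, levels)))
```

The "max" mode is the more conservative choice for triage, since a single life-threatening aspect should not be diluted by normal readings elsewhere.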
In the disclosed embodiment, the injury severity level of the passenger in the cabin is determined based on at least one of the passenger's bleeding condition, abnormal body posture, and vital sign indicators, and is sent to the emergency call center. The center can thus prioritize rescuing callers with high injury severity levels, shorten or skip the inquiry process, provide faster rescue, and reduce personal injury and property loss.
An application scenario of the embodiment of the present disclosure is described below. In this scenario, an accident triggers an emergency call, after which image information of the passengers in the cabin is acquired. If no obvious bleeding is detected and the passengers' body postures are normal, the casualties are judged to be minor. After the emergency call is connected, the call center staff briefly confirm with the occupants that no additional rescue is needed.
Another application scenario of the disclosed embodiments is described below. In this scenario, an accident triggers an emergency call, after which image information of the passengers in the cabin is acquired. Heavy bleeding is detected on a passenger's body surface, the passenger's limbs are motionless, and breathing is weak, so a high injury severity level is determined and sent to the emergency call center. The center dispatches an ambulance directly to the accident site, while call center staff communicate with the occupants in the vehicle to further assess the situation.
It is understood that the above method embodiments of the present disclosure can be combined with one another to form combined embodiments without departing from the principles and logic; for brevity, details are not repeated in the present disclosure. Those skilled in the art will appreciate that in the above methods of the specific embodiments, the specific order of execution of the steps is determined by their functions and possible inherent logic.
In addition, the present disclosure also provides an apparatus, an electronic device, a computer-readable storage medium, and a program for a vehicle to send information to an emergency call center, all of which can be used to implement any of the information sending methods provided by the present disclosure; for the corresponding technical solutions, refer to the descriptions in the method sections, which are not repeated here.
Fig. 2 shows a block diagram of an information transmitting apparatus according to an embodiment of the present disclosure, and as shown in fig. 2, the apparatus 20 includes:
an image information acquiring unit 21 for acquiring image information of an occupant in the cabin in response to an emergency call being triggered;
a bleeding condition detection unit 22 for detecting a bleeding condition of the passenger in the cabin based on the image information;
a bleeding condition sending unit 23, configured to send the bleeding condition to the emergency call center in response to detecting the bleeding condition.
In one possible implementation, the bleeding condition detecting unit includes:
the passenger detection subunit is used for carrying out face detection and/or human body detection on the image information to determine passengers in the cabin;
and the first bleeding condition determining subunit is used for detecting blood on the face and/or body surface of the passenger and determining the bleeding condition of the passenger in the cabin.
In one possible implementation manner, the bleeding condition detecting unit is configured to detect, based on the color information of blood and the shape information of blood flow, whether the occupant in the image information is bleeding.
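As an illustrative sketch of such color-plus-shape screening (the disclosure does not specify thresholds), blood-like pixels can be selected in HSV space and the resulting region checked for the elongated, downward shape typical of a surface blood flow. All thresholds below are assumed values.

```python
import numpy as np

def blood_mask(hsv):
    """Select blood-like pixels in an OpenCV-style HSV image (hue in [0, 180))."""
    h, s, v = hsv[..., 0], hsv[..., 1], hsv[..., 2]
    red_hue = (h <= 10) | (h >= 170)         # deep red wraps around hue 0
    return red_hue & (s >= 100) & (v >= 60)  # saturated and not too dark

def looks_like_blood_flow(mask, min_area=50, min_elongation=1.5):
    """Shape check: a surface blood flow tends to be taller than it is wide,
    because gravity pulls the blood downward."""
    ys, xs = np.nonzero(mask)
    if ys.size < min_area:
        return False
    height = ys.max() - ys.min() + 1
    width = xs.max() - xs.min() + 1
    return height / width >= min_elongation
```

A production system would learn such thresholds from data; the point of the sketch is that color alone is insufficient and the flow shape supplies a second cue.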
In one possible implementation, the bleeding condition detecting unit includes:
a body surface region detection subunit, configured to detect a body surface region of the passenger in the cabin based on the image information;
a detection region dividing subunit for dividing the body surface region of the occupant into a plurality of detection regions;
the area detection result determining subunit is used for detecting blood information in each detection area to obtain an area detection result of each detection area;
a second bleeding condition determining subunit configured to determine a bleeding condition of the occupant based on the region detection results of the respective detection regions.
In a possible implementation manner, the body surface region detection subunit is configured to detect a face surface region of an occupant in the cabin based on the image information;
the detection region dividing subunit is used for dividing the face surface region of the occupant into a plurality of detection regions.
In a possible implementation manner, the region detection result determining subunit is configured to: determine, based on the shape and area of the blood flow in each detection region, a first confidence level that a bleeding condition exists in that detection region; determine whether connected blood flow exists between adjacent detection regions; and, in response to determining that blood flow in a first detection region connects across the border to an adjacent second detection region, raise the confidence levels of the first detection region and the second detection region to a second confidence level;
the second bleeding condition determining subunit is configured to determine that the occupant bleeds if the first confidence level or the second confidence level exceeds a confidence level threshold; determining a severity of bleeding based on the area of blood flow in each of the detection regions, the severity of bleeding being positively correlated with the sum of the areas of blood flow in the respective detection regions.
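The per-region confidence fusion performed by these subunits can be sketched as follows; the confidence boost, threshold, and data layout are illustrative assumptions.

```python
def detect_bleeding(regions, adjacency, conf_threshold=0.6, boost=0.2):
    """regions: {region_id: {"conf": first_confidence, "area": blood_flow_area}}
    adjacency: pairs of adjacent region ids whose blood flows connect across
    the shared border. Returns (bleeding_detected, bleeding_severity)."""
    conf = {rid: r["conf"] for rid, r in regions.items()}
    for a, b in adjacency:
        # Connected blood flow raises both regions to a second, higher confidence.
        conf[a] = min(1.0, conf[a] + boost)
        conf[b] = min(1.0, conf[b] + boost)
    bleeding = any(c >= conf_threshold for c in conf.values())
    # Severity is positively correlated with the summed blood-flow area
    # of the regions confirmed as bleeding.
    severity = sum(r["area"] for rid, r in regions.items()
                   if conf[rid] >= conf_threshold)
    return bleeding, severity
```

The boost captures the intuition in the text: a flow that crosses a region border is stronger evidence of real bleeding than two isolated detections of the same size.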
In one possible implementation, the bleeding condition detecting unit includes:
a blood flow detection subunit for determining the bleeding body part and the direction of blood flow in response to detecting, based on the image information, that an occupant in the cabin is bleeding;
and a bleeding part determining subunit for taking the body part where the starting end of the blood flow is located as the bleeding part, based on the bleeding body part and the direction of blood flow.
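Locating the bleeding part from the flow direction can be sketched as follows, under the assumption that surface blood runs downward, so the starting end of a flow is its highest point; the body-part boxes are assumed detector outputs.

```python
def bleeding_part(flow_points, part_boxes):
    """flow_points: (x, y) pixels of one detected blood flow (y grows downward).
    part_boxes: {part_name: (x0, y0, x1, y1)} detected body-part boxes.
    The starting end of the flow is taken as its highest point, since
    surface blood runs downward from the wound under gravity."""
    start = min(flow_points, key=lambda p: p[1])
    for name, (x0, y0, x1, y1) in part_boxes.items():
        if x0 <= start[0] <= x1 and y0 <= start[1] <= y1:
            return name
    return None  # starting end not inside any detected part
```

Reporting the wound site rather than the whole stained area lets the call center judge whether a critical part such as the head is injured.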
In one possible implementation, the apparatus further includes:
the body posture determining unit is used for determining the body posture of the passenger in the cabin according to the image information;
and the abnormal body posture determining unit is used for determining the body posture of the passenger in the cabin as the abnormal body posture under the condition that the body posture is the preset abnormal body posture and the duration of the abnormal body posture exceeds the set duration.
In one possible implementation, the apparatus further includes:
and the fracture condition detection unit is used for determining that the fracture condition exists in the passenger in the cabin under the condition that the body posture of the passenger in the cabin is determined to be the preset fracture posture.
In one possible implementation, the apparatus further includes:
a vital sign indicator determination unit configured to determine a vital sign indicator of the occupant based on the image information, the vital sign indicator including at least one of:
respiratory rate, blood pressure, heart rate;
and a vital sign indicator sending unit, configured to send the vital sign indicators to the emergency call center.
In one possible implementation, the apparatus further includes:
an injury severity level determination unit for determining the injury severity level of the occupant in the cabin based on at least one of the determined bleeding condition, abnormal body posture, and vital sign indicators of the occupant;
an injury severity level transmitting unit for transmitting the injury severity level to an emergency call center.
In some embodiments, functions or modules included in the apparatus provided in the embodiments of the present disclosure may be used to execute the method described in the above method embodiments, and specific implementations and technical effects thereof may refer to the description of the above method embodiments, which are not described herein again for brevity.
Embodiments of the present disclosure also provide a computer-readable storage medium having stored thereon computer program instructions, which when executed by a processor, implement the above-mentioned method. The computer readable storage medium may be a volatile or non-volatile computer readable storage medium.
An embodiment of the present disclosure further provides an electronic device, including: a processor; a memory for storing processor-executable instructions; wherein the processor is configured to invoke the memory-stored instructions to perform the above-described method.
The disclosed embodiments also provide a computer program product, including computer readable code or a non-transitory computer readable storage medium carrying computer readable code; when the computer readable code runs in a processor of an electronic device, the processor performs the above method.
The electronic device may be provided as a terminal, server, or other form of device.
Fig. 3 illustrates a block diagram of an electronic device 800 in accordance with an embodiment of the disclosure. For example, the electronic device 800 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, a fitness device, a personal digital assistant, or another such terminal.
Referring to fig. 3, electronic device 800 may include one or more of the following components: processing component 802, memory 804, power component 806, multimedia component 808, audio component 810, input/output (I/O) interface 812, sensor component 814, and communication component 816.
The processing component 802 generally controls overall operation of the electronic device 800, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing components 802 may include one or more processors 820 to execute instructions to perform all or a portion of the steps of the methods described above. Further, the processing component 802 can include one or more modules that facilitate interaction between the processing component 802 and other components. For example, the processing component 802 can include a multimedia module to facilitate interaction between the multimedia component 808 and the processing component 802.
The memory 804 is configured to store various types of data to support operations at the electronic device 800. Examples of such data include instructions for any application or method operating on the electronic device 800, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 804 may be implemented by any type or combination of volatile or non-volatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
The power supply component 806 provides power to the various components of the electronic device 800. The power components 806 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the electronic device 800.
The multimedia component 808 includes a screen that provides an output interface between the electronic device 800 and a user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 808 includes a front facing camera and/or a rear facing camera. The front camera and/or the rear camera may receive external multimedia data when the electronic device 800 is in an operation mode, such as a shooting mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have a focal length and optical zoom capability.
The audio component 810 is configured to output and/or input audio signals. For example, the audio component 810 includes a Microphone (MIC) configured to receive external audio signals when the electronic device 800 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may further be stored in the memory 804 or transmitted via the communication component 816. In some embodiments, audio component 810 also includes a speaker for outputting audio signals.
The I/O interface 812 provides an interface between the processing component 802 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor assembly 814 includes one or more sensors for providing various aspects of state assessment for the electronic device 800. For example, the sensor assembly 814 may detect an open/closed state of the electronic device 800, the relative positioning of components, such as a display and keypad of the electronic device 800, the sensor assembly 814 may also detect a change in the position of the electronic device 800 or a component of the electronic device 800, the presence or absence of user contact with the electronic device 800, orientation or acceleration/deceleration of the electronic device 800, and a change in the temperature of the electronic device 800. Sensor assembly 814 may include a proximity sensor configured to detect the presence of a nearby object without any physical contact. The sensor assembly 814 may also include a light sensor, such as a Complementary Metal Oxide Semiconductor (CMOS) or Charge Coupled Device (CCD) image sensor, for use in imaging applications. In some embodiments, the sensor assembly 814 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 816 is configured to facilitate wired or wireless communication between the electronic device 800 and other devices. The electronic device 800 may access a wireless network based on a communication standard, such as a wireless network (WiFi), a second generation mobile communication technology (2G) or a third generation mobile communication technology (3G), or a combination thereof. In an exemplary embodiment, the communication component 816 receives a broadcast signal or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 816 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the electronic device 800 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors or other electronic components for performing the above-described methods.
In an exemplary embodiment, a non-transitory computer-readable storage medium, such as the memory 804, is also provided that includes computer program instructions executable by the processor 820 of the electronic device 800 to perform the above-described methods.
Fig. 4 shows a block diagram of an electronic device 1900 according to an embodiment of the disclosure. For example, the electronic device 1900 may be provided as a server. Referring to fig. 4, electronic device 1900 includes a processing component 1922 further including one or more processors and memory resources, represented by memory 1932, for storing instructions, e.g., applications, executable by processing component 1922. The application programs stored in memory 1932 may include one or more modules that each correspond to a set of instructions. Further, the processing component 1922 is configured to execute instructions to perform the above-described method.
The electronic device 1900 may also include a power component 1926 configured to perform power management of the electronic device 1900, a wired or wireless network interface 1950 configured to connect the electronic device 1900 to a network, and an input/output (I/O) interface 1958. The electronic device 1900 may operate based on an operating system stored in the memory 1932, such as the Microsoft server operating system (Windows Server™), the Apple graphical user interface operating system (Mac OS X™), the multi-user multi-process computer operating system (Unix™), the free and open source Unix-like operating system (Linux™), the open source Unix-like operating system (FreeBSD™), or the like.
In an exemplary embodiment, a non-transitory computer readable storage medium, such as the memory 1932, is also provided that includes computer program instructions executable by the processing component 1922 of the electronic device 1900 to perform the above-described methods.
The present disclosure may be systems, methods, and/or computer program products. The computer program product may include a computer-readable storage medium having computer-readable program instructions embodied thereon for causing a processor to implement various aspects of the present disclosure.
The computer readable storage medium may be a tangible device that can hold and store the instructions for use by the instruction execution device. The computer readable storage medium may be, for example, but is not limited to, an electronic memory device, a magnetic memory device, an optical memory device, an electromagnetic memory device, a semiconductor memory device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a Static Random Access Memory (SRAM), a portable compact disc read-only memory (CD-ROM), a Digital Versatile Disc (DVD), a memory stick, a floppy disk, a mechanical coding device, such as punch cards or in-groove projection structures having instructions stored thereon, and any suitable combination of the foregoing. Computer-readable storage media as used herein is not to be construed as transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission medium (e.g., optical pulses through a fiber optic cable), or electrical signals transmitted through electrical wires.
The computer-readable program instructions described herein may be downloaded from a computer-readable storage medium to a respective computing/processing device, or to an external computer or external storage device via a network, such as the internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, fiber optic transmission, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. The network adapter card or network interface in each computing/processing device receives computer-readable program instructions from the network and forwards the computer-readable program instructions for storage in a computer-readable storage medium in the respective computing/processing device.
The computer program instructions for carrying out operations of the present disclosure may be assembler instructions, instruction set architecture (ISA) instructions, machine instructions, machine-related instructions, microcode, firmware instructions, state setting data, or source code or object code written in any combination of one or more programming languages, including object-oriented programming languages such as Smalltalk and C++, and conventional procedural programming languages such as the "C" programming language or similar programming languages. The computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case involving a remote computer, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, electronic circuitry, such as a programmable logic circuit, a field programmable gate array (FPGA), or a programmable logic array (PLA), may be personalized by utilizing state information of the computer-readable program instructions, and the electronic circuitry may execute the computer-readable program instructions to implement aspects of the present disclosure.
Various aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.
These computer-readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer-readable program instructions may also be stored in a computer-readable storage medium that can direct a computer, programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer-readable medium storing the instructions comprises an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The computer program product may be embodied in hardware, software, or a combination thereof. In an alternative embodiment, the computer program product is embodied as a computer storage medium; in another alternative embodiment, it is embodied as a software product, such as a software development kit (SDK).
Having described embodiments of the present disclosure, the foregoing description is intended to be exemplary, not exhaustive, and not limited to the disclosed embodiments. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein is chosen in order to best explain the principles of the embodiments, the practical application, or improvements made to the technology in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.

Claims (14)

1. A method for a vehicle to send information to an emergency call center, comprising:
acquiring image information of passengers in the cabin in response to the triggering of the emergency call;
detecting the bleeding condition of the passenger in the cabin based on the image information;
in response to detecting a bleeding condition, sending the bleeding condition to an emergency call center.
2. The method of claim 1, wherein detecting a bleeding condition of the passenger in the cabin based on the image information comprises:
carrying out face detection and/or human body detection on the image information to determine passengers in the cabin;
and carrying out blood detection on the face and/or the body surface of the passenger to determine the bleeding condition of the passenger in the cabin.
3. The method according to claim 1 or 2, wherein the detecting a bleeding condition of the passenger in the cabin based on the image information comprises:
detecting, based on the color information of blood and the shape information of blood flow, whether the occupant in the image information is bleeding.
4. The method according to any one of claims 1-3, wherein said detecting a bleeding condition of the occupant in the cabin based on said image information comprises:
detecting a body surface area of the passenger in the cabin based on the image information;
dividing a body surface area of the occupant into a plurality of detection areas;
detecting blood information in each detection area to obtain an area detection result of each detection area;
determining a bleeding situation of the occupant based on a region detection result of each of the detection regions.
5. The method of claim 4, wherein the detecting a body surface area of the occupant within the cabin based on the image information comprises:
detecting a face surface area of an occupant in the cabin based on the image information;
the dividing of the body surface area of the occupant into a plurality of detection areas includes:
dividing the face surface area of the occupant into a plurality of detection regions.
6. The method according to claim 4 or 5, wherein the detecting blood information in each detection area to obtain the area detection result of each detection area comprises:
determining a first confidence level that a bleeding condition exists in each detection region based on the shape and area of the blood flow in each detection region;
determining whether connected blood flow exists between every two adjacent detection areas;
in response to determining that blood flow in a first detection region of the detection regions connects across the border to an adjacent second detection region, raising the confidence levels of the first detection region and the second detection region to a second confidence level;
the determining a bleeding situation of the occupant based on the region detection results of the respective detection regions includes:
determining that the occupant is bleeding if the first confidence or the second confidence exceeds a confidence threshold;
determining a severity of bleeding based on the area of blood flow in each of the detection regions, the severity of bleeding being positively correlated with the sum of the areas of blood flow in the respective detection regions.
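The confidence-raising and severity steps of claim 6 could look roughly like this. A minimal sketch under stated assumptions: the boost amount, the confidence threshold, and the data structure are all invented for illustration, not taken from the patent:

```python
from dataclasses import dataclass
from typing import List, Set, Tuple

@dataclass
class RegionResult:
    confidence: float   # first confidence, from blood-flow shape and area
    blood_area: float   # detected blood-flow area within the region

def apply_connectivity_boost(results: List[RegionResult],
                             connected_pairs: Set[Tuple[int, int]],
                             boost: float = 0.2) -> None:
    """Raise the confidence of both regions of every adjacent pair that shares
    a connected blood flow (the additive boost is an assumed scheme)."""
    for i, j in connected_pairs:
        results[i].confidence = min(1.0, results[i].confidence + boost)
        results[j].confidence = min(1.0, results[j].confidence + boost)

def bleeding_severity(results: List[RegionResult],
                      confidence_threshold: float = 0.5) -> float:
    """Return 0.0 when no region exceeds the confidence threshold; otherwise a
    severity score positively correlated with the total blood-flow area."""
    if not any(r.confidence > confidence_threshold for r in results):
        return 0.0
    return sum(r.blood_area for r in results)
```

Using the raw area sum as the severity score satisfies the positive-correlation requirement; any monotonic mapping of that sum would do as well.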
7. The method according to any one of claims 1-6, wherein said detecting a bleeding condition of the occupant in the cabin based on said image information comprises:
in response to detecting, based on the image information, that an occupant in the cabin is bleeding, determining the body part over which the blood flows and the direction of the blood flow;
taking, based on the body part over which the blood flows and the direction of the blood flow, the body part at which the starting end of the blood flow is located as the bleeding part.
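Locating the starting end of a blood flow, as in claim 7, can be sketched by projecting the flow's points onto the flow direction and taking the point furthest upstream. Everything here is an assumption for illustration: the point-list representation of a flow and the `part_of_point` callback mapping a coordinate to a body-part label are not from the patent:

```python
from typing import Callable, List, Tuple

Point = Tuple[float, float]

def bleeding_part(flow_points: List[Point],
                  flow_direction: Tuple[float, float],
                  part_of_point: Callable[[Point], str]) -> str:
    """Take the flow point with the smallest projection onto the flow
    direction as the starting end of the blood flow, and return the body
    part that point lies on."""
    dx, dy = flow_direction
    start = min(flow_points, key=lambda p: p[0] * dx + p[1] * dy)
    return part_of_point(start)
```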
8. The method of any one of claims 1-7, further comprising:
determining the body posture of the occupant in the cabin according to the image information;
in a case where the body posture is a preset abnormal body posture and the duration of the abnormal body posture exceeds a set duration, determining that the occupant in the cabin is in an abnormal body posture.
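The duration condition of claim 8 amounts to a debounce on per-frame posture estimates. A hypothetical sketch; the class name, the string posture labels, and the 3-second threshold are all assumptions, not values from the patent:

```python
class PostureMonitor:
    """Flags an abnormal body posture only after it has persisted beyond a
    set duration (the default threshold is an assumed value)."""

    def __init__(self, abnormal_postures, min_duration_s: float = 3.0):
        self.abnormal_postures = set(abnormal_postures)
        self.min_duration_s = min_duration_s
        self._since = None  # timestamp at which the abnormal posture began

    def update(self, posture: str, timestamp_s: float) -> bool:
        """Feed one per-frame posture estimate; returns True once the same
        abnormal posture has lasted at least min_duration_s."""
        if posture not in self.abnormal_postures:
            self._since = None  # posture recovered, reset the timer
            return False
        if self._since is None:
            self._since = timestamp_s
        return timestamp_s - self._since >= self.min_duration_s
```

The reset on any normal-posture frame keeps brief pose-estimation glitches from triggering a false abnormal-posture report.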
9. The method of claim 8, further comprising:
in a case where the body posture of the occupant in the cabin is determined to be a preset fracture posture, determining that the occupant in the cabin has a fracture.
10. The method according to any one of claims 1-9, further comprising:
determining vital sign indicators of the occupant based on the image information, the vital sign indicators including at least one of:
respiratory rate, blood pressure, and heart rate;
and sending the vital sign indicators to the emergency call center.
11. The method of any one of claims 1-10, further comprising:
determining an injury severity level of the occupant in the cabin based on at least one of the determined bleeding condition, abnormal body posture, and vital sign indicators of the occupant;
sending the injury severity level to the emergency call center.
12. An apparatus for a vehicle to transmit information to an emergency call center, comprising:
an image information acquisition unit, configured to acquire image information of an occupant in the cabin in response to triggering of an emergency call;
a bleeding condition detection unit, configured to detect a bleeding condition of the occupant in the cabin based on the image information;
and a bleeding condition sending unit, configured to send the bleeding condition to the emergency call center in response to the bleeding condition being detected.
13. An electronic device, comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to invoke the memory-stored instructions to perform the method of any of claims 1 to 11.
14. A computer readable storage medium having computer program instructions stored thereon, which when executed by a processor implement the method of any one of claims 1 to 11.
CN202111016361.1A 2021-08-31 2021-08-31 Method and device for sending information to emergency call center for vehicle Pending CN113743290A (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
CN202111016361.1A CN113743290A (en) 2021-08-31 2021-08-31 Method and device for sending information to emergency call center for vehicle
KR1020247009195A KR20240046910A (en) 2021-08-31 2022-02-25 Method and device for transmitting information to vehicle emergency call center
PCT/CN2022/078010 WO2023029407A1 (en) 2021-08-31 2022-02-25 Method and apparatus for vehicle to send information to emergency call center

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111016361.1A CN113743290A (en) 2021-08-31 2021-08-31 Method and device for sending information to emergency call center for vehicle

Publications (1)

Publication Number Publication Date
CN113743290A true CN113743290A (en) 2021-12-03

Family

ID=78734440

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111016361.1A Pending CN113743290A (en) 2021-08-31 2021-08-31 Method and device for sending information to emergency call center for vehicle

Country Status (3)

Country Link
KR (1) KR20240046910A (en)
CN (1) CN113743290A (en)
WO (1) WO2023029407A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023029407A1 (en) * 2021-08-31 2023-03-09 上海商汤智能科技有限公司 Method and apparatus for vehicle to send information to emergency call center

Citations (3)

Publication number Priority date Publication date Assignee Title
CN107590762A (en) * 2017-08-24 2018-01-16 成都安程通科技有限公司 The system that a kind of vehicle-mounted accident reports automatically
CN109690609A (en) * 2017-03-08 2019-04-26 欧姆龙株式会社 Passenger's auxiliary device, method and program
CN113168772A (en) * 2018-11-13 2021-07-23 索尼集团公司 Information processing apparatus, information processing method, and program

Family Cites Families (12)

Publication number Priority date Publication date Assignee Title
DE112014000934T5 (en) * 2013-02-21 2016-01-07 Iee International Electronics & Engineering S.A. Imaging-based occupant monitoring system with broad functional support
JP6351323B2 (en) * 2014-03-20 2018-07-04 オリンパス株式会社 Image processing apparatus, image processing method, and image processing program
CN105760885A (en) * 2016-02-22 2016-07-13 中国科学院自动化研究所 Bloody image detection classifier implementing method, bloody image detection method and bloody image detection system
CN106373137B (en) * 2016-08-24 2019-01-04 安翰光电技术(武汉)有限公司 Hemorrhage of digestive tract image detecting method for capsule endoscope
CN107332875A (en) * 2017-05-27 2017-11-07 上海与德科技有限公司 A kind of net about car monitoring method and system, cloud server
CN107563933A (en) * 2017-08-24 2018-01-09 成都安程通科技有限公司 A kind of method that vehicle-mounted accident reports automatically
KR102578072B1 (en) * 2017-12-11 2023-09-14 삼성메디슨 주식회사 Ultrasound diagnositic apparatus and controlling mehtod of the same
WO2020136658A1 (en) * 2018-12-28 2020-07-02 Guardian Optical Technologies Ltd Systems, devices and methods for vehicle post-crash support
CN113011290A (en) * 2021-03-03 2021-06-22 上海商汤智能科技有限公司 Event detection method and device, electronic equipment and storage medium
CN113763670A (en) * 2021-08-31 2021-12-07 上海商汤临港智能科技有限公司 Alarm method and device, electronic equipment and storage medium
CN113766479A (en) * 2021-08-31 2021-12-07 上海商汤临港智能科技有限公司 Method and device for transmitting passenger information to rescue call center for vehicle
CN113743290A (en) * 2021-08-31 2021-12-03 上海商汤临港智能科技有限公司 Method and device for sending information to emergency call center for vehicle


Also Published As

Publication number Publication date
KR20240046910A (en) 2024-04-11
WO2023029407A1 (en) 2023-03-09

Similar Documents

Publication Publication Date Title
CN112141119B (en) Intelligent driving control method and device, vehicle, electronic equipment and storage medium
US20210133468A1 (en) Action Recognition Method, Electronic Device, and Storage Medium
CN112124073B (en) Intelligent driving control method and device based on alcohol detection
CN113763670A (en) Alarm method and device, electronic equipment and storage medium
CN111325129A (en) Traffic tool commuting control method and device, electronic equipment, medium and vehicle
CN112096222B (en) Trunk control method and device, vehicle, electronic device and storage medium
CN111243105B (en) Augmented reality processing method and device, storage medium and electronic equipment
CN111399742B (en) Interface switching method and device and electronic equipment
CN112001348A (en) Method and device for detecting passenger in vehicle cabin, electronic device and storage medium
CN113486765A (en) Gesture interaction method and device, electronic equipment and storage medium
CN113486760A (en) Object speaking detection method and device, electronic equipment and storage medium
CN113766479A (en) Method and device for transmitting passenger information to rescue call center for vehicle
CN111435422B (en) Action recognition method, control method and device, electronic equipment and storage medium
CN112036303A (en) Method and device for reminding left-over article, electronic equipment and storage medium
CN114299587A (en) Eye state determination method and apparatus, electronic device, and storage medium
CN112184787A (en) Image registration method and device, electronic equipment and storage medium
CN113920492A (en) Method and device for detecting people in vehicle, electronic equipment and storage medium
CN113887474A (en) Respiration rate detection method and device, electronic device and storage medium
WO2023029407A1 (en) Method and apparatus for vehicle to send information to emergency call center
US11978231B2 (en) Wrinkle detection method and terminal device
WO2023273060A1 (en) Dangerous action identifying method and apparatus, electronic device, and storage medium
CN113989889A (en) Shading plate adjusting method and device, electronic equipment and storage medium
CN110363695B (en) Robot-based crowd queue control method and device
CN113505674B (en) Face image processing method and device, electronic equipment and storage medium
CN114495072A (en) Occupant state detection method and apparatus, electronic device, and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40058731

Country of ref document: HK