CN113591701A - Respiration detection area determination method and device, storage medium and electronic equipment - Google Patents

Respiration detection area determination method and device, storage medium and electronic equipment

Info

Publication number
CN113591701A
Authority
CN
China
Prior art keywords: region, information, area, target, visible light
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN202110870587.1A
Other languages
Chinese (zh)
Inventor
覃德智
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Sensetime Technology Co Ltd
Original Assignee
Shenzhen Sensetime Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Sensetime Technology Co Ltd filed Critical Shenzhen Sensetime Technology Co Ltd
Priority to CN202110870587.1A priority Critical patent/CN113591701A/en
Publication of CN113591701A publication Critical patent/CN113591701A/en
Priority to PCT/CN2022/098521 priority patent/WO2023005469A1/en
Withdrawn legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/25 Fusion techniques
    • G06F 18/253 Fusion techniques of extracted features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computational Linguistics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Evolutionary Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Measuring And Recording Apparatus For Diagnosis (AREA)
  • Image Analysis (AREA)

Abstract

The disclosure relates to a respiration detection region determination method and apparatus, a storage medium, and an electronic device. The method includes: acquiring a first visible light image and a first thermal image matched with the first visible light image, wherein the first visible light image includes a target object; extracting a first region in the first visible light image, the first region pointing to an actual breathing region of the target object; acquiring a target mapping relation, wherein the target mapping relation represents the correspondence between the actual breathing region and a key region, and the key region represents an actual physical region whose temperature changes periodically with the breathing of the target object; determining a second region in the first visible light image according to the first region and the target mapping relation, wherein the second region points to the key region; and determining a respiration detection region in the first thermal image according to the second region. The present disclosure can accurately determine, in the thermal image, a region that can be used to detect the respiratory rate.

Description

Respiration detection area determination method and device, storage medium and electronic equipment
Technical Field
The present disclosure relates to the field of computer vision technologies, and in particular, to a method and an apparatus for determining a respiration detection area, a storage medium, and an electronic device.
Background
Respiratory rate is generally detected with contact-type equipment, which is limited in its application scenarios: for example, it cannot be used in scenarios with isolation requirements or in scenarios where the detected subject must remain unaware of the measurement. Contactless respiratory rate detection is therefore an important development direction in the field of respiratory rate detection. Because the sensor cannot be in direct contact with the detected subject, accurately locating the respiration detection area is very important and is a precondition for contactless respiratory rate detection.
Disclosure of Invention
In order to solve at least one technical problem mentioned above, the present disclosure proposes a respiration detection region determination method, apparatus, storage medium, and electronic device.
According to an aspect of the present disclosure, there is provided a respiration detection region determination method including: acquiring a first visible light image and a first thermal image matched with the first visible light image, wherein the first visible light image includes a target object; extracting a first region in the first visible light image, the first region pointing to an actual breathing region of the target object; acquiring a target mapping relation, wherein the target mapping relation represents the correspondence between the actual breathing region and a key region, and the key region represents an actual physical region whose temperature changes periodically with the breathing of the target object; determining a second region in the first visible light image according to the first region and the target mapping relation, wherein the second region points to the key region; and determining a respiration detection region in the first thermal image according to the second region. Based on the above configuration, a region that can be used to detect the breathing frequency can be accurately determined in the thermal image, and by performing temperature analysis on this region, the breathing frequency of the target object can be further detected.
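By way of illustration only, the sketch below chains these steps in Python with NumPy. The axis-aligned box representation, the mapping fields ("direction", "distance", "region_size"), the helper names, and every numeric value are assumptions of this sketch and are not prescribed by the disclosure; the first region would in practice come from the detector described later, and the homography from the registration step.

```python
# Illustrative sketch of the claimed flow; box representation, mapping fields,
# helper names, and values are assumptions for demonstration only.
import numpy as np

def apply_target_mapping(first_region, mapping):
    """Shift the first region by the direction/distance in the target mapping
    and give it the preset shape, yielding the second region (x, y, w, h)."""
    x, y, w, h = first_region
    cx, cy = x + w / 2, y + h / 2
    dx, dy = mapping["direction"]                 # unit vector, e.g. towards the lower left
    cx2 = cx + dx * mapping["distance"]
    cy2 = cy + dy * mapping["distance"]
    w2, h2 = mapping["region_size"]
    return (cx2 - w2 / 2, cy2 - h2 / 2, w2, h2)

def project_region(region, homography):
    """Map the four corners of a box from visible-light to thermal coordinates."""
    x, y, w, h = region
    corners = np.array([[x, y, 1], [x + w, y, 1],
                        [x + w, y + h, 1], [x, y + h, 1]], dtype=float).T
    mapped = homography @ corners
    mapped = mapped[:2] / mapped[2]               # perspective division
    return mapped.T                               # four corner points in the thermal image

# Toy inputs: a detected oronasal box, a "left-side sleeping" mapping,
# and an identity homography standing in for the registered one.
first_region = (120.0, 200.0, 60.0, 40.0)
target_mapping = {"direction": (-0.707, 0.707), "distance": 50.0,
                  "region_size": (80.0, 60.0)}
H = np.eye(3)

second_region = apply_target_mapping(first_region, target_mapping)
respiration_region_corners = project_region(second_region, H)
print(second_region, respiration_region_corners)
```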
In some possible embodiments, the obtaining the target mapping relation includes: acquiring scene mapping information and mapping relation management information, wherein the scene mapping information represents the correspondence between scene feature information and scene categories, and the mapping relation management information represents the correspondence between scene categories and mapping relations; determining target scene feature information corresponding to the target object; obtaining a target scene category corresponding to the target scene feature information according to the target scene feature information and the scene mapping information; and obtaining the target mapping relation according to the target scene category and the mapping relation management information. Based on the above configuration, the target mapping relation can be obtained automatically for different scenes, so that the respiration detection region can be accurately determined in various scenarios.
In some possible embodiments, the determining target scene feature information corresponding to the target object includes: acquiring a target visible light image including the target object; performing multi-scale feature extraction on the target visible light image to obtain feature extraction results of multiple levels; fusing the feature extraction results in ascending order of level to obtain feature fusion results of multiple levels; and fusing the feature fusion results in descending order of level to obtain the target scene feature information. Based on the above configuration, through this bidirectional fusion, the target scene feature information not only contains richer feature information but also carries sufficient context information.
In some possible embodiments, the target mapping includes direction mapping information characterizing a direction of the key region relative to the actual breathing region, and the determining a second region in the first visible light image according to the first region and the target mapping includes: and determining the second area according to the direction mapping information and the first area. Based on the configuration, the second area pointing to the key area can be accurately obtained, and the positioning accuracy of the respiration detection area is improved.
In some possible embodiments, the target mapping relation further includes distance mapping information characterizing the distance of the key region relative to the actual breathing region, and the determining the second region according to the direction mapping information and the first region includes: determining the second region according to the direction mapping information, the distance mapping information, and the first region. Based on the above configuration, the positioning accuracy of the second region can be further improved.
In some possible embodiments, the determining the second region according to the direction mapping information, the distance mapping information, and the first region includes: acquiring preset shape information, wherein the shape information includes region size information and/or region shape information; and determining the second region such that the shape of the second region conforms to the shape information, the direction of the center of the second region relative to the center of the first region conforms to the direction mapping information, and the distance of the center of the second region relative to the center of the first region conforms to the distance mapping information. Based on the above configuration, the positioning accuracy of the second region can be further improved.
In some possible embodiments, the determining a respiration detection region in the first thermal image from the second region comprises: acquiring a homography matrix representing a corresponding relation between pixel points of the first visible light image and pixel points of the first thermal image; and determining the respiration detection area according to the homography matrix and the second area. Based on the configuration, the respiration detection area can be accurately obtained according to the second area, and the positioning accuracy of the respiration detection area is improved.
In some possible embodiments, the determining the respiration detection region according to the homography matrix and the second region includes: determining, in the first thermal image, an associated region that matches the second region according to the homography matrix; dividing the associated region to obtain at least two candidate regions; and determining the candidate region with the highest degree of temperature change as the respiration detection region. Based on the above configuration, by comparing the candidate regions with one another, the respiration detection region with the highest degree of temperature change can be obtained. Performing respiratory rate detection based on this respiration detection region makes the detection result less disturbed by noise and therefore more accurate.
In some possible embodiments, the method further includes: determining the highest temperature and the lowest temperature of the candidate region within a preset time interval; and obtaining the degree of temperature change of the candidate region according to the difference between the highest temperature and the lowest temperature. Based on the above configuration, the degree of temperature change of the candidate region can be accurately evaluated.
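The selection of the candidate region with the highest degree of temperature change can be illustrated with the small NumPy sketch below. The even 2x2 grid split, the use of the per-frame mean as the candidate's temperature, and all toy values are assumptions of this sketch; the disclosure only requires dividing the associated region into at least two candidates and comparing their highest-minus-lowest temperature over the preset interval.

```python
import numpy as np

def pick_respiration_region(thermal_frames, region, grid=(2, 2)):
    """thermal_frames: (T, H, W) per-pixel temperatures over a preset time interval.
    region: (x, y, w, h) associated region matched to the second region.
    Returns the candidate sub-region whose temperature varies the most."""
    x, y, w, h = region
    rows, cols = grid
    best, best_change = None, -np.inf
    for i in range(rows):
        for j in range(cols):
            cx, cy = x + j * w // cols, y + i * h // rows
            cw, ch = w // cols, h // rows
            # Mean temperature of the candidate in every frame of the interval.
            series = thermal_frames[:, cy:cy + ch, cx:cx + cw].mean(axis=(1, 2))
            # Temperature change degree: highest minus lowest temperature.
            change = series.max() - series.min()
            if change > best_change:
                best, best_change = (cx, cy, cw, ch), change
    return best, best_change

# Toy data: 100 frames of a 64x64 thermal image with one weakly periodic patch.
t = np.arange(100)
frames = np.full((100, 64, 64), 34.0)
frames[:, 40:48, 20:28] += 0.4 * np.sin(2 * np.pi * t / 25)[:, None, None]
print(pick_respiration_region(frames, (16, 32, 32, 32)))
```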
In some possible embodiments, the extracting a first region in the first visible light image includes: based on a neural network, extracting a breathing region of the first visible light image to obtain a first region; the neural network is obtained based on the following method: acquiring a sample visible light image and a label corresponding to the sample visible light image; the label points to a breathing region in the sample visible light image; the breathing area is an oronasal area or a mask area of a sample target object in the sample visible light image; predicting a breathing area of the sample visible light image based on the neural network to obtain a prediction result of the breathing area; and training the neural network according to the respiratory region prediction result and the label. Based on the configuration, the trained neural network can have the capability of directly and accurately extracting the breathing region.
In some possible embodiments, the performing breathing region prediction on the sample visible light image based on the neural network to obtain a breathing region prediction result includes: performing feature extraction on the sample visible light image to obtain a feature extraction result; and predicting the breathing region according to the feature extraction result to obtain the breathing region prediction result; wherein the performing feature extraction on the sample visible light image to obtain a feature extraction result includes: performing initial feature extraction on the sample visible light image to obtain a first feature map; performing composite feature extraction on the first feature map to obtain first feature information, wherein the composite feature extraction includes channel feature extraction; filtering the first feature map based on salient features in the first feature information; extracting second feature information from the filtering result; and fusing the first feature information and the second feature information to obtain the feature extraction result. Based on the above configuration, the salient features can be filtered out and composite feature extraction including channel information extraction can be performed on the filtering result, so that discriminative information is fully mined, the effectiveness and discriminative power of the second feature information are improved, and the information in the final feature extraction result is further enriched.
In some possible embodiments, the method further comprises: extracting first temperature information corresponding to the respiration detection area from the first thermal image, wherein the first temperature information represents temperature information corresponding to the key area at a first moment. Based on the configuration, the temperature corresponding to the respiration detection area can be accurately determined.
In some possible embodiments, the extracting first temperature information corresponding to the respiration detection region in the first thermal image includes: determining temperature information corresponding to a related pixel point in the respiration detection area; and calculating the first temperature information according to the temperature information corresponding to each related pixel point. Based on the above configuration, the first temperature information can be accurately determined.
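As a minimal illustration of this aggregation (assuming the related pixel points are simply all pixels inside the detection box and that the aggregate is an arithmetic mean, which is only one possible choice):

```python
import numpy as np

def first_temperature(thermal_frame, region):
    """thermal_frame: (H, W) per-pixel temperatures at the first time.
    region: (x, y, w, h) respiration detection area in the thermal image."""
    x, y, w, h = region
    pixels = thermal_frame[y:y + h, x:x + w]   # temperatures of the related pixel points
    return float(pixels.mean())                # aggregate into the first temperature information

frame = np.random.normal(34.5, 0.1, size=(64, 64))
print(first_temperature(frame, (20, 40, 8, 8)))
```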
In some possible embodiments, the method further comprises: acquiring at least one piece of second temperature information, wherein the second temperature information represents temperature information corresponding to the key area at a second moment different from the first moment; determining a breathing frequency of the target object according to the first temperature information and the at least one second temperature information. Based on the above configuration, by determining the first temperature information in combination with other temperature information, the breathing frequency of the target object can be determined without contact.
In some possible embodiments, the determining the breathing frequency of the target subject according to the first temperature information and the at least one second temperature information includes: arranging the first temperature information and the at least one second temperature information according to a time sequence to obtain a temperature sequence; denoising the temperature sequence to obtain a target temperature sequence; determining a breathing frequency of the target subject based on the target temperature sequence. Based on the configuration, the noise which influences the respiratory frequency calculation can be filtered, so that the obtained respiratory frequency is more accurate.
In some possible embodiments, the determining the breathing frequency of the target subject based on the target temperature sequence includes: determining each key point in the target temperature sequence, wherein the key points are all peak points or all valley points; for any two adjacent key points, determining the time interval between the two adjacent key points; and determining the respiratory frequency according to the time interval. Based on the above configuration, by calculating the time interval between adjacent key points, the breathing frequency can be accurately determined.
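A sketch of this frequency computation is given below using NumPy only. The centered moving-average denoising, the simple local-maximum peak picker, and the toy 4-second breathing signal are assumptions of this sketch; the disclosure does not mandate a particular denoising or key-point detection method.

```python
import numpy as np

def breathing_rate(temps, timestamps, smooth=5):
    """temps: temperature sequence ordered by time; timestamps: seconds.
    Returns breaths per minute estimated from intervals between peak points."""
    # Denoise: centered moving average (one possible choice of filter).
    kernel = np.ones(smooth) / smooth
    clean = np.convolve(temps, kernel, mode="same")
    # Key points: local maxima (peak points) of the target temperature sequence,
    # skipping the edges affected by the convolution padding.
    peaks = [i for i in range(smooth, len(clean) - smooth)
             if clean[i] > clean[i - 1] and clean[i] >= clean[i + 1]]
    if len(peaks) < 2:
        return None
    # Time interval between adjacent key points corresponds to one breath cycle.
    intervals = np.diff(np.asarray(timestamps)[peaks])
    return 60.0 / intervals.mean()

# Toy data: exhaled air warms the key area with a 4-second period (15 breaths/min).
rng = np.random.default_rng(0)
t = np.arange(0, 60, 0.25)
temps = 34.0 + 0.3 * np.sin(2 * np.pi * t / 4.0) + rng.normal(0, 0.01, t.size)
print(breathing_rate(temps, t))
```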
According to a second aspect of the present disclosure, there is provided a respiration detection region determination apparatus, the apparatus including: an image acquisition module, configured to acquire a first visible light image and a first thermal image matched with the first visible light image, wherein the first visible light image includes a target object; a first region extraction module, configured to extract a first region in the first visible light image, wherein the first region points to an actual breathing region of the target object; a mapping determination module, configured to acquire a target mapping relation, wherein the target mapping relation represents the correspondence between the actual breathing region and a key region, and the key region represents an actual physical region whose temperature changes periodically with the breathing of the target object; a second region extraction module, configured to determine a second region in the first visible light image according to the first region and the target mapping relation, wherein the second region points to the key region; and a respiration detection region determination module, configured to determine a respiration detection region in the first thermal image according to the second region.
In some possible embodiments, the mapping determination module includes: a mapping information determining unit, configured to acquire scene mapping information and mapping relation management information, wherein the scene mapping information represents the correspondence between scene feature information and scene categories, and the mapping relation management information represents the correspondence between scene categories and mapping relations; a target scene feature information determining unit, configured to determine target scene feature information corresponding to the target object; a target scene category determining unit, configured to obtain a target scene category corresponding to the target scene feature information according to the target scene feature information and the scene mapping information; and a target mapping relation determining unit, configured to obtain the target mapping relation according to the target scene category and the mapping relation management information.
In some possible embodiments, the target scene characteristic information determining unit is configured to acquire a target visible light image including the target object; performing multi-scale feature extraction on the target visible light image to obtain feature extraction results of multiple levels; fusing the feature extraction results according to the hierarchy increasing sequence to obtain feature fusion results of a plurality of hierarchies; and fusing the feature fusion results according to the descending order of the levels to obtain the feature information of the target scene.
In some possible embodiments, the target mapping relationship includes direction mapping information, the direction mapping information characterizes a direction of the key region relative to the actual breathing region, and the second region extraction module is configured to determine the second region according to the direction mapping information and the first region.
In some possible embodiments, the target mapping relationship further includes distance mapping information, the distance mapping information characterizes a distance of the key region relative to the actual breathing region, and the second region extraction module is configured to determine the second region according to the direction mapping information, the distance mapping information, and the first region.
In some possible embodiments, the second region extraction module is further configured to obtain preset shape information, where the shape information includes region size information and/or region shape information; determining the second area such that an outer shape of the second area conforms to the outer shape information, and a direction of a center of the second area with respect to a center of the first area conforms to the direction mapping information, and a distance of the center of the second area with respect to the center of the first area conforms to the distance mapping information.
In some possible embodiments, the respiration detection region determining module is configured to acquire a homography matrix, where the homography matrix represents a correspondence between pixel points of the first visible light image and pixel points of the first thermal image; and determining the respiration detection area according to the homography matrix and the second area.
In some possible embodiments, the respiration detection region determination module is further configured to determine, in the first thermal image, an associated region that matches the second region according to the homography matrix; divide the associated region to obtain at least two candidate regions; and determine the candidate region with the highest degree of temperature change as the respiration detection region.
In some possible embodiments, the respiration detection region determination module is further configured to determine a maximum temperature and a minimum temperature of the candidate region within a preset time interval; and obtaining the temperature change degree of the candidate area according to the difference value of the highest temperature and the lowest temperature.
In some possible embodiments, the first region extraction module is configured to perform breathing region extraction on the first visible light image based on a neural network, so as to obtain the first region; the apparatus further includes a neural network training module, configured to acquire a sample visible light image and a label corresponding to the sample visible light image, wherein the label points to a breathing region in the sample visible light image, and the breathing region is an oronasal region or a mask region of a sample target object in the sample visible light image; perform breathing region prediction on the sample visible light image based on the neural network to obtain a breathing region prediction result; and train the neural network according to the breathing region prediction result and the label.
In some possible embodiments, the neural network training module is configured to perform feature extraction on the sample visible light image to obtain a feature extraction result; predicting a respiratory region according to the feature extraction result to obtain a respiratory region prediction result; the neural network training module is further used for performing initial feature extraction on the sample visible light image to obtain a first feature map; performing composite feature extraction on the first feature map to obtain first feature information, wherein the composite feature extraction comprises channel feature extraction; filtering the first feature map based on salient features in the first feature information; extracting second characteristic information in the filtering result; and fusing the first characteristic information and the second characteristic information to obtain the characteristic extraction result.
In some possible embodiments, the apparatus further includes a temperature information determining module, configured to extract, in the first thermal image, first temperature information corresponding to the breath detection area, where the first temperature information is indicative of temperature information corresponding to the key area at a first time.
In some possible embodiments, the temperature information determining module is further configured to determine temperature information corresponding to a relevant pixel point in the breath detection area; and calculating the first temperature information according to the temperature information corresponding to each related pixel point.
In some possible embodiments, the apparatus further includes a breathing frequency determination module, configured to acquire at least one piece of second temperature information, wherein the second temperature information represents temperature information corresponding to the key region at a second time different from the first time; and determine the breathing frequency of the target object according to the first temperature information and the at least one piece of second temperature information.
In some possible embodiments, the breathing frequency determination module is configured to arrange the first temperature information and the at least one second temperature information in a time sequence to obtain a temperature sequence; denoising the temperature sequence to obtain a target temperature sequence; determining a breathing frequency of the target subject based on the target temperature sequence.
In some possible embodiments, the breathing frequency determination module is configured to determine key points in the target temperature sequence, wherein the key points are all peak points or all valley points; for any two adjacent key points, determine the time interval between the two adjacent key points; and determine the breathing frequency according to the time interval.
According to a third aspect of the present disclosure, there is provided an electronic device comprising at least one processor, and a memory communicatively connected to the at least one processor; wherein the memory stores instructions executable by the at least one processor, the at least one processor implementing the method of breath detection area determination according to any one of the first aspect by executing the instructions stored by the memory.
According to a fourth aspect of the present disclosure, there is provided a computer readable storage medium having stored therein at least one instruction or at least one program, the at least one instruction or at least one program being loaded and executed by a processor to implement the respiration detection region determination method according to any one of the first aspect.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Other features and aspects of the present disclosure will become apparent from the following detailed description of exemplary embodiments, which proceeds with reference to the accompanying drawings.
Drawings
In order to more clearly illustrate the embodiments of the present specification or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly introduced below. It is apparent that the drawings in the following description show only some embodiments of the present specification, and that other drawings can be obtained from them by those skilled in the art without inventive effort.
Fig. 1 shows a flow diagram of a method of breath detection area determination according to an embodiment of the present disclosure;
fig. 2 shows a schematic diagram of a registration scenario according to an embodiment of the present disclosure;
FIG. 3 shows a schematic view of registration effect according to an embodiment of the present disclosure;
FIG. 4 shows a schematic flow diagram of a neural network training method in accordance with an embodiment of the present disclosure;
FIG. 5 shows a schematic flow diagram of a feature extraction method according to an embodiment of the present disclosure;
FIG. 6 is a flowchart illustrating a method for obtaining a target mapping relationship according to an embodiment of the disclosure;
fig. 7 is a schematic flowchart illustrating a method for determining target scene characteristic information corresponding to a target object according to an embodiment of the disclosure;
FIG. 8 shows a schematic diagram of a feature extraction network according to an embodiment of the present disclosure;
fig. 9 shows a schematic flow diagram of a method of determining a breath detection area in a first thermal image according to an embodiment of the present disclosure;
FIG. 10 shows a schematic flow diagram of a respiratory rate determination method according to an embodiment of the present disclosure;
fig. 11 shows a block diagram of a breath detection region determination apparatus according to an embodiment of the present disclosure;
FIG. 12 shows a block diagram of an electronic device in accordance with an embodiment of the disclosure;
fig. 13 shows a block diagram of another electronic device in accordance with an embodiment of the disclosure.
Detailed Description
The technical solutions in the embodiments of the present disclosure will be described below clearly and completely with reference to the drawings in the embodiments of the present disclosure. It is apparent that the described embodiments are only a part of the embodiments of the present disclosure rather than all of them. All other embodiments obtained by a person skilled in the art based on the embodiments of the present disclosure without inventive effort fall within the protection scope of the present disclosure.
It should be noted that the terms "first", "second", and the like in the description, the claims, and the drawings of the present disclosure are used to distinguish between similar elements and are not necessarily used to describe a particular sequence or chronological order. It is to be understood that data so used are interchangeable under appropriate circumstances, so that the embodiments described herein can be implemented in orders other than those illustrated or described herein. Furthermore, the terms "comprise", "include", and "have", and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or device that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such a process, method, system, article, or device.
Various exemplary embodiments, features and aspects of the present disclosure will be described in detail below with reference to the accompanying drawings. In the drawings, like reference numbers can indicate functionally identical or similar elements. While the various aspects of the embodiments are presented in drawings, the drawings are not necessarily drawn to scale unless specifically indicated.
The word "exemplary" is used exclusively herein to mean "serving as an example, embodiment, or illustration. Any embodiment described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments.
The term "and/or" herein is merely an association describing an associated object, meaning that three relationships may exist, e.g., a and/or B, may mean: a exists alone, A and B exist simultaneously, and B exists alone. In addition, the term "at least one" herein means any one of a plurality or any combination of at least two of a plurality, for example, including at least one of A, B, C, and may mean including any one or more elements selected from the group consisting of A, B and C.
Furthermore, in the following detailed description, numerous specific details are set forth in order to provide a better understanding of the present disclosure. It will be understood by those skilled in the art that the present disclosure may be practiced without some of these specific details. In some instances, methods, means, elements and circuits that are well known to those skilled in the art have not been described in detail so as not to obscure the present disclosure.
The embodiment of the disclosure provides a respiration detection area determination method, which can analyze a respiration detection area based on a visible light image and a thermal image matched with the visible light image, wherein the change of the temperature of the respiration detection area can reflect the respiration frequency of a target object in the visible light image. By extracting and analyzing the temperature of the respiration detection area, the respiration frequency of the target object can be accurately obtained under the condition of not directly contacting the target object, so that the objective requirement of people on non-contact respiration frequency detection is met. The embodiments of the present disclosure may be used in various specific scenarios requiring contactless detection of respiratory rate, and the embodiments of the present disclosure are not particularly limited to the specific scenarios. For example, in a scene needing isolation, in a scene with dense people flow, in some public places with special requirements, and the like, the method provided by the embodiment of the disclosure can be used for determining the respiration detection area, and then the respiration frequency is determined based on the determined respiration detection area, so that the contactless respiration frequency detection is realized.
The method for determining the respiration detection area provided by the embodiment of the present disclosure may be executed by a terminal device, a server, or other types of electronic devices, where the terminal device may be a User Equipment (UE), a mobile device, a User terminal, a cellular phone, a cordless phone, a Personal Digital Assistant (PDA), a handheld device, a computing device, an in-vehicle device, a wearable device, or the like. In some possible implementations, the breath detection region determination method may be implemented by a processor invoking computer readable instructions stored in a memory. The following describes a method for determining a respiration detection region according to an embodiment of the present disclosure, taking an electronic device as an execution subject.
Fig. 1 shows a schematic flow diagram of a method for determining a respiration detection region according to an embodiment of the present disclosure, as shown in fig. 1, the method includes:
s101, acquiring a first visible light image and a first thermal image matched with the first visible light image, wherein the first visible light image comprises a target object.
In the embodiment of the present disclosure, a target object may be captured by a visible light imaging device to obtain at least two visible light images, where the at least two visible light images may include the first visible light image and may also include at least one second visible light image later. The target object may be captured by a thermal imaging device to obtain at least two thermal images, which may include the first thermal image and may also include at least one second thermal image.
The visible light photographing device and the thermal imaging device can photograph the target object at the same moment to obtain a visible light image and a thermal image that have a matching relationship. In step S101, the visible light photographing device and the thermal imaging device may be triggered to photograph the target object at a first time, so as to obtain the first thermal image and the first visible light image that have a matching relationship. In the following description, it is understood that, at each second time, the visible light photographing device and the thermal imaging device photograph the target object, and a corresponding second visible light image and second thermal image having a matching relationship can be obtained. There may be multiple second times; each second time is different from the first time, and different second times are different from one another.
For example, if the first time is time A, one of the second times is time B, and another of the second times is time C, then the embodiments of the present disclosure may acquire the first thermal image AR and the first visible light image AL having a matching relationship at time A, the second thermal image BA and the second visible light image BL having a matching relationship at time B, and the second thermal image CA and the second visible light image CL having a matching relationship at time C.
The first thermal image and the first visible light image matched with the first thermal image are taken as examples for detailed description in the embodiments of the present disclosure. Each pixel point in the first thermal image corresponds to temperature information, and the temperature information can represent the temperature of the actual physical position corresponding to the pixel point. The matching relationship between the first thermal image and the first visible-light image can be understood as a definite correspondence between the pixel points of the first visible-light image and the pixel points of the first thermal image, and the correspondence can be expressed in the form of a homography matrix. For example, for the pixel a1 in the first visible light image, the corresponding pixel b1 may be determined in the first thermal image according to the homography matrix, and then it may be considered that the pixel a1 and the pixel b1 correspond to the same actual physical location, and according to the temperature information corresponding to the pixel b1, the temperature at the actual physical location may be determined.
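For instance, a pixel can be transferred between the two images with the homography as follows. This is a plain NumPy sketch with an invented homography matrix; the actual matrix comes from the registration step described next.

```python
import numpy as np

def visible_to_thermal(point, homography):
    """Map a pixel (u, v) of the first visible light image to the matching
    pixel of the first thermal image using the homography matrix."""
    u, v = point
    x, y, w = homography @ np.array([u, v, 1.0])
    return x / w, y / w                      # pixel b1 corresponding to pixel a1

# Toy homography: small shift plus scale between the two sensors (illustrative values).
H = np.array([[0.95, 0.0, 12.0],
              [0.0, 0.95, -8.0],
              [0.0, 0.0, 1.0]])
b1 = visible_to_thermal((320, 240), H)
print(b1)   # the temperature at this thermal-image pixel gives the temperature at a1's physical location
```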
In order to accurately obtain the homography matrix, the thermal imaging device and the visible light photographing device may be registered before step S101 to obtain the homography matrix. The purpose of performing this registration in the embodiments of the present disclosure is that, after registration, when the target object is located in a preset space, the correspondence between the pixel points of the visible light image and the pixel points of the thermal image obtained by photographing the target object with the two devices conforms to the homography matrix, and this correspondence does not change regardless of whether the target object is stationary or moving.
Referring to fig. 2, a schematic diagram of a registration scenario according to an embodiment of the present disclosure is shown. The thermal imaging device 1 and the visible light photographing device 2 both face the registration reference object, and the thermal imaging device 1 and the visible light photographing device 2 may be located on the same horizontal line or vertical line, thereby forming a stacked arrangement. The distances between the registration reference object and each of the thermal imaging device 1 and the visible light photographing device 2 are smaller than a first distance threshold, and the distance between the thermal imaging device and the visible light photographing device is smaller than a second distance threshold. The first distance threshold and the second distance threshold may be set according to the registration requirements of the preset space, and are not particularly limited in the embodiments of the present disclosure. Illustratively, the first distance threshold may be 1-2 meters, and the second distance threshold may be 20-30 centimeters. After the thermal imaging device 1 and the visible light photographing device 2 in fig. 2 are registered, when an object in the preset space is photographed, the obtained visible light image and thermal image are matched regardless of whether the object is stationary or moving, and the matching relationship conforms to the homography matrix. The registration reference object is used only for performing the registration; after registration, the thermal imaging device 1 and the visible light photographing device 2 can both photograph a target object located in the preset space to obtain the images used in step S101 or images to be used later.
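One common way to obtain such a matrix during registration is to mark corresponding points of the registration reference object in one visible light frame and one thermal frame and fit a homography to them. The sketch below uses OpenCV for the fit; OpenCV is only an illustrative choice, and the coordinates are made up, since the disclosure does not specify how the matrix is computed.

```python
import numpy as np
import cv2

# Corresponding corner points of the registration reference object, picked once
# in a visible-light frame and in a thermal frame (illustrative coordinates).
visible_pts = np.array([[100, 80], [540, 80], [540, 400], [100, 400]], dtype=np.float32)
thermal_pts = np.array([[ 90, 70], [500, 72], [498, 388], [ 92, 386]], dtype=np.float32)

# Fit the homography mapping visible-light pixels to thermal pixels.
H, mask = cv2.findHomography(visible_pts, thermal_pts)
print(H)
```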
In one possible embodiment, a first video stream output by the registered visible light photographing device may be acquired, all frame images of which are visible light images, and a second video stream output by the registered thermal imaging device may be acquired, the frame images of which are thermal images. The first visible light image and the at least one second visible light image used hereinafter may be determined from the first video stream, and the first thermal image and the at least one second thermal image used hereinafter may be determined from the second video stream.
Please refer to fig. 3, which illustrates a schematic diagram of registration effect according to an embodiment of the present disclosure. In fig. 3, the first row of left and right images respectively represent a schematic comparison diagram of the first visible light image and the first thermal image when the target object is located in the middle of the preset space, and the shadow in the first thermal image represents temperature information of the location of the target object. In fig. 3, the second row of left and right images respectively represent a comparison schematic diagram of the first visible light image and the first thermal image when the target object is located at the left portion of the preset space. In fig. 3, the third row of left and right images respectively represent a comparison schematic diagram of the first visible light image and the first thermal image when the target object is located at the right portion of the preset space. As can be seen from fig. 3, no matter where the target object is located in the preset space, the corresponding matching relationship between the first thermal image and the first visible-light image is not changed.
The embodiments of the present disclosure aim to determine a respiration detection region and further detect the respiratory rate. Since the respiratory rate is a physiological parameter, the target object is a corresponding living organism; the following description takes a human target object as an example.
S102: a first region is extracted in the first visible light image, the first region pointing to an actual breathing region of the target subject.
The actual breathing area of the target subject in the embodiments of the present disclosure may be an oral-nasal area or a mask area when the mask is worn by the target subject, and the oral-nasal area may be understood as an oral area or a nasal area, and may also be understood as including an oral area and a nasal area. The embodiment of the present disclosure does not limit the specific extraction manner of the first region, and may extract manually or automatically. In one embodiment, the first visible light image may be subjected to respiratory region extraction based on a neural network, so as to obtain the first region. The number of target objects and the number of first regions are not limited in the embodiments of the present disclosure, and a single first region is described as an example hereinafter.
In one embodiment, please refer to fig. 4, which shows a schematic flow chart of a neural network training method according to an embodiment of the present disclosure, including:
s201: and acquiring a sample visible light image and a label corresponding to the sample visible light image.
In the embodiment of the present disclosure, the label points to a breathing region in the sample visible light image; the breathing area is an oronasal area or a mask area of the sample target object in the sample visible light image. In one embodiment, the sample visible light image and the first visible light image in step S101 may be obtained by shooting the same preset space by the same visible light imaging device.
S202: and performing feature extraction on the sample visible light image to obtain a feature extraction result.
The embodiment of the present disclosure does not limit feature extraction, for example, the neural network may perform feature extraction layer by layer based on the feature pyramid. In one embodiment, please refer to fig. 5, which illustrates a flowchart of a feature extraction method according to an embodiment of the present disclosure. The feature extraction includes:
s1, performing initial feature extraction on the sample visible light image to obtain a first feature map.
The embodiment of the present disclosure does not limit a specific method for extracting the initial feature, and for example, at least one stage of convolution processing may be performed on the image to obtain the first feature map. In the process of performing convolution processing, a plurality of image feature extraction results of different scales can be obtained, and the first feature map can be obtained by fusing the image feature extraction results of at least two different scales.
And S2, performing composite feature extraction on the first feature graph to obtain first feature information, wherein the composite feature extraction comprises channel feature extraction.
In an embodiment, performing composite feature extraction on the first feature map to obtain the first feature information may include: performing image feature extraction on the first feature map to obtain a first extraction result; performing channel information extraction on the first feature map to obtain a second extraction result; and fusing the first extraction result and the second extraction result to obtain the first feature information. The embodiment of the present disclosure does not limit the method for performing image feature extraction on the first feature map; for example, at least one stage of convolution processing may be performed on the first feature map to obtain the first extraction result. The channel information extraction in the embodiment of the present disclosure may focus on mining the relationships between the channels of the first feature map; illustratively, it may be implemented by fusing features across multiple channels. By fusing the first extraction result and the second extraction result, the composite feature extraction not only retains the low-order information of the first feature map itself but also sufficiently extracts the high-order inter-channel information, which improves the information richness and expressiveness of the mined first feature information. At least one fusion method may be used in the composite feature extraction; the fusion method is not limited by the embodiment of the present disclosure, and at least one of dimensionality reduction, addition, multiplication, inner product, convolution, and averaging, or a combination thereof, may be used for fusion.
And S3, filtering the first characteristic diagram based on the remarkable characteristics in the first characteristic information.
In the embodiment of the present disclosure, a more salient region and a less salient region in the first feature map may be determined according to the first feature information, and the information in the more salient region is filtered out to obtain a filtering result. The embodiment of the present disclosure does not limit the method for identifying the salient features, which may be determined based on a neural network or based on expert experience.
And S4, extracting second characteristic information in the filtering result.
Specifically, the salient features in the filtering result can be suppressed to obtain a second feature map. Suppressing the salient features in the filtering result to obtain the second feature map includes: performing feature extraction on the filtering result to obtain target features, performing composite feature extraction on the target features to obtain target feature information, and filtering the target features based on the salient features in the target feature information to obtain the second feature map. If a preset stopping condition has not been reached, the filtering result is updated according to the second feature map, and the step of suppressing the salient features in the filtering result to obtain a second feature map is repeated. When the stopping condition is reached, the second feature information is formed from all of the acquired target feature information.
And S5, fusing the first characteristic information and the second characteristic information to obtain the characteristic extraction result.
Based on the above configuration, the salient features can be filtered out layer by layer through this hierarchical structure, and composite feature extraction including channel information extraction is performed on each filtering result, so that second feature information containing multiple pieces of target feature information is obtained. Discriminative information is thus mined layer by layer, which improves the effectiveness and discriminative power of the second feature information and further enriches the information in the final feature extraction result. The feature extraction method in the embodiments of the present disclosure can be used for feature extraction of the sample visible light image, and can be used in the embodiments of the present disclosure whenever a neural network needs to be trained based on the sample visible light image.
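One plausible reading of steps S1 to S5 is sketched below in PyTorch, purely as an assumption about how the pieces could be wired. The convolution sizes, the squeeze-and-excitation style channel branch, the mean-based saliency with quantile thresholding, the two-round stopping condition, and summation as the fusion operator are all choices of this sketch, not requirements of the disclosure.

```python
import torch
import torch.nn as nn

class CompositeFeatureExtractor(nn.Module):
    def __init__(self, in_ch=3, ch=32):
        super().__init__()
        self.stem = nn.Sequential(nn.Conv2d(in_ch, ch, 3, padding=1), nn.ReLU())   # S1: initial feature extraction
        self.spatial = nn.Sequential(nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU())   # image-feature branch
        self.channel = nn.Sequential(nn.AdaptiveAvgPool2d(1),                      # channel-information branch
                                     nn.Conv2d(ch, ch, 1), nn.Sigmoid())

    def composite(self, feat):
        # S2: fuse the image-feature branch with the channel branch (fusion by multiplication here).
        return self.spatial(feat) * self.channel(feat)

    def forward(self, img, rounds=2, keep_ratio=0.75):
        first_map = self.stem(img)                       # first feature map
        first_info = self.composite(first_map)           # first feature information
        filtered, collected, info = first_map, [], first_info
        for _ in range(rounds):                          # S3/S4 repeated until the stop condition
            saliency = info.mean(dim=1, keepdim=True)    # per-location saliency of the feature information
            thresh = torch.quantile(saliency.flatten(1), keep_ratio, dim=1)
            mask = (saliency < thresh.view(-1, 1, 1, 1)).float()
            filtered = filtered * mask                   # filter out the more salient locations
            target = self.composite(self.spatial(filtered))
            collected.append(target)                     # target feature information of this round
            info = target
        second_info = torch.stack(collected).sum(0)      # second feature information
        return first_info + second_info                  # S5: fuse first and second feature information

x = torch.randn(1, 3, 64, 64)
print(CompositeFeatureExtractor()(x).shape)
```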
S203: and predicting the respiratory region according to the feature extraction result to obtain a respiratory region prediction result.
Steps S202 to S203 are implemented based on the above neural network. Specifically, the neural network may be one of a Convolutional Neural Network (CNN), a Region-based Convolutional Neural Network (R-CNN), a Fast Region-based Convolutional Neural Network (Fast R-CNN), a Faster Region-based Convolutional Neural Network (Faster R-CNN), or a variant thereof.
S204: and training the neural network according to the respiratory region prediction result and the label.
In one embodiment, the neural network may be feedback-trained using a gradient descent method or a random gradient descent method, so that the trained neural network has the capability of directly and accurately determining the breathing region in the image.
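A bare-bones training step consistent with S201 to S204 is sketched below as an assumption: a tiny stand-in network with a box-regression head, a smooth L1 loss, and plain SGD. The disclosure fixes neither the network, the loss, nor the optimizer, and the sample data here is random.

```python
import torch
import torch.nn as nn

# Minimal stand-in network: predicts a breathing-region box (x, y, w, h) from an image.
model = nn.Sequential(nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
                      nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 4))
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)
criterion = nn.SmoothL1Loss()

# One sample visible light image and its label (a box around the oronasal or mask area).
sample = torch.randn(1, 3, 128, 128)
label_box = torch.tensor([[0.4, 0.5, 0.2, 0.15]])   # normalized (x, y, w, h)

for step in range(100):                      # S202-S204: predict, compare with the label, update
    prediction = model(sample)               # breathing region prediction result
    loss = criterion(prediction, label_box)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
print(loss.item())
```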
In another embodiment, the breathing region is a mask region, and the neural network includes a first neural network and a second neural network; the extracting the first region in the first visible light image includes: extracting a face target in the first visible light image based on the first neural network; and extracting a breathing region within the face target based on the second neural network, the extracted breathing region being the first region. The inventive concept of the training method of the first neural network and the second neural network is the same as described above and is not repeated here. Based on this configuration, the mask region is determined only on the basis of a detected face, which avoids performing subsequent respiratory rate analysis on a mask that is not being worn on a face.
S103: and acquiring a target mapping relation, wherein the target mapping relation represents the corresponding relation between the actual breathing area and a key area, and the key area represents an actual physical area of which the temperature changes periodically along with the breathing of the target object.
In some scenarios, the breathing of the target subject may cause a change in the actual temperature of a critical area associated with the actual breathing zone. For example, if the target object is in a left-side sleeping posture, the mouth and the nose inhale air flow from the left lower side during inhalation, and exhale air flow to the left lower side during exhalation, the key area is located at the left lower side of the actual breathing area. If the target object presents a right side sleeping posture, the mouth and the nose inhale air flow from the lower right side during inspiration, the air flow is exhaled towards the lower right side during exhalation, and the key area is located at the lower right side of the actual breathing area.
The present disclosure does not limit the method for obtaining the correspondence between the actual breathing zone and the key zone, and it may be obtained empirically. In an embodiment, please refer to fig. 6, which shows a flowchart of a method for obtaining a target mapping relationship according to an embodiment of the present disclosure, including:
s1031: the method comprises the steps of obtaining scene mapping information and mapping relation management information, wherein the scene mapping information represents the corresponding relation between scene characteristic information and scene types, and the mapping relation management information represents the corresponding relation between the scene types and the mapping relation.
In some embodiments, for each scene category, feature information extraction may be performed on a plurality of visible light images corresponding to the scene category, and scene feature information of the scene category is determined according to a feature information extraction result. The embodiment of the present disclosure does not limit the specific manner of determining the scene feature information of the scene category according to the feature information extraction result, for example, clustering may be further performed according to the feature information extraction result, and the feature information corresponding to the clustering center is determined as the scene feature information of the scene category. Or randomly selecting a plurality of feature extraction results, and determining the average value of each feature extraction result as the scene feature information of the scene category.
The setting of the scene categories is not limited in the embodiments of the present disclosure. For example, in one embodiment, typical scenes may be classified hierarchically: the major classes may be sleep scenes, activity scenes, sedentary scenes, and the like, and the minor classes represent the specific posture of the target object within each major class, for example, within the sleep scenes, whether the target object sleeps on the left side, on the right side, or on the back. For each scene category, its corresponding mapping relation may be determined.
In the embodiment of the present disclosure, the scene mapping information and the mapping relation management information may be set according to the actual situation and modified as that situation changes, so that the scheme can be adaptively updated as new scenes are added and the key area can be accurately determined in a wide range of scenes.
S1032: and determining the target scene characteristic information corresponding to the target object.
In the embodiment of the present disclosure, the target scene feature information may be extracted from at least one visible light image in which the target object is located, an image used for extracting the target scene feature information is referred to as a target visible light image, and the target visible light image may be the first visible light image or the second visible light image. Please refer to fig. 7, which shows a flowchart of a method for determining target scene characteristic information corresponding to a target object, including:
S10321: and acquiring a target visible light image comprising the target object.
As described above, the target visible light image may be the first visible light image or the second visible light image.
S10322: and performing multi-scale feature extraction on the target visible light image to obtain feature extraction results of a plurality of levels.
The embodiment of the disclosure can extract the target scene feature information based on a feature extraction network. Referring to fig. 8, a schematic diagram of a feature extraction network according to an embodiment of the disclosure is shown. The feature extraction network extends a standard convolutional network with top-down pathways and lateral connections, so that rich, multi-scale feature extraction results can be obtained from a target visible light image of a single resolution. The feature extraction network is illustrated with only 3 layers; in practical applications it may comprise 4 layers or more. A down-sampling network layer in the feature extraction network may output the feature extraction result of each scale; here "down-sampling network layer" is a generic term for the network layers that implement feature aggregation, and it may specifically be a maximum pooling layer, an average pooling layer, or the like.
S10323: and fusing the feature extraction results according to the ascending order of the hierarchy to obtain feature fusion results of a plurality of hierarchies.
In the embodiment of the present disclosure, the feature extraction results extracted from different layers of the feature extraction network have different scales, and they may be fused in ascending order of hierarchy to obtain feature fusion results of multiple hierarchies. Taking fig. 8 as an example, the feature extraction network may include three feature extraction layers that output feature extraction results A1, B1 and C1 in ascending order of hierarchy. The embodiment of the present disclosure does not limit the representation of the feature extraction results; A1, B1 and C1 may be characterized by a feature map, a feature matrix or a feature vector. The feature extraction results A1, B1 and C1 may be fused in sequence to obtain feature fusion results of multiple levels. For example, inter-channel information fusion may be performed on the feature extraction result A1 to obtain a feature fusion result A2; the feature extraction results A1 and B1 may be fused to obtain a feature fusion result B2; and the feature extraction results A1, B1 and C1 may be fused to obtain a feature fusion result C2. The disclosed embodiments do not limit the specific fusion method; at least one of dimension reduction, addition, multiplication, inner product, convolution and combinations thereof may be used to perform the above fusion.
S10324: and fusing the feature fusion results according to the descending order of the hierarchy to obtain the feature information of the target scene.
For example, the feature fusion results C2, B2 and A2 obtained above may be fused in sequence to obtain the scene feature information (target scene feature information). The fusion method used here may be the same as or different from that of the previous step, which is not limited in this disclosure. With this bidirectional fusion, the target scene feature information not only contains richer feature information but also carries sufficient context information.
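The following sketch is a simplified, assumption-laden rendering of steps S10322–S10324 using NumPy, with average pooling standing in for the down-sampling network layers and element-wise addition standing in for the fusion operator; it is not the disclosed network, only an executable illustration of the ascending-then-descending fusion order:

```python
import numpy as np


def downsample(feature_map: np.ndarray) -> np.ndarray:
    """2x2 average pooling, standing in for a down-sampling network layer."""
    h, w = feature_map.shape[0] // 2 * 2, feature_map.shape[1] // 2 * 2
    f = feature_map[:h, :w]
    return (f[0::2, 0::2] + f[1::2, 0::2] + f[0::2, 1::2] + f[1::2, 1::2]) / 4.0


def upsample(feature_map: np.ndarray, shape) -> np.ndarray:
    """Nearest-neighbour upsampling so maps of different levels can be added."""
    return np.kron(feature_map, np.ones((2, 2)))[: shape[0], : shape[1]]


def bidirectional_fusion(image: np.ndarray, levels: int = 3) -> np.ndarray:
    # S10322: multi-scale extraction (A1, B1, C1) in ascending order of level.
    extracted = [image]
    for _ in range(levels - 1):
        extracted.append(downsample(extracted[-1]))
    # S10323: fuse in ascending order of level (A2, B2, C2).
    ascending = [extracted[0]]
    for level in range(1, levels):
        ascending.append(downsample(ascending[-1]) + extracted[level])
    # S10324: fuse in descending order of level to get the scene feature info.
    fused = ascending[-1]
    for level in range(levels - 2, -1, -1):
        fused = upsample(fused, ascending[level].shape) + ascending[level]
    return fused


scene_feature_map = bidirectional_fusion(np.random.rand(64, 64))
```

A real implementation would use learned convolutional layers rather than pooling and addition; the sketch only shows how the two fusion passes are ordered.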
S1033: and obtaining a target scene type corresponding to the target scene characteristic information according to the target scene characteristic information and the scene mapping information.
In some embodiments, the scene category corresponding to the scene feature information closest to the target scene feature information may be determined as the target scene category. Steps S1032-S1033 may be performed based on a neural network so that the target scene category is determined automatically. In other possible embodiments, the target scene category may also be obtained directly from user input.
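A hedged sketch of this nearest-neighbour matching, assuming the scene mapping information is held as a dictionary from scene category to scene feature vector (category names and vectors below are placeholders):

```python
import numpy as np


def match_scene_category(target_feature: np.ndarray, scene_mapping: dict) -> str:
    """Return the scene category whose scene feature information is closest
    (in Euclidean distance) to the target scene feature information."""
    return min(
        scene_mapping,
        key=lambda category: np.linalg.norm(scene_mapping[category] - target_feature),
    )


scene_mapping_info = {
    "sleep/left": np.array([0.9, 0.1, 0.0]),   # placeholder scene features
    "sleep/right": np.array([0.1, 0.9, 0.0]),
    "sedentary": np.array([0.0, 0.1, 0.9]),
}
target_category = match_scene_category(np.array([0.8, 0.2, 0.1]), scene_mapping_info)
```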
S1034: and obtaining the target mapping relation according to the target scene type and the mapping relation management information.
In this embodiment, the target mapping relation can be obtained through automatic adaptation to different scenes, so that the breathing detection area can be accurately determined in various scenes; this improves the accuracy of the breathing detection area and in turn ensures the detection accuracy of the breathing frequency.
S104, determining a second area in the first visible light image according to the first area and the target mapping relation, wherein the second area points to the key area.
In one embodiment, the target mapping relationship includes direction mapping information, the direction mapping information represents the direction of the key region relative to the actual breathing region, and the second region may be determined according to the direction mapping information and the first region. Further, in some embodiments, the target mapping relationship may also include distance mapping information, the distance mapping information represents the distance of the key region relative to the actual breathing region, and the second region may then be determined according to the direction mapping information, the distance mapping information and the first region. The embodiments of the present disclosure do not limit the direction mapping information or the distance mapping information; for example, in the case where the target object lies on the left side, the distance mapping information may be set to 0.2 to 0.5 meters. Based on this configuration, the second area can be obtained from the first area that points to the actual breathing area; because the temperature change of the second area reflects the breathing of the target object, accurately locating the second area improves the positioning accuracy of the breathing detection area.
In one possible embodiment, preset shape information may also be obtained, where the shape information includes region size information and/or region shape information. For example, the shape and the area of the second region may be set in advance: the region shape information may be set to a rectangle or a circle, and the region size information may be set to 3 to 5 square centimeters. Based on this setting, the second region is determined such that its shape matches the shape information, the direction of the center of the second region relative to the center of the first region matches the direction mapping information, and the distance of the center of the second region from the center of the first region matches the distance mapping information. The embodiment of the present disclosure does not limit how the shape information is set; it may be set empirically. Based on this configuration, the determination result of the second region can be made more accurate.
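Purely as an illustration, placing the second region from the direction mapping information, distance mapping information and preset shape information might look like the sketch below; it assumes the direction is given as an image-plane vector and the distance has already been converted from metres to pixels, which the disclosure does not specify:

```python
import numpy as np


def locate_second_region(first_region_center, direction, distance_px, size_px):
    """Place the second region so that its centre lies `distance_px` pixels
    from the first region's centre along `direction`, with a preset
    rectangular shape of width/height `size_px`."""
    direction = np.asarray(direction, dtype=float)
    direction /= np.linalg.norm(direction)  # normalise the direction mapping info
    cx, cy = np.asarray(first_region_center, dtype=float) + distance_px * direction
    w, h = size_px
    return (int(cx - w / 2), int(cy - h / 2), int(w), int(h))  # (x, y, w, h) box


# Left-side sleeping posture: the key area lies to the lower left of the actual
# breathing area (image y axis points down), here ~120 px away, 40x40 px shape.
second_region = locate_second_region(
    (320, 240), direction=(-1.0, 1.0), distance_px=120, size_px=(40, 40)
)
```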
And S105, determining a respiration detection area in the first thermal image according to the second area.
In an embodiment of the present disclosure, a matching relationship between the first visible light image and the first thermal image may be expressed by a homography matrix, that is, the homography matrix represents a corresponding relationship between pixel points of the first visible light image and pixel points of the first thermal image. This homography matrix can be determined after registration of the visible light imaging device and the thermal imaging device.
In one embodiment, the second region may be mapped into the first thermal image based on the homography matrix to obtain a respiration detection region.
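A possible sketch of this mapping using OpenCV, assuming a 3x3 homography matrix obtained from the registration of the two devices; taking the axis-aligned bounding box of the warped corners is an implementation choice, not something stated in the disclosure:

```python
import cv2
import numpy as np


def map_region_to_thermal(second_region, homography: np.ndarray):
    """Map the corners of the second region (x, y, w, h) from visible light
    image coordinates into thermal image coordinates via the homography."""
    x, y, w, h = second_region
    corners = np.float32(
        [[x, y], [x + w, y], [x + w, y + h], [x, y + h]]
    ).reshape(-1, 1, 2)
    warped = cv2.perspectiveTransform(corners, homography).reshape(-1, 2)
    xs, ys = warped[:, 0], warped[:, 1]
    # Use the axis-aligned bounding box of the warped corners as the region.
    return (
        int(xs.min()),
        int(ys.min()),
        int(xs.max() - xs.min()),
        int(ys.max() - ys.min()),
    )
```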
In another embodiment, please refer to fig. 9, which illustrates a flowchart of a method for determining a respiration detection region in a first thermal image according to an embodiment of the present disclosure. The method comprises the following steps:
S1051, determining, according to the homography matrix, an associated region in the first thermal image that matches the second region.
In the embodiment of the present disclosure, the second region may be directly mapped to the first thermal image based on the homography matrix to obtain the associated region, which has the same size and shape as the second region.
And S1052, dividing the associated region to obtain at least two candidate regions.
To facilitate locating the area with the most obvious temperature change, the associated region may be divided to obtain at least two candidate regions.
And S1053, determining the candidate area with the highest temperature change degree as the respiration detection area.
In the embodiment of the disclosure, selecting the candidate region with the highest temperature change degree makes the respiration detection region more accurate, so that respiration frequency detection based on this region is less disturbed by noise and yields a more accurate result.
In the embodiment of the present disclosure, a preset time interval may be determined first, and for each candidate region, the maximum temperature and the minimum temperature of the candidate region within the preset time interval are obtained; the temperature change degree of the candidate region is then obtained from the difference between the maximum temperature and the minimum temperature. For example, a plurality of thermal images of the target object captured within the preset time interval may be selected from the second video stream, the minimum and maximum temperatures reached by the candidate region determined from these thermal images, and their difference taken as the temperature change degree of the candidate region. With this configuration, how pronounced the temperature change of each candidate region is can be accurately evaluated and the candidate region with the most pronounced change selected, further improving the positioning accuracy of the respiration detection region.
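One way this candidate selection could be realized, assuming the thermal frames within the preset time interval are available as a stack of per-pixel temperature maps and the associated region is divided on a regular grid (the grid size is an assumption):

```python
import numpy as np


def best_candidate_region(thermal_frames: np.ndarray, region, grid=(2, 2)):
    """Split `region` (x, y, w, h) into grid cells and return the cell whose
    mean temperature varies most (max - min) over the frames in the window."""
    x, y, w, h = region
    rows, cols = grid
    best_cell, best_change = None, -1.0
    for i in range(rows):
        for j in range(cols):
            cx, cy = x + j * w // cols, y + i * h // rows
            cw, ch = w // cols, h // rows
            series = thermal_frames[:, cy:cy + ch, cx:cx + cw].mean(axis=(1, 2))
            change = series.max() - series.min()  # temperature change degree
            if change > best_change:
                best_cell, best_change = (cx, cy, cw, ch), change
    return best_cell, best_change


# 50 thermal frames over the preset time interval, 120x160 temperature maps.
frames = 30 + np.random.rand(50, 120, 160)
respiration_region, change_degree = best_candidate_region(frames, (60, 40, 32, 32))
```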
The method for determining the breathing detection area can accurately determine the area which can be used for detecting the breathing frequency in the thermal image, and can further detect the breathing frequency of the target object by analyzing the temperature of the area.
Further, in the embodiment of the present disclosure, first temperature information corresponding to the respiration detection area may be extracted from the first thermal image, and the first temperature information represents temperature information corresponding to the key area at a first time.
Specifically, the temperature information corresponding to the relevant pixel points in the respiration detection area may be determined, and the first temperature information calculated from the temperature information corresponding to each relevant pixel point. By determining the first temperature information and combining it with other temperature information, the breathing frequency of the target object can be determined without contact.
The embodiment of the present disclosure does not limit which pixel points are the relevant pixel points. For example, every pixel in the respiration detection region may be a relevant pixel point. In one embodiment, pixel points may also be filtered based on their temperature information: pixel points whose temperature information does not meet a preset temperature requirement are filtered out, and the remaining pixel points are determined as the relevant pixel points. The embodiment of the present disclosure does not limit the preset temperature requirement; for example, an upper temperature limit, a lower temperature limit or a temperature interval may be defined.
The embodiments of the present disclosure do not limit the specific method of calculating the first temperature information. For example, the average or a weighted average of the temperature information of the relevant pixel points may be determined as the first temperature information; the weights are not limited in the embodiment of the present disclosure and may be set according to actual requirements. In one embodiment, the weight of a relevant pixel point may be inversely related to its distance from the center of the respiration detection region: the closer a relevant pixel point is to the center, the higher its weight, and the farther away it is, the lower its weight.
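As an illustrative sketch of this weighting scheme (the temperature filter bounds and the exact weight formula are assumptions), the first temperature information for one thermal frame could be computed as follows:

```python
import numpy as np


def region_temperature(thermal_frame: np.ndarray, region,
                       temp_range=(28.0, 42.0)) -> float:
    """Weighted mean temperature of a region, filtering out pixels outside a
    preset temperature interval and weighting the rest inversely to their
    distance from the region centre."""
    x, y, w, h = region
    patch = thermal_frame[y:y + h, x:x + w]
    ys, xs = np.mgrid[0:h, 0:w]
    dist = np.hypot(xs - (w - 1) / 2.0, ys - (h - 1) / 2.0)
    weights = 1.0 / (1.0 + dist)  # closer to the centre => higher weight
    valid = (patch >= temp_range[0]) & (patch <= temp_range[1])
    if not valid.any():
        return float(patch.mean())  # fall back to a plain mean
    return float((patch[valid] * weights[valid]).sum() / weights[valid].sum())
```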
In one embodiment, the method of further detecting respiratory rate comprises:
S301, at least one piece of second temperature information is obtained, and the second temperature information represents temperature information corresponding to the key area at a second moment different from the first moment.
As described above, the manner of acquiring the second temperature information and the manner of acquiring the first temperature information are based on the same inventive concept and are not repeated here. A second visible light image and a second thermal image matched with the second visible light image determine a corresponding piece of second temperature information, and different pieces of second temperature information represent the temperature information corresponding to the key area at different second times.
Specifically, taking the acquisition of one piece of second temperature information as an example, a corresponding second visible light image and a second thermal image matched with it may be acquired, where the second visible light image includes the target object at the second time. A third region pointing to the actual breathing region is extracted in the second visible light image. A fourth region pointing to the key region is then determined in the second visible light image according to the third region and the target mapping relation. Finally, temperature information is extracted, based on the second thermal image, from the respiration detection region determined from the fourth region, yielding the second temperature information.
S302, determining the breathing frequency of the target object according to the first temperature information and the at least one piece of second temperature information.
The breathing of the target object causes the temperature of the key area to change periodically: when the target object inhales, the temperature of the key area decreases, and when the target object exhales, the temperature of the key area increases. The trends of the first temperature information and the at least one piece of second temperature information therefore reflect this periodic law, and the breathing frequency of the target object can be determined by analysing the first temperature information and the at least one piece of second temperature information.
Please refer to fig. 10, which shows a flowchart of a respiratory rate determining method according to an embodiment of the present disclosure, including:
S3021, arranging the first temperature information and the at least one second temperature information according to a time sequence to obtain a temperature sequence.
A temperature sequence may be obtained for each target object. If the thermal imaging device and the visible light imaging device capture multiple objects simultaneously, a corresponding temperature sequence can be obtained for each of them based on the above method, so that the breathing frequency of each object can finally be determined. The embodiment of the present disclosure describes the breathing frequency detection method in detail taking a single object as an example.
And S3022, performing noise reduction on the temperature sequence to obtain a target temperature sequence.
In the embodiment of the disclosure, a noise reduction strategy and a noise reduction mode can be determined; and processing the temperature sequence based on the noise reduction mode according to the noise reduction processing strategy to obtain the target temperature sequence.
The embodiments of the present disclosure do not limit the specific content of the above noise reduction strategy and noise reduction mode. Illustratively, the noise reduction processing strategy includes at least one of: denoising based on a high-frequency threshold, denoising based on a low-frequency threshold, filtering of random noise, and a posteriori denoising. Illustratively, the noise reduction processing is implemented based on at least one of the following modes: independent component analysis, Laplacian pyramid, band-pass filtering, wavelets, and Hamming windows.
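As one hedged illustration of the band-pass filtering mode listed above, the temperature sequence could be restricted to plausible breathing frequencies with SciPy; the sampling rate and the 0.1–0.7 Hz band (6–42 breaths per minute) are assumptions, not values from the disclosure:

```python
import numpy as np
from scipy.signal import butter, filtfilt


def denoise_temperature_sequence(temps: np.ndarray, fs: float,
                                 low_hz: float = 0.1, high_hz: float = 0.7) -> np.ndarray:
    """Band-pass the temperature sequence so that only plausible breathing
    frequencies remain; the result serves as the target temperature sequence."""
    b, a = butter(2, [low_hz, high_hz], btype="bandpass", fs=fs)
    return filtfilt(b, a, temps)


fs = 8.0  # assumed thermal sampling rate, in frames per second
t = np.arange(0, 30, 1 / fs)
raw = 36.5 + 0.2 * np.sin(2 * np.pi * 0.3 * t) + 0.05 * np.random.randn(t.size)
target_sequence = denoise_temperature_sequence(raw, fs)
```

Other listed modes (wavelets, independent component analysis, and so on) would replace the filter while leaving the surrounding steps unchanged.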
Taking a posteriori denoising as an example, a breathing frequency verification condition and an empirical noise reduction parameter corresponding to the a posteriori denoising may be set, and the temperature sequence is denoised according to the empirical noise reduction parameter to obtain the target temperature sequence. The breathing frequency of the target object is determined from the target temperature sequence and verified against the breathing frequency verification condition; if the verification passes, the noise reduction effect of the empirical parameter is considered acceptable, and when step S3022 is executed again, noise reduction may be performed directly based on that parameter. The embodiment of the present disclosure does not limit how the empirical noise reduction parameter is determined; it may be obtained from expert experience.
S3023, determining the breathing frequency of the target object based on the target temperature sequence.
By determining the temperature sequence and performing noise reduction on it, the noise that affects the breathing frequency calculation can be filtered out, so that the obtained breathing frequency is more accurate. Specifically, determining the breathing frequency of the target object based on the target temperature sequence includes:
S30231, determining each key point in the target temperature sequence, wherein the key points are all peak points or all valley points.
S30232, for any two adjacent key points, determining the time interval between the two adjacent key points.
For the extracted N key points, each pair of adjacent key points yields a corresponding time interval, so N-1 time intervals can be determined.
S30233, determining the breathing frequency according to the time interval.
The disclosed embodiments do not limit the specific method of determining the breathing frequency from the time intervals. For example, the reciprocal of one of the N-1 time intervals may be determined as the breathing frequency, or the breathing frequency may be determined based on some or all of the N-1 time intervals; for instance, the reciprocal of the average of some or all of the N-1 time intervals may be determined as the breathing frequency. By calculating the time intervals between adjacent key points, the disclosed embodiments can accurately determine the breathing frequency.
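For illustration, determining the breathing frequency from the average interval between adjacent peak key points of the denoised target temperature sequence could be sketched as follows (the use of SciPy's peak finder and a uniform sampling rate are assumptions):

```python
import numpy as np
from scipy.signal import find_peaks


def breathing_rate_bpm(target_sequence: np.ndarray, fs: float) -> float:
    """Breathing rate from the mean time interval between adjacent peak
    points of the (denoised) target temperature sequence."""
    peaks, _ = find_peaks(target_sequence)
    if len(peaks) < 2:
        raise ValueError("need at least two key points to form a time interval")
    intervals = np.diff(peaks) / fs      # N-1 intervals, in seconds
    return 60.0 / float(intervals.mean())  # breaths per minute
```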
The embodiments of the present disclosure can detect one or more target objects in real time as long as the target objects are located in the above-mentioned preset space. The breathing frequency can be determined by capturing visible light images and thermal images of the target object without contacting it, so the method can be widely applied to various scenes. For example, in hospital ward monitoring, a patient's breathing frequency can be monitored without the patient wearing any equipment, which reduces the discomfort of monitoring and improves its quality, effect and efficiency. In enclosed scenes such as offices and office building lobbies, the breathing frequency of people on site can be detected to judge whether any abnormality exists. In an infant care scene, the infant's breathing can be detected to avoid suffocation caused by food blocking the respiratory tract, and the infant's breathing frequency can be analysed in real time to judge the infant's health state. In scenes with a high infection risk, remotely controlled thermal imaging and visible light imaging devices can capture a target object that may become a source of infection, so that the target object's vital signs are monitored while infection is avoided.
According to the embodiment of the disclosure, by analysing the matched visible light image and thermal image captured of the target object, a breathing frequency detection result is obtained without contacting the target object. This realizes non-contact detection, fills the gap in non-contact detection scenarios, and offers good detection speed and detection accuracy.
It will be understood by those skilled in the art that, in the above method of the present disclosure, the order in which the steps are written does not imply a strict execution order or constitute any limitation on the implementation; the specific execution order of the steps should be determined by their functions and possible inherent logic.
It is understood that the above-mentioned method embodiments of the present disclosure can be combined with each other to form combined embodiments without departing from the underlying principles and logic; due to space limitations, the details are not repeated in the present disclosure.
Fig. 11 shows a block diagram of a breath detection region determination apparatus according to an embodiment of the present disclosure. As shown in fig. 11, the above apparatus includes:
the image acquiring module 10 is configured to acquire a first visible light image and a first thermal image matched with the first visible light image, where the first visible light image includes a target object.
A first region extracting module 20, configured to extract a first region in the first visible light image, where the first region points to an actual breathing region of the target object.
The mapping determining module 30 is configured to obtain a target mapping relationship, where the target mapping relationship represents a corresponding relationship between the actual breathing region and a key region, and the key region represents an actual physical region where temperature changes periodically along with breathing of the target subject.
A second region extracting module 40, configured to determine a second region in the first visible light image according to the mapping relationship between the first region and the target, where the second region points to the key region.
A respiration detection region determination module 50 for determining a respiration detection region in said first thermal image based on said second region.
In some possible embodiments, the mapping determining module includes: a mapping information determining unit, configured to acquire scene mapping information and mapping relationship management information, where the scene mapping information represents the correspondence between scene characteristic information and scene categories, and the mapping relationship management information represents the correspondence between scene categories and mapping relationships; a target scene characteristic information determining unit, configured to determine the target scene characteristic information corresponding to the target object; a target scene category determining unit, configured to obtain a target scene category corresponding to the target scene characteristic information according to the target scene characteristic information and the scene mapping information; and a target mapping relation determining unit, configured to obtain the target mapping relation according to the target scene category and the mapping relationship management information.
In some possible embodiments, the target scene characteristic information determining unit is configured to acquire a target visible light image including the target object; performing multi-scale feature extraction on the target visible light image to obtain feature extraction results of a plurality of levels; fusing the feature extraction results according to the hierarchy increasing sequence to obtain feature fusion results of a plurality of hierarchies; and fusing the feature fusion results according to the descending order of the hierarchy to obtain the feature information of the target scene.
In some possible embodiments, the target mapping relationship includes direction mapping information, the direction mapping information represents a direction of the key region relative to the actual breathing region, and the second region extracting module is configured to determine the second region according to the direction mapping information and the first region.
In some possible embodiments, the target mapping relationship further includes distance mapping information, the distance mapping information represents a distance between the key region and the actual breathing region, and the second region extracting module is configured to determine the second region according to the direction mapping information, the distance mapping information, and the first region.
In some possible embodiments, the second region extracting module is further configured to obtain preset shape information, where the shape information includes region size information and/or region shape information; and determine the second region such that its shape conforms to the shape information, the direction of the center of the second region relative to the center of the first region conforms to the direction mapping information, and the distance of the center of the second region from the center of the first region conforms to the distance mapping information.
In some possible embodiments, the respiration detection region determining module is configured to obtain a homography matrix, where the homography matrix represents a correspondence between pixel points of the first visible light image and pixel points of the first thermal image; and determining the respiration detection area according to the homography matrix and the second area.
In some possible embodiments, the respiration detection region determining module is further configured to determine, according to the homography matrix, an associated region in the first thermal image that matches the second region; divide the associated region to obtain at least two candidate regions; and determine the candidate region with the highest temperature change degree as the respiration detection region.
In some possible embodiments, the respiration detection region determining module is further configured to determine a highest temperature and a lowest temperature of the candidate region within a preset time interval; and obtaining the temperature change degree of the candidate area according to the difference value of the highest temperature and the lowest temperature.
In some possible embodiments, the first region extracting module is configured to perform breathing region extraction on the first visible light image based on a neural network to obtain the first region. The apparatus further includes a neural network training module, configured to acquire a sample visible light image and a label corresponding to the sample visible light image, where the label points to a breathing region in the sample visible light image, and the breathing region is an oronasal region or a mask region of a sample target object in the sample visible light image; predict the breathing region of the sample visible light image based on the neural network to obtain a breathing region prediction result; and train the neural network according to the breathing region prediction result and the label.
In some possible embodiments, the neural network training module is configured to perform feature extraction on the sample visible light image to obtain a feature extraction result; predicting a respiratory region according to the feature extraction result to obtain a respiratory region prediction result; the neural network training module is further used for performing initial feature extraction on the sample visible light image to obtain a first feature map; performing composite feature extraction on the first feature map to obtain first feature information, wherein the composite feature extraction comprises channel feature extraction; filtering the first feature map based on salient features in the first feature information; extracting second characteristic information in the filtering result; and fusing the first characteristic information and the second characteristic information to obtain the characteristic extraction result.
In some possible embodiments, the apparatus further includes a temperature information determining module, configured to extract first temperature information corresponding to the breath detection area from the first thermal image, where the first temperature information represents temperature information corresponding to the key area at a first time.
In some possible embodiments, the temperature information determining module is further configured to determine temperature information corresponding to a relevant pixel point in the breath detection area; and calculating the first temperature information according to the temperature information corresponding to each related pixel point.
In some possible embodiments, the apparatus further includes a breathing frequency determining module, configured to obtain at least one second temperature information, where the second temperature information represents temperature information corresponding to the critical area at a second time different from the first time; and determining the breathing frequency of the target object according to the first temperature information and the at least one piece of second temperature information.
In some possible embodiments, the breathing frequency determining module is configured to arrange the first temperature information and the at least one second temperature information according to a time sequence to obtain a temperature sequence; carrying out noise reduction processing on the temperature sequence to obtain a target temperature sequence; and determining the breathing frequency of the target object based on the target temperature sequence.
In some possible embodiments, the breathing frequency determining module is configured to determine each key point in the target temperature sequence, where the key points are peak points or valley points; for any two adjacent key points, determining the time interval between the two adjacent key points; and determining the breathing frequency according to the time interval.
In some embodiments, functions of or modules included in the apparatus provided in the embodiments of the present disclosure may be used to execute the method described in the above method embodiments, and specific implementation thereof may refer to the description of the above method embodiments, and for brevity, will not be described again here.
The embodiment of the present disclosure also provides a computer-readable storage medium, where at least one instruction or at least one program is stored in the computer-readable storage medium, and the at least one instruction or the at least one program is loaded by a processor and executed to implement the method. The computer readable storage medium may be a non-volatile computer readable storage medium.
An embodiment of the present disclosure further provides an electronic device, including: a processor; and a memory for storing processor-executable instructions; wherein the processor is configured to perform the above method.
The electronic device may be provided as a terminal, server, or other form of device.
FIG. 12 shows a block diagram of an electronic device in accordance with an embodiment of the disclosure. For example, the electronic device 800 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, a fitness device, a personal digital assistant, or other similar terminal.
Referring to fig. 12, electronic device 800 may include one or more of the following components: processing component 802, memory 804, power component 806, multimedia component 808, audio component 810, input/output (I/O) interface 812, sensor component 814, and communication component 816.
The processing component 802 generally controls overall operation of the electronic device 800, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing components 802 may include one or more processors 820 to execute instructions to perform all or a portion of the steps of the methods described above. Further, the processing component 802 can include one or more modules that facilitate interaction between the processing component 802 and other components. For example, the processing component 802 can include a multimedia module to facilitate interaction between the multimedia component 808 and the processing component 802.
The memory 804 is configured to store various types of data to support operations at the electronic device 800. Examples of such data include instructions for any application or method operating on the electronic device 800, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 804 may be implemented by any type or combination of volatile or non-volatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
The power supply component 806 provides power to the various components of the electronic device 800. The power components 806 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the electronic device 800.
The multimedia component 808 includes a screen that provides an output interface between the electronic device 800 and a user as described above. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of the touch or slide action but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 808 includes a front facing camera and/or a rear facing camera. The front camera and/or the rear camera may receive external multimedia data when the electronic device 800 is in an operation mode, such as a shooting mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have a focal length and optical zoom capability.
The audio component 810 is configured to output and/or input audio signals. For example, the audio component 810 includes a Microphone (MIC) configured to receive external audio signals when the electronic device 800 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may further be stored in the memory 804 or transmitted via the communication component 816. In some embodiments, audio component 810 also includes a speaker for outputting audio signals.
The I/O interface 812 provides an interface between the processing component 802 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor assembly 814 includes one or more sensors for providing various aspects of state assessment for the electronic device 800. For example, the sensor assembly 814 may detect an open/closed state of the electronic device 800, the relative positioning of components, such as a display and keypad of the electronic device 800, the sensor assembly 814 may also detect a change in position of the electronic device 800 or a component of the electronic device 800, the presence or absence of user contact with the electronic device 800, orientation or acceleration/deceleration of the electronic device 800, and a change in temperature of the electronic device 800. Sensor assembly 814 may include a proximity sensor configured to detect the presence of a nearby object without any physical contact. The sensor assembly 814 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 814 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 816 is configured to facilitate wired or wireless communication between the electronic device 800 and other devices. The electronic device 800 may access a wireless network based on a communication standard, such as WiFi, 2G, 3G, 4G, 5G, or a combination thereof. In an exemplary embodiment, the communication component 816 receives a broadcast signal or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the above-mentioned communication component 816 further comprises a Near Field Communication (NFC) module to facilitate short-range communication. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the electronic device 800 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors or other electronic components for performing the above-described methods.
In an exemplary embodiment, a non-transitory computer-readable storage medium, such as the memory 804, is also provided that includes computer program instructions executable by the processor 820 of the electronic device 800 to perform the above-described methods.
Fig. 13 shows a block diagram of another electronic device in accordance with an embodiment of the disclosure. For example, the electronic device 1900 may be provided as a server. Referring to fig. 13, electronic device 1900 includes a processing component 1922 further including one or more processors and memory resources, represented by memory 1932, for storing instructions, e.g., applications, executable by processing component 1922. The application programs stored in memory 1932 may include one or more modules that each correspond to a set of instructions. Further, the processing component 1922 is configured to execute instructions to perform the above-described method.
The electronic device 1900 may also include a power component 1926 configured to perform power management of the electronic device 1900, a wired or wireless network interface 1950 configured to connect the electronic device 1900 to a network, and an input/output (I/O) interface 1958. The electronic device 1900 may operate based on an operating system stored in the memory 1932, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, or the like.
In an exemplary embodiment, a non-transitory computer readable storage medium, such as the memory 1932, is also provided that includes computer program instructions executable by the processing component 1922 of the electronic device 1900 to perform the above-described methods.
The present disclosure may be systems, methods, and/or computer program products. The computer program product may include a computer-readable storage medium having computer-readable program instructions embodied thereon for causing a processor to implement various aspects of the present disclosure.
The computer readable storage medium may be a tangible device that can hold and store the instructions for use by the instruction execution device. The computer readable storage medium may be, for example, but not limited to, an electronic memory device, a magnetic memory device, an optical memory device, an electromagnetic memory device, a semiconductor memory device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a Static Random Access Memory (SRAM), a portable compact disc read-only memory (CD-ROM), a Digital Versatile Disc (DVD), a memory stick, a floppy disk, a mechanical coding device, such as punch cards or in-groove projection structures having instructions stored thereon, and any suitable combination of the foregoing. Computer-readable storage media as used herein is not to be construed as transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission medium (e.g., optical pulses through a fiber optic cable), or electrical signals transmitted through electrical wires.
The computer-readable program instructions described herein may be downloaded from a computer-readable storage medium to a respective computing/processing device, or to an external computer or external storage device via a network, such as the internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, fiber optic transmission, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. The network adapter card or network interface in each computing/processing device receives computer-readable program instructions from the network and forwards the computer-readable program instructions for storage in a computer-readable storage medium in the respective computing/processing device.
The computer program instructions for carrying out operations of the present disclosure may be assembly instructions, instruction set architecture (ISA) instructions, machine-related instructions, microcode, firmware instructions, state-setting data, or source code or object code written in any combination of one or more programming languages, including an object-oriented programming language such as Smalltalk or C++, and conventional procedural programming languages such as the "C" programming language or similar programming languages. The computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, electronic circuitry such as a programmable logic circuit, a field-programmable gate array (FPGA), or a programmable logic array (PLA) can be personalized by utilizing state information of the computer-readable program instructions, and this electronic circuitry can execute the computer-readable program instructions to implement aspects of the present disclosure.
Various aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.
These computer-readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer-readable program instructions may also be stored in a computer-readable storage medium that can direct a computer, programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer-readable medium storing the instructions comprises an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
Having described embodiments of the present disclosure, the foregoing description is intended to be exemplary, not exhaustive, and not limited to the disclosed embodiments. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terms used herein were chosen in order to best explain the principles of the embodiments, the practical application, or technical improvements to the techniques in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.

Claims (19)

1. A method of breath detection region determination, the method comprising:
acquiring a first visible light image and a first thermal image matched with the first visible light image, wherein the first visible light image comprises a target object;
extracting a first region in the first visible light image, the first region being directed to an actual breathing region of the target subject;
acquiring a target mapping relation, wherein the target mapping relation represents the corresponding relation between the actual breathing area and a key area, and the key area represents an actual physical area of which the temperature changes periodically along with the breathing of the target object;
determining a second region in the first visible light image according to the mapping relation between the first region and the target, wherein the second region points to the key region;
a respiration detection region is determined in the first thermal image from the second region.
2. The method of claim 1, wherein the obtaining the target mapping relationship comprises:
acquiring scene mapping information and mapping relation management information, wherein the scene mapping information represents the corresponding relation between the scene characteristic information and the scene category, and the mapping relation management information represents the corresponding relation between the scene category and the mapping relation;
determining target scene characteristic information corresponding to the target object;
obtaining a target scene category corresponding to the target scene characteristic information according to the target scene characteristic information and the scene mapping information;
and obtaining the target mapping relation according to the target scene category and the mapping relation management information.
3. The method according to claim 2, wherein the determining the target scene characteristic information corresponding to the target object includes:
acquiring a target visible light image including the target object;
performing multi-scale feature extraction on the target visible light image to obtain feature extraction results of multiple levels;
fusing the feature extraction results according to the hierarchy increasing sequence to obtain feature fusion results of a plurality of hierarchies;
and fusing the feature fusion results according to the descending order of the levels to obtain the feature information of the target scene.
4. The method according to any one of claims 1 to 3, wherein the target mapping includes direction mapping information characterizing a direction of the key region relative to the actual breathing region, and wherein determining a second region in the first visible light image according to the first region and the target mapping includes:
and determining the second area according to the direction mapping information and the first area.
5. The method of claim 4, wherein the target mapping further includes distance mapping information characterizing distances of the critical regions relative to the actual breathing region, and wherein determining the second region based on the direction mapping information and the first region comprises:
and determining the second area according to the direction mapping information, the distance mapping information and the first area.
6. The method of claim 5, wherein determining the second region based on the direction mapping information, the distance mapping information, and the first region comprises:
acquiring preset appearance information, wherein the appearance information comprises area size information and/or area shape information;
determining the second area such that an outer shape of the second area conforms to the appearance information, and a direction of a center of the second area with respect to a center of the first area conforms to the direction mapping information, and a distance of the center of the second area with respect to the center of the first area conforms to the distance mapping information.
7. The method of any one of claims 1 to 6, wherein said determining a respiration detection region in said first thermal image from said second region comprises:
acquiring a homography matrix representing a corresponding relation between pixel points of the first visible light image and pixel points of the first thermal image;
and determining the respiration detection area according to the homography matrix and the second area.
8. The method of claim 7, wherein determining the breath detection region from the homography matrix and the second region comprises:
determining, in the first thermal image, an associated region that matches the second region according to the homography matrix;
dividing the associated region to obtain at least two candidate regions;
and determining the candidate area with the highest temperature change degree as the respiration detection area.
9. The method of claim 8, further comprising:
determining the highest temperature and the lowest temperature of the candidate area within a preset time interval;
and obtaining the temperature change degree of the candidate area according to the difference value of the highest temperature and the lowest temperature.
10. The method according to any one of claims 1 to 9, wherein said extracting a first region in the first visible light image comprises: based on a neural network, extracting a breathing region of the first visible light image to obtain a first region; the neural network is obtained based on the following method:
acquiring a sample visible light image and a label corresponding to the sample visible light image; the label points to a breathing region in the sample visible light image; the breathing area is an oronasal area or a mask area of a sample target object in the sample visible light image;
predicting a breathing area of the sample visible light image based on the neural network to obtain a prediction result of the breathing area;
and training the neural network according to the respiratory region prediction result and the label.
11. The method of claim 10, wherein the predicting the breathing region of the sample visible light image based on the neural network to obtain a prediction result of the breathing region comprises:
performing feature extraction on the sample visible light image to obtain a feature extraction result;
predicting a respiratory region according to the feature extraction result to obtain a respiratory region prediction result;
wherein, the performing the feature extraction on the sample visible light image to obtain a feature extraction result comprises:
performing initial feature extraction on the sample visible light image to obtain a first feature map;
performing composite feature extraction on the first feature map to obtain first feature information, wherein the composite feature extraction comprises channel feature extraction;
filtering the first feature map based on salient features in the first feature information;
extracting second feature information from the filtering result;
fusing the first feature information and the second feature information to obtain the feature extraction result.
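The feature-extraction path of claim 11 can be read as an initial convolution, a channel-attention style "composite" branch, a filtering of the first feature map by its salient responses, and a fusion of the two feature sets. The concrete operators below (squeeze-and-excite style attention, a thresholded saliency mask, additive fusion) are assumptions; the claim names the steps, not the operators.

```python
import torch
import torch.nn as nn

class CompositeFeatureExtractor(nn.Module):
    def __init__(self, channels=32):
        super().__init__()
        self.initial = nn.Conv2d(3, channels, 3, padding=1)          # initial feature extraction
        self.channel_branch = nn.Sequential(                         # channel feature extraction
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels, 1),
            nn.Sigmoid(),
        )
        self.second = nn.Conv2d(channels, channels, 3, padding=1)    # second feature information

    def forward(self, x):
        first_map = self.initial(x)                                   # first feature map
        first_info = first_map * self.channel_branch(first_map)       # first feature information
        salient = (first_info.abs() > first_info.abs().mean()).float()
        filtered = first_map * salient                                 # filter by salient features
        second_info = self.second(filtered)                            # second feature information
        return first_info + second_info                                # fused feature extraction result
```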
12. The method according to any one of claims 1 to 11, further comprising: extracting, from the first thermal image, first temperature information corresponding to the respiration detection region, wherein the first temperature information represents temperature information corresponding to the key region at a first moment.
13. The method of claim 12, wherein said extracting first temperature information corresponding to the respiration detection region from the first thermal image comprises:
determining temperature information corresponding to each relevant pixel point in the respiration detection region;
calculating the first temperature information according to the temperature information corresponding to each relevant pixel point.
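One plausible reading of claim 13 is that the region's temperature at a given moment is the mean over its relevant pixels; the mask-based interface below is an assumption, and other aggregations (e.g. the median) would also satisfy the claim.

```python
import numpy as np

def region_temperature(thermal_frame, region_mask):
    """Aggregate the per-pixel temperatures of the respiration detection region.

    thermal_frame: 2-D array of per-pixel temperatures from the first thermal image.
    region_mask:   boolean array of the same shape, True inside the detection region.
    """
    return float(thermal_frame[region_mask].mean())       # first temperature information
```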
14. The method according to claim 12 or 13, characterized in that the method further comprises:
acquiring at least one piece of second temperature information, wherein the second temperature information represents temperature information corresponding to the key region at a second moment different from the first moment;
determining a breathing frequency of the target object according to the first temperature information and the at least one second temperature information.
15. The method of claim 14, wherein determining the breathing frequency of the target object according to the first temperature information and the at least one second temperature information comprises:
arranging the first temperature information and the at least one second temperature information according to a time sequence to obtain a temperature sequence;
denoising the temperature sequence to obtain a target temperature sequence;
determining the breathing frequency of the target object based on the target temperature sequence.
16. The method of claim 15, wherein determining the breathing frequency of the target object based on the target temperature sequence comprises:
determining each key point in the target temperature sequence, wherein the key points are all peak points or all valley points;
for any two adjacent key points, determining the time interval between the two adjacent key points;
determining the breathing frequency according to the time interval.
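Claims 15 and 16 amount to smoothing the time-ordered temperatures, locating the key (peak or valley) points, and converting the spacing of adjacent key points into a rate. A minimal sketch; the median filter and the SciPy peak detector are assumptions, not claim requirements.

```python
import numpy as np
from scipy.signal import medfilt, find_peaks

def breathing_frequency(temperatures, timestamps):
    """Estimate breaths per minute from a time-ordered temperature sequence.

    temperatures: first and second temperature information, ordered by time.
    timestamps:   acquisition times in seconds, same length as temperatures.
    """
    temperature_sequence = np.asarray(temperatures, dtype=float)
    target_sequence = medfilt(temperature_sequence, kernel_size=5)    # denoising step
    key_points, _ = find_peaks(target_sequence)                       # key points: all peaks
    if len(key_points) < 2:
        return None                                                   # not enough breaths observed
    intervals = np.diff(np.asarray(timestamps, dtype=float)[key_points])
    return 60.0 / float(np.mean(intervals))                           # breaths per minute
```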
17. A breath detection region determination apparatus, comprising:
an image acquisition module configured to acquire a first visible light image and a first thermal image matched with the first visible light image, wherein the first visible light image comprises a target object;
a first region extraction module configured to extract a first region in the first visible light image, wherein the first region points to an actual breathing region of the target object;
a mapping determination module configured to acquire a target mapping relationship, wherein the target mapping relationship represents a correspondence between the actual breathing region and a key region, and the key region represents an actual physical region whose temperature changes periodically with the breathing of the target object;
a second region extraction module configured to determine a second region in the first visible light image according to the first region and the target mapping relationship, wherein the second region points to the key region;
a respiration detection region determination module configured to determine a respiration detection region in the first thermal image according to the second region.
18. A computer-readable storage medium having at least one instruction or at least one program stored thereon, wherein the at least one instruction or the at least one program is loaded and executed by a processor to implement the respiration detection region determination method according to any one of claims 1 to 16.
19. An electronic device, comprising at least one processor and a memory communicatively coupled to the at least one processor, wherein the memory stores instructions executable by the at least one processor, and the at least one processor implements the respiration detection region determination method according to any one of claims 1 to 16 by executing the instructions stored in the memory.
CN202110870587.1A 2021-07-30 2021-07-30 Respiration detection area determination method and device, storage medium and electronic equipment Withdrawn CN113591701A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202110870587.1A CN113591701A (en) 2021-07-30 2021-07-30 Respiration detection area determination method and device, storage medium and electronic equipment
PCT/CN2022/098521 WO2023005469A1 (en) 2021-07-30 2022-06-14 Method and apparatus for determining respiration detection region, storage medium, and electronic device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110870587.1A CN113591701A (en) 2021-07-30 2021-07-30 Respiration detection area determination method and device, storage medium and electronic equipment

Publications (1)

Publication Number Publication Date
CN113591701A true CN113591701A (en) 2021-11-02

Family

ID=78252457

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110870587.1A Withdrawn CN113591701A (en) 2021-07-30 2021-07-30 Respiration detection area determination method and device, storage medium and electronic equipment

Country Status (2)

Country Link
CN (1) CN113591701A (en)
WO (1) WO2023005469A1 (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114157807A (en) * 2021-11-29 2022-03-08 江苏宏智医疗科技有限公司 Image acquisition method and device and readable storage medium
WO2023005403A1 (en) * 2021-07-30 2023-02-02 上海商汤智能科技有限公司 Respiratory rate detection method and apparatus, and storage medium and electronic device
WO2023005469A1 (en) * 2021-07-30 2023-02-02 上海商汤智能科技有限公司 Method and apparatus for determining respiration detection region, storage medium, and electronic device
CN115995282A (en) * 2023-03-23 2023-04-21 山东纬横数据科技有限公司 Expiratory flow data processing system based on knowledge graph
WO2023093407A1 (en) * 2021-11-25 2023-06-01 上海商汤智能科技有限公司 Calibration method and apparatus, and electronic device and computer-readable storage medium

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2015030611A1 (en) * 2013-09-02 2015-03-05 Interag Method and apparatus for determining respiratory characteristics of an animal
CN109446981B (en) * 2018-10-25 2023-03-24 腾讯科技(深圳)有限公司 Face living body detection and identity authentication method and device
CN112057074A (en) * 2020-07-21 2020-12-11 北京迈格威科技有限公司 Respiration rate measuring method, respiration rate measuring device, electronic equipment and computer storage medium
CN111898580B (en) * 2020-08-13 2022-12-20 上海交通大学 System, method and equipment for acquiring body temperature and respiration data of people wearing masks
CN113591701A (en) * 2021-07-30 2021-11-02 深圳市商汤科技有限公司 Respiration detection area determination method and device, storage medium and electronic equipment
CN113592817A (en) * 2021-07-30 2021-11-02 深圳市商汤科技有限公司 Method and device for detecting respiration rate, storage medium and electronic equipment

Also Published As

Publication number Publication date
WO2023005469A1 (en) 2023-02-02

Similar Documents

Publication Publication Date Title
WO2023005468A1 (en) Respiratory rate measurement method and apparatus, storage medium, and electronic device
WO2023005469A1 (en) Method and apparatus for determining respiration detection region, storage medium, and electronic device
CN111414831B (en) Monitoring method and system, electronic device and storage medium
US10282597B2 (en) Image classification method and device
WO2023005402A1 (en) Respiratory rate detection method and apparatus based on thermal imaging, and electronic device
WO2023005403A1 (en) Respiratory rate detection method and apparatus, and storage medium and electronic device
US10115019B2 (en) Video categorization method and apparatus, and storage medium
CN110956061B (en) Action recognition method and device, and driver state analysis method and device
CN105590094B (en) Determine the method and device of human body quantity
EP3868293B1 (en) System and method for monitoring pathological breathing patterns
CN105357425B (en) Image capturing method and device
CN110287671B (en) Verification method and device, electronic equipment and storage medium
KR20180013208A (en) Apparatus and Method for Processing Differential Beauty Effect
US20160278664A1 (en) Facilitating dynamic and seamless breath testing using user-controlled personal computing devices
CN104408402A (en) Face identification method and apparatus
WO2021047069A1 (en) Face recognition method and electronic terminal device
CN106980840A (en) Shape of face matching process, device and storage medium
CN105227855B (en) A kind of image processing method and terminal
US9977510B1 (en) Gesture-driven introduction system
WO2020088092A1 (en) Key point position determining method and apparatus, and electronic device
CN109325479B (en) Step detection method and device
CN113887474B (en) Respiration rate detection method and device, electronic device and storage medium
CN112188091B (en) Face information identification method and device, electronic equipment and storage medium
CN111144266A (en) Facial expression recognition method and device
CN109938722B (en) Data acquisition method and device, intelligent wearable device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
REG Reference to a national code (Ref country code: HK; Ref legal event code: DE; Ref document number: 40056532; Country of ref document: HK)
WW01 Invention patent application withdrawn after publication (Application publication date: 20211102)