WO2020215961A1 - Personnel information detection method and system for indoor environment control - Google Patents


Info

Publication number
WO2020215961A1
WO2020215961A1 (PCT/CN2020/080990; CN2020080990W)
Authority
WO
WIPO (PCT)
Prior art keywords
visible light
image
registered
information
person
Prior art date
Application number
PCT/CN2020/080990
Other languages
English (en)
French (fr)
Inventor
张文利
王佳琪
郭向
杨堃
Original Assignee
北京工业大学
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 北京工业大学
Publication of WO2020215961A1 publication Critical patent/WO2020215961A1/zh

Classifications

    • G06F18/253: Pattern recognition; analysing; fusion techniques of extracted features
    • G06N3/044: Neural networks; recurrent networks, e.g. Hopfield networks
    • G06N3/045: Neural networks; combinations of networks
    • G06N3/08: Neural networks; learning methods
    • G06T5/50: Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06T7/33: Image registration using feature-based methods
    • G06V20/47: Detecting features for summarising video content
    • G06T2207/10024: Image acquisition modality: color image

Definitions

  • The invention belongs to the fields of target recognition technology and smart building technology, and specifically relates to a method and system for detecting personnel information for indoor environment control.
  • the indoor environment comfort control system is the central system that controls various equipment in the building, and it is also the core of the smart building.
  • The indoor environment comfort control system can be used in indoor environments such as shopping malls, stations, and offices to enhance the comfort of occupants so that they can work or study in a pleasant mood.
  • the indoor environment comfort control system relies on relevant sensors to collect environmental information, and comprehensively considers the operating efficiency and safety status of the equipment to control the equipment in the most efficient and energy-saving operating state, thereby adjusting the indoor environment to meet most people's environmental requirements.
  • Literature [1] shows that the temperature and humidity of the environment will affect the working status of indoor personnel.
  • A comfortable environment can not only reduce the negative emotions of indoor personnel to a certain extent but also increase work efficiency by nearly 10%. Therefore, it is of great practical significance to keep indoor environmental parameters such as temperature and humidity within a comfortable range.
  • the indoor environment control system is mainly composed of an environment perception system, an individual information detection system, and an indoor equipment control system. Among them, the environmental perception system collects various indoor environmental information and uploads it to the server.
  • the individual information detection system collects individual information such as gender and body surface temperature of indoor personnel and uploads it to the server.
  • The indoor equipment control system uses the environmental information and individual personnel information to control the operation strategy of indoor equipment. In response, researchers at home and abroad have carried out a large number of studies.
  • The environmental control systems constructed in literature [2-3] rely on sensors to collect indoor environmental information such as temperature, humidity, and air flow rate at designated indoor locations. The indoor equipment control system establishes a mathematical model according to the thermal comfort equation [4-5] and inputs the indoor environment information into the model to obtain the control strategy of the air-conditioning equipment [2,3,6], so as to adjust the indoor environment toward a more ideal level of comfort.
  • Literature [7-10] combines gender, age, and body surface temperature information as influencing factors with the traditional thermal comfort equation [4-5] to study the thermal comfort and actual feelings of people of different genders, ages, and body surface temperatures.
  • A deep neural network can judge gender from the facial region of a visible light image; but because a visible light image cannot convey temperature through its color and texture information, the person's temperature cannot be obtained from it.
  • Infrared thermal imaging can measure continuously in time and detect local temperature changes with high precision; a person's temperature can usually be detected with good accuracy from the color information in the infrared thermal image.
  • Tanda [16] and Quesada et al. [17] used infrared thermal imaging cameras to photograph the human body, obtaining infrared thermal images containing the human body and deriving the person's surface temperature from them.
  • An invention document titled Thermal Imaging Body Temperature Monitoring Device, System and Method provides a thermal imaging body temperature monitoring device and application system that can monitor human body temperature in public places with moving crowds, achieving non-contact, fast measurement without restricting the flow of inspected personnel.
  • That invention analyzes video collected by an infrared imager for movement in the monitoring area in order to locate passing people, so people in a static state cannot be detected. Because the infrared thermal image lacks the appearance and texture details of people, person detection based only on infrared thermal images cannot identify people accurately, which degrades the accuracy of the subsequent body temperature calculation. The system and method of that invention are therefore only usable in scenes with frequent personnel movement, cannot be applied to general indoor scenes, and cannot identify gender or age.
  • The technical problem to be solved by the present invention is: the age, gender, and body surface temperature of a person cannot all be obtained using an infrared thermal image alone or a visible light image alone; and because the visible light camera and the infrared camera used to collect scene images differ in field of view, resolution, and shooting angle, the pixels of the obtained visible light image and infrared thermal image do not correspond and lack a mapping relationship, which degrades the image fusion processing and reduces the accuracy of personnel information detection.
  • the technical solution adopted by the present invention is a personnel information detection system for indoor environment control.
  • The system registers visible light images and infrared thermal images that differ in resolution, field of view, and shooting angle, so that the pixels of the registered visible light image and the registered infrared thermal image correspond. It then detects the area, gender, and age information of persons in the registered visible light image, maps the person area information onto the registered infrared thermal image to accurately obtain the person areas there, and calculates each person's body surface temperature, thereby completing the detection of the gender, age, and body surface temperature of persons relevant to indoor environment control.
  • the representative diagram of the present invention is shown in FIG. 1.
  • The system includes: an image reading device module 10, an image registration module 20, a post-registration infrared thermal image reading module 30, a post-registration visible light image reading module 40, a personnel information detection module 50, a personnel area information mapping module 60, a personnel body surface temperature calculation module 70, and an information fusion module 80.
  • The connection relationship between the aforementioned modules is as follows: the image reading device module 10 outputs the infrared thermal image and the visible light image to the image registration module 20; the image registration module 20 outputs the registered infrared thermal image and the registered visible light image to the post-registration infrared thermal image reading module 30 and the post-registration visible light image reading module 40, respectively; the post-registration infrared thermal image reading module 30 outputs the registered infrared thermal image to the personnel area information mapping module 60; the post-registration visible light image reading module 40 outputs the registered visible light image to the personnel information detection module 50; the personnel information detection module 50 outputs the registered visible light image together with the head or whole-body area, gender, and age information of each person in it to the personnel area information mapping module 60; and the personnel area information mapping module 60 outputs the registered visible light image, the person area information in both registered images, the gender and age information, and the registered infrared thermal image to the personnel body surface temperature calculation module 70.
  • The information fusion module 80 fuses the head or whole-body area, gender, and age information of each person in the registered visible light image with the person's body surface temperature information in the registered infrared thermal image and visually displays the result in the visible light image or the infrared thermal image, realizing the detection of the gender, age, and body surface temperature of persons relevant to indoor environment control.
  • Image reading device module 10: composed of an infrared camera and a visible light camera. The module simultaneously captures infrared thermal images and visible light images and outputs both to the image registration module 20.
  • Image registration module 20 reads infrared thermal images and visible light images from the image reading device module 10.
  • the visual field angle of the visible light camera is larger than that of the infrared camera, and the imaging range of the visible light image is larger than the infrared thermal image. Therefore, the infrared thermal image is used as the reference image, and the visible light image is used as the image to be registered.
  • Based on the stereo vision imaging principle and the relationship between the field of view, resolution, and imaging size of the infrared camera and the visible light camera, accurate registration of the visible light image and the infrared thermal image is achieved.
  • the image size of the registered visible light image and the registered infrared thermal image is the same and the pixels correspond to each other.
  • the registered infrared thermal image and the registered visible light image are output to the registered infrared thermal image reading module 30 and the registered visible light image reading module 40 respectively.
  • Registered infrared thermal image reading module 30: used to read the infrared thermal image registered by the image registration module 20 and output it to the personnel area information mapping module 60.
  • the registered visible light image reading module 40 is used to read the visible light image registered by the image registration module 20 and output the registered visible light image to the personnel information detection module 50.
  • Personnel information detection module 50: reads the registered visible light image from the registered visible light image reading module 40, inputs it into a deep learning model such as SSD, Faster R-CNN, YOLO, or SPP-net to obtain the head or whole-body area information of each person in the image, and then inputs each detected head or whole-body region into a deep learning network model for detecting gender and age, recognizing the gender and age of each person in the registered visible light image.
  • the gender and age detection of personnel can use a variety of neural network models such as CNN convolutional neural network model, VGG convolutional neural network model or mini-Xception small fully convolutional neural network model.
  • The registered visible light image and the head or whole-body area, gender, and age information of each person in it are output to the person area information mapping module 60.
  • Personnel area information mapping module 60: reads the registered visible light image and the person head or whole-body area, gender, and age information from the personnel information detection module 50, and reads the registered infrared thermal image from the post-registration infrared thermal image reading module 30. Since the pixels of the registered infrared thermal image and the registered visible light image correspond and the two images have the same size, the person area information in the registered visible light image can be mapped directly onto the registered infrared thermal image, accurately obtaining the person area information there.
  • Person body surface temperature calculation module 70: reads from the person area information mapping module 60 the registered visible light image, the person head or whole-body area, gender, and age information in it, the person head or whole-body areas in the registered infrared thermal image, and the registered infrared thermal image itself. It reads the temperature value of each pixel in the registered infrared thermal image, computes the temperature values of all pixels within each person's head or whole-body area, and selects the maximum or the average of those pixel temperatures as the person's body surface temperature.
  • the registered visible light image, the person's head or whole body area, gender and age information in the registered visible light image, and the person's body surface temperature information in the registered infrared thermal image are output to the information fusion module 80.
  • Information fusion module 80: reads from the person body surface temperature calculation module 70 the registered visible light image, the person head or whole-body area, gender, and age information in it, and the person body surface temperature information in the registered infrared thermal image. It fuses the head or whole-body area, gender, and age information with the body surface temperature information and visually displays them in the visible light image or the infrared thermal image, realizing the detection of the gender, age, and body surface temperature of persons relevant to indoor environment control.
  • the visible light image and the infrared thermal image are merged, the area, gender and age of the person in the scene are detected based on the visible light image, and the temperature information is obtained from the infrared thermal image to realize the function of simultaneously detecting the gender, age and temperature information of all persons in the scene.
  • The person area information in the visible light image is mapped to the registered infrared thermal image to accurately obtain the person area information there, and the surface temperature of each person is then detected from the infrared thermal image.
  • The personnel information detection method proposed by the present invention can be used in a variety of environmental scenarios (such as conference rooms, shopping malls, and security checkpoints) to detect personnel location, gender, age, and temperature information, without the need to manually measure and collect personnel information in the scene, saving considerable labor cost.
  • The present invention registers visible light images and infrared thermal images that differ in resolution, field of view, and shooting angle so that the pixels of the registered visible light image and infrared thermal image correspond, thereby effectively fusing the color and texture information of the visible light image with the temperature information of the infrared thermal image. It accurately detects the gender, age, and temperature information of all persons appearing in the scene, with good real-time performance and high detection accuracy.
  • Fig. 1 is a representative diagram of a method and system for detecting personnel information for indoor environment control provided by the present invention.
  • Fig. 2 is a flowchart of a method for detecting personnel information for indoor environment control according to an embodiment of the present invention.
  • Fig. 3 is a flow chart of reading infrared thermal imaging and visible light imaging data according to an embodiment of the present invention.
  • Fig. 4 is a flow chart of the registration processing of a visible light image and an infrared thermal image according to an embodiment of the present invention.
  • Fig. 5 is a schematic diagram of stereo vision imaging provided by an embodiment of the present invention.
  • Fig. 6 is a flow chart of information detection of a person in an image according to an embodiment of the present invention.
  • FIG. 7 is a flowchart of a person's gender and age detection according to an embodiment of the present invention.
  • Fig. 8 is a flow chart of personnel area information mapping provided by an embodiment of the present invention.
  • FIG. 9 is a flow chart for calculating the temperature of a person's body surface according to an embodiment of the present invention.
  • The flowchart of the embodiment of the present invention is shown in Fig. 2 and includes the following steps:
  • Step S10: Read infrared thermal imaging data and visible light imaging data
  • Step S20: Registration processing of the visible light image and the infrared thermal image
  • Step S30: Read the registered infrared thermal image
  • Step S40: Read the registered visible light image
  • Step S50: Detection of person area information in the image
  • Step S60: Detection of the gender and age of persons
  • Step S70: Person area information mapping
  • Step S80: Calculation of the person's body surface temperature
  • Step S90: Information fusion.
  • The infrared thermal image is recorded as IMG_IFR, the visible light image as IMG_RGB, the registered infrared thermal image as IMG'_IFR, and the registered visible light image as IMG'_RGB.
  • the step S10 of reading infrared thermal imaging and visible light imaging data of the embodiment further includes the following steps, and the implementation steps are shown in FIG. 3:
  • Step S100 Read infrared thermal imaging data and visible light imaging data from the infrared camera device and the visible light camera device, respectively.
  • Step S110 Determine whether the read infrared thermal imaging data and visible light imaging data are of a video type or an image type. If the data to be detected is of the video type, step S120 is performed; if the data to be detected is of the image type, step S130 is performed.
  • Step S120 Perform framing processing on the video data, and convert the video data into image data.
  • Step S130: Output the infrared thermal image IMG_IFR and the visible light image IMG_RGB to step S20.
  • the registration processing step S20 of the visible light image and the infrared thermal image of the embodiment further includes the following steps, and the implementation steps are shown in Fig. 4:
  • The infrared camera and the visible light camera are generally mounted adjacent to each other, arranged either vertically or horizontally.
  • the infrared camera device and the visible light camera device are arranged vertically.
  • Step S200 Read the above-mentioned visible light image IMG RGB and the above-mentioned infrared thermal image IMG IFR from the step S10 of reading the infrared thermal imaging data and the visible light imaging data.
  • Step S210 Calculate the parallax d generated by the infrared camera device and the visible light camera device by using the stereo vision imaging principle to locate the position coordinates of the center point of the infrared thermal image corresponding to the visible light image.
  • the infrared camera device and the visible light camera device are closely adjacent, so the parallax d is little affected by the shooting distance.
  • the method of calculating the parallax d is based on the principle of stereo vision imaging.
  • The distance l from the camera devices to the scene is determined according to the actual application scenario; an actual observation point p is then placed at distance l so that p lies on the optical axis of the infrared camera. Point p is imaged through the optical center O_1 of the infrared camera at the center point x_1 of the infrared camera's photoreceptor imaging surface, and through the optical center O_2 of the visible light camera at a point x_2 on the visible light camera's imaging surface; the offset of x_2 from the image center gives the parallax d.
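The parallax relation in step S210 can be sketched under a pinhole model. This is a minimal illustration assuming a known baseline between the two optical centers and a known sensor pixel pitch; the function name and all numeric values are illustrative, not taken from the patent:

```python
def parallax_pixels(f_mm, baseline_mm, distance_mm, pixel_pitch_mm):
    """Parallax of a scene point at distance l between two closely
    mounted cameras: by similar triangles d = f * B / l (millimetres),
    converted to pixels via the sensor's pixel pitch."""
    d_mm = f_mm * baseline_mm / distance_mm
    return d_mm / pixel_pitch_mm
```

For a short baseline and a distant scene, d changes only slightly with distance, which matches the observation above that the parallax is little affected by the shooting distance once the cameras are closely adjacent.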
  • the visual field angle of the visible light camera is larger than that of the infrared camera, which causes the imaging range of the visible light image to be larger than that of the infrared thermal image.
  • the infrared thermal image IMG IFR is used as the reference image, and the visible light image IMG RGB is used as the image to be registered.
  • Step S220: Take the coordinates (a, b) in IMG_RGB as the center and extract an area of X*Y pixels, where X and Y are calculated from the fields of view of the infrared camera and the visible light camera as in formulas (1) and (2):

  X = m · tan(α_1/2) / tan(α_2/2)  (1)
  Y = n · tan(β_1/2) / tan(β_2/2)  (2)

  where m*n is the resolution of the visible light camera, α_1 and β_1 are the horizontal and vertical fields of view of the infrared camera, α_2 and β_2 are the horizontal and vertical fields of view of the visible light camera, and f_1 and f_2 are the focal lengths of the infrared camera and the visible light camera, respectively.
  • Step S230: Adjust the resolution of the X*Y area extracted in step S220 to the same resolution as the infrared thermal image, so that the registered visible light image IMG'_RGB accurately matches the registered infrared thermal image IMG'_IFR: after registration the two images have the same size and corresponding pixels.
  • Step S240 output the registered infrared thermal image IMG' IFR and the registered visible light image IMG' RGB to the step of reading the registered infrared thermal image S30 and the step of reading the registered visible light image S40, respectively.
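Steps S220 and S230 can be sketched in numpy, assuming the tan-ratio relation for the crop size and using nearest-neighbour resampling in place of a proper imaging library; all camera parameters below (resolutions, fields of view, crop center) are illustrative:

```python
import numpy as np
from math import tan, radians

def crop_size(m, n, a1, b1, a2, b2):
    """Pixel size X*Y of the visible-image region covering the infrared FOV.

    Assumes the pinhole relation: a region spanning angle a1 inside a
    sensor of m pixels spanning angle a2 occupies m*tan(a1/2)/tan(a2/2) px.
    """
    X = int(round(m * tan(radians(a1) / 2) / tan(radians(a2) / 2)))
    Y = int(round(n * tan(radians(b1) / 2) / tan(radians(b2) / 2)))
    return X, Y

def register(rgb, center, size_xy, ifr_shape):
    """Crop an X*Y window around `center` and resize it (nearest neighbour)
    to the infrared resolution, so pixels of both images correspond."""
    (cx, cy), (X, Y) = center, size_xy
    crop = rgb[cy - Y // 2: cy + Y // 2, cx - X // 2: cx + X // 2]
    h, w = ifr_shape
    rows = (np.arange(h) * crop.shape[0] / h).astype(int)
    cols = (np.arange(w) * crop.shape[1] / w).astype(int)
    return crop[rows][:, cols]
```

A usage sketch: for a 1920x1080 visible camera (60°x45° FOV) and a 384x288 infrared camera (45°x34° FOV), `crop_size(1920, 1080, 45, 34, 60, 45)` gives the window to cut before downsampling to 384x288.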
  • Step S30 of this embodiment reads the registered infrared thermal image IMG'_IFR from the registration processing step S20 and outputs it to the person area information mapping step S70.
  • Step S40 of this embodiment reads the registered visible light image IMG'_RGB from the registration processing step S20 and outputs it to the person area information detection step S50.
  • The person area information detection step S50 of this embodiment can detect the head or whole-body areas of persons in the registered visible light image IMG'_RGB using a deep learning model such as SSD, Faster R-CNN, YOLO, or SPP-net.
  • This embodiment uses the Faster R-CNN deep learning network model to detect the head region of the person in the registered visible light image IMG' RGB , including the following steps, the implementation steps are shown in Figure 6:
  • Step S500 Adjust the Faster R-CNN network model.
  • Use the ResNet-50 deep network instead of the VGG-16 network as the backbone network to extract deeper features.
  • Step S510 Fine-tune and train the Faster R-CNN head detector on the Hollywood Heads large human head data set to improve the accuracy of the detection result.
  • Step S520: According to the collected data set and head characteristics, adaptively adjust the sizes of the anchor prior boxes in the region proposal network (RPN) algorithm and train the RPN model of Faster R-CNN. The anchor scales are set to 128 and 256 and the aspect ratios to 1:1, 1:2, and 2:1, giving a total of 6 different anchor sizes, to suit head detection in the collected images and to reduce redundant computation.
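The anchor configuration of step S520 (2 scales x 3 aspect ratios = 6 anchors) can be generated as below; the rounding and the convention that each anchor preserves an area of roughly scale squared are assumptions about the usual RPN formulation, not details stated in the text:

```python
from math import sqrt

def make_anchors(scales=(128, 256), ratios=(1.0, 0.5, 2.0)):
    """Generate (height, width) anchor boxes from scales and h:w ratios.

    Each anchor keeps an area of roughly scale**2 while its aspect ratio
    varies, the usual RPN construction; 2 scales x 3 ratios = 6 anchors.
    """
    anchors = []
    for s in scales:
        for r in ratios:
            h = int(round(s * sqrt(r)))   # height grows with the h:w ratio
            w = int(round(s / sqrt(r)))
            anchors.append((h, w))
    return anchors
```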
  • Step S530: Read the registered visible light image IMG'_RGB from step S40 and input it into the trained Faster R-CNN head detector to obtain the person head area information in the image, {(x_n, y_n, h_n, w_n) | n = 1, 2, ..., N}, where N is the number of detected persons. The IMG'_RGB image coordinate system takes the upper-left corner of the image as its origin; (x_n, y_n) are the starting-point coordinates of the nth person's head area in that coordinate system, h_n and w_n are the height and width of the area, and (x_n, y_n, h_n, w_n) represents a rectangular area containing only the head of the nth person.
  • Step S540: Output the registered visible light image IMG'_RGB and the head area information {(x_n, y_n, h_n, w_n) | n = 1, 2, ..., N} to the person gender and age detection step S60.
  • the gender and age detection step S60 of this embodiment uses CNN convolutional neural network model, VGG convolutional neural network model, mini-Xception small full-convolutional neural network model and other neural network models to detect the gender and age of the person.
  • the implementation method specifically uses the mini-Xception small fully convolutional neural network model to detect the gender and age information of the personnel, including the following steps, and the implementation steps are shown in Figure 7:
  • Step S600: Use the IMDB-WIKI person gender and age data set to train a mini-Xception small fully convolutional neural network model.
  • Step S610 Use the Adience data set to verify the pre-trained mini-Xception small fully convolutional neural network model.
  • Step S620: Read from the person area information detection step S50 the registered visible light image IMG'_RGB and the person head area information {(x_n, y_n, h_n, w_n) | n = 1, 2, ..., N} in IMG'_RGB, and perform standard image preprocessing on each person's head area: normalize the pixel values in the head area to between 0 and 1, then uniformly scale the head area to a fixed size, 48x48 in this embodiment.
  • Step S630: Input the result of step S620 into the pre-trained person gender and age classification model, applying a global average pooling layer and the softmax activation function in the last layer to predict the gender and age of each person. The registered visible light image IMG'_RGB, the head area information {(x_n, y_n, h_n, w_n) | n = 1, 2, ..., N}, and the gender and age information are output to the person area information mapping step S70.
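The preprocessing of step S620 can be sketched in numpy; nearest-neighbour resampling stands in for whatever interpolation the original pipeline uses, which the text does not specify:

```python
import numpy as np

def preprocess_head(head_gray):
    """S620-style preprocessing: scale pixel values to [0, 1] and resize
    the head crop to 48x48 with nearest-neighbour sampling (a stand-in
    for the interpolation an imaging library would provide)."""
    x = head_gray.astype(np.float32) / 255.0
    rows = (np.arange(48) * x.shape[0] / 48).astype(int)
    cols = (np.arange(48) * x.shape[1] / 48).astype(int)
    return x[rows][:, cols]
```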
  • the personnel area information mapping step S70 of the embodiment further includes the following steps, and the implementation steps are shown in FIG. 8:
  • Step S700 After reading the above-described step S30 with registration from the infrared thermal image IMG 'visible after the IFR and reads out the registration image IMG from the step S60' RGB image IMG of visible light of RGB art above registration zone ' Information ⁇ (x n ,y n ,h n ,w n )
  • n 1,2...N ⁇ , gender and age information.
  • Step S710 Since the registered infrared thermal image IMG' IFR and the registered visible light image IMG' RGB form a corresponding relationship and the two images have the same size, the registered visible light image IMG' RGB is The area information ⁇ (x n ,y n ,h n ,w n )
  • n 1,2...N ⁇ is mapped to the above-mentioned registered infrared thermal image IMG' IFR , and the above-mentioned registered infrared thermal image IMG can be obtained 'Personnel area information in IFR ⁇ (x′ n ,y′ n ,h′ n ,w′ n )
  • n 1,2...N ⁇ , where (x′ n ,y′ n ) is the nth person head After the above-mentioned registration of the region, the coordinates of the starting point of the infrared thermal image IMG' IFR image coordinate system, h′ n and w′ n are the height and width of
  • Step S720 Output the registered infrared thermal image IMG'_IFR and its person region information {(x'_n, y'_n, h'_n, w'_n) | n = 1, 2, ..., N}, together with the registered visible light image IMG'_RGB, its person region information {(x_n, y_n, h_n, w_n) | n = 1, 2, ..., N}, and the gender and age information, to the person body surface temperature calculation step S80.
  • The body surface temperature calculation step S80 of this embodiment further includes the following steps, shown in FIG. 9:
  • Step S800 Read from the person region information mapping step S70 the registered infrared thermal image IMG'_IFR with its person region information {(x'_n, y'_n, h'_n, w'_n) | n = 1, 2, ..., N}, and the registered visible light image IMG'_RGB with its person region information {(x_n, y_n, h_n, w_n) | n = 1, 2, ..., N} and the gender and age information.
  • Step S810 If the infrared camera device can directly output the temperature value of each pixel in the infrared thermal image, perform step S820; otherwise, perform step S830.
  • Step S820 Read the pixel temperature values and calculate the person's body surface temperature. Read from the infrared camera device the temperature value of each pixel in the registered infrared thermal image IMG'_IFR, compute the temperatures of all pixels within each person's head region, and take the maximum or average pixel temperature within the head or whole-body region as that person's body surface temperature; in this embodiment the maximum temperature within the head region is used. After the body surface temperature is calculated, step S850 is executed.
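The per-region reading of step S820 can be sketched as follows (the function name and the array-based temperature map are assumptions; real thermal cameras expose per-pixel temperatures through their vendor SDK):

```python
import numpy as np

def body_surface_temperature(temp_map, box, mode="max"):
    """Body surface temperature over a person region (step S820 sketch).

    temp_map -- 2-D array of per-pixel temperatures for IMG'_IFR
    box      -- (x, y, h, w) region with origin at the image top-left
    mode     -- "max" (used in this embodiment) or "mean"
    """
    x, y, h, w = box
    region = temp_map[y:y + h, x:x + w]
    return float(region.max() if mode == "max" else region.mean())

# toy map: 25 C background with a 36.5 C head patch
t = np.full((120, 160), 25.0)
t[40:60, 70:90] = 36.5
print(body_surface_temperature(t, (70, 40, 20, 20)))                 # 36.5
print(body_surface_temperature(t, (0, 0, 120, 160), "mean") > 25.0)  # True
```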
  • Step S830 Identify the temperature scale information. Methods for recognizing the temperature scale of the registered infrared thermal image IMG'_IFR include Tesseract OCR, KNN text recognition and other text recognition algorithms; this embodiment uses the KNN text recognition algorithm to recognize the upper and lower limit values of the temperature scale of IMG'_IFR. When a new instance is input, the Euclidean distance L(x_i, y_j) between the feature x_i of the image to be recognized and every sample feature y_j in the data set is computed; the distances L(x_i, y_j) are sorted in ascending order, the k samples with the smallest distances are selected, the frequency of each category among those k samples is counted, and the category with the highest frequency is taken as the predicted classification result.
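The distance-sort-vote logic above can be sketched in a few lines (feature extraction from the binarized digit images is omitted; the toy 2-D features below are illustrative only):

```python
import numpy as np

def knn_predict(x, samples, labels, k=3):
    """k-NN vote over Euclidean distances, as in step S830."""
    d = np.sqrt(((samples - x) ** 2).sum(axis=1))  # distance to every sample, Eq. (3)
    nearest = labels[np.argsort(d)[:k]]            # labels of the k closest samples
    vals, counts = np.unique(nearest, return_counts=True)
    return int(vals[np.argmax(counts)])            # most frequent label wins

# two toy classes: "0" clustered near the origin, "1" near (10, 10)
X = np.array([[0., 0.], [1., 0.], [0., 1.], [10., 10.], [9., 10.], [10., 9.]])
y = np.array([0, 0, 0, 1, 1, 1])
print(knn_predict(np.array([0.5, 0.5]), X, y))  # 0
print(knn_predict(np.array([9.5, 9.8]), X, y))  # 1
```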
  • Step S840 Calculate the person's body surface temperature. In a grayscale infrared thermal image each gray value corresponds to a different temperature, and the relationship between them is linear: the temperature of a pixel with gray value 0 corresponds to the lower limit of the temperature scale, and that of a pixel with gray value 255 corresponds to the upper limit.
  • Here T_max is the upper limit of the temperature scale, that is, the highest temperature in the scene; T_min is the lower limit of the temperature scale, that is, the lowest temperature in the scene; G_(i,j) is the gray value of pixel (i,j); and T_(i,j) is the actual temperature of that pixel. Using the upper and lower limits of the temperature scale and temperature formula (4), the actual temperatures of all pixels within each person's head region in the registered infrared thermal image IMG'_IFR are calculated, and the maximum value in the region is taken as the target's body surface temperature.

  T_(i,j) = T_min + (T_max - T_min) × (G_(i,j)/255)   (4)
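Formula (4) is a direct linear mapping; as a sketch (the function name is illustrative):

```python
def pixel_temperature(gray, t_min, t_max):
    """Eq. (4): map an 8-bit gray value onto the [t_min, t_max] scale."""
    return t_min + (t_max - t_min) * (gray / 255.0)

print(pixel_temperature(0, 20.0, 40.0))    # 20.0  (gray 0   -> scale lower limit)
print(pixel_temperature(255, 20.0, 40.0))  # 40.0  (gray 255 -> scale upper limit)
```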
  • Step S850 Output the person region information {(x'_n, y'_n, h'_n, w'_n) | n = 1, 2, ..., N} in the registered infrared thermal image IMG'_IFR and the body surface temperatures, together with the registered visible light image IMG'_RGB, its person region information {(x_n, y_n, h_n, w_n) | n = 1, 2, ..., N}, and the gender and age information, to the information fusion step S90.
  • Information fusion step S90 of this embodiment Read from the body surface temperature calculation step S80 the person region information {(x'_n, y'_n, h'_n, w'_n) | n = 1, 2, ..., N} in the registered infrared thermal image IMG'_IFR and the body surface temperatures, together with the registered visible light image IMG'_RGB, its person region information {(x_n, y_n, h_n, w_n) | n = 1, 2, ..., N}, and the gender and age information. Fuse the person region, gender and age information from IMG'_RGB with the body surface temperature information from IMG'_IFR and display them visually in the registered visible light image IMG'_RGB, thereby realizing the detection of person gender, age and body surface temperature information relevant to indoor environment control.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Computing Systems (AREA)
  • Molecular Biology (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Multimedia (AREA)
  • Image Processing (AREA)
  • Radiation Pyrometers (AREA)
  • Measuring And Recording Apparatus For Diagnosis (AREA)

Abstract

The present invention discloses a person information detection method and system for indoor environment control. The system comprises an image reading device module, an image registration module, a registered infrared thermal image reading module, a registered visible light image reading module, a person information detection module, a person region information mapping module, a person body surface temperature calculation module and an information fusion module. By registering visible light images and infrared thermal images that differ in resolution, field of view and shooting angle, the pixels of the registered visible light image and infrared thermal image are brought into correspondence. The person region, gender and age information are then detected in the registered visible light image, and the person region information is mapped into the registered infrared thermal image to accurately obtain the person region information there, from which the person's body surface temperature is calculated, thereby completing the detection of person gender, age and body surface temperature information relevant to indoor environment control.

Description

Person information detection method and system for indoor environment control
Technical Field
The present invention belongs to the fields of target recognition technology and smart building technology, and specifically relates to a person information detection method and system for indoor environment control.
Background Art
In recent years, with the development of information technology and rising living standards, people have placed higher demands on their living environment, and smart buildings are receiving growing attention. Through modern information technologies such as the Internet of Things and cloud computing, a smart building integrates and analyses the various data within the building to provide a comfortable, energy-saving and environmentally friendly environment. The indoor environmental comfort control system, which controls the various pieces of equipment in the building, is the core of a smart building. It can be applied in indoor environments such as shopping malls, stations and offices to improve occupants' sense of comfort so that they can work or study in a pleasant mood. The system collects environmental information through sensors and, taking the operating efficiency and safety of the equipment into account, keeps the equipment in its most efficient and energy-saving state, thereby adjusting the indoor environment to satisfy most occupants' requirements. Reference [1] shows that environmental temperature and humidity affect the working state of indoor occupants; a comfortable environment not only relieves negative moods to some extent but can also raise work efficiency by nearly 10%, so keeping indoor temperature, humidity and other environmental parameters within a comfortable range is of great practical significance. At present, an indoor environment control system mainly consists of an environment sensing system, an individual information detection system and an indoor equipment control system. The environment sensing system collects indoor environmental information and uploads it to a server; the individual information detection system collects individual information of indoor occupants, such as gender and body surface temperature, and uploads it to the server; the indoor equipment control system derives the operating strategy of the indoor equipment from the environmental and individual information on the server. Scholars at home and abroad have carried out extensive research in this direction. The environment control systems constructed in references [2-3] rely on sensors placed at designated indoor positions to acquire temperature, humidity, air speed and other indoor environmental information, build mathematical models based on the thermal comfort equations [4-5], and feed the environmental information into these models to obtain control strategies for air-conditioning equipment [2,3,6], so as to adjust the indoor environment toward an ideal comfort level. References [7-10] combine gender, age and body surface temperature as influencing factors with the traditional thermal comfort equations [4-5] and study the thermal comfort and actual sensations of people of different genders, ages and body surface temperatures. The results show that, compared with the traditional calculation, thermal comfort computed with gender, age and body surface temperature better matches people's actual sensations. People with different gender, age and body surface temperature therefore have different requirements for a comfortable environment, and collecting occupants' age, gender and body surface temperature is indispensable for further accounting for individual differences. At present, however, such information is mostly collected by manual recording and measurement, which consumes considerable labour and suffers from large measurement errors; how to collect this individual information automatically and accurately is thus a key problem in building an indoor environment control system.
At present, information such as the body surface temperature, gender and age of indoor occupants is mainly obtained by applying image processing algorithms to captured visible light (RGB) images or infrared thermal images. In visible-light-based research, gender can usually be recognized with good accuracy from face detection results. For example, Ahonen et al. [11] showed that local binary pattern (LBP) features are very well suited to face image classification, so some researchers combined LBP features with support vector machine (SVM) [12-13] supervised learning for gender judgement; inspired by deep neural networks, Levi et al. [14] and Minchul et al. [15] used deep neural networks to judge gender from facial region images. However, visible light images cannot reflect temperature through colour or texture information, so a person's temperature cannot be obtained from them. Research based on infrared thermal images allows timely, continuous measurement and detects local temperature changes with high precision; a person's temperature can usually be detected well from the colour information of the infrared thermal image. For example, Tanda [16] and Quesada et al. [17] photographed human bodies with infrared thermal imagers to obtain infrared thermal images containing the body, from which the body surface temperature was obtained. Some researchers use the edge information in infrared thermal images to detect people [18-19], but because infrared thermal images lack detailed texture information, gender and age are difficult to recognize. In addition, an invention entitled "Thermal imaging body temperature monitoring device, system and method" (application number CN201010273249.1) provides a thermal imaging body temperature monitoring device and application system that can monitor the body temperature of people in public places with high pedestrian flow in a non-contact, rapid way without restricting people's movement. However, that invention locates moving people by analysing whether movement occurs in the monitored area in the video captured by the thermal imager, so people who remain stationary cannot be detected; because infrared thermal images lack the appearance and texture details of people, person detection based on infrared thermal images alone is inaccurate, which affects the detection precision of the subsequent body temperature calculation; and the system and method of that invention are limited to scenes with frequent pedestrian flow, cannot be applied to indoor and similar scenes, and cannot recognize people's gender and age.
Summary of the Invention
The technical problems to be solved by the present invention are: using either infrared thermal images or visible light images alone, or a simple fusion approach, cannot obtain a person's age, gender and body surface temperature at the same time; and because the visible light camera device and infrared camera device used to capture the scene differ in field of view, resolution and shooting position and angle, the pixels of the obtained visible light image and infrared thermal image do not correspond and no mapping relationship exists between them, which degrades the image fusion processing and lowers the precision of person information detection.
To address these technical problems, the present invention adopts a person information detection system for indoor environment control. The system registers visible light images and infrared thermal images that differ in resolution, field of view and shooting angle so that the pixels of the registered images correspond. It then detects the person region, gender and age information in the registered visible light image, maps the person region information into the registered infrared thermal image to accurately obtain the person region information there, and calculates the person's body surface temperature, thereby completing the detection of person gender, age and body surface temperature information relevant to indoor environment control.
A representative diagram of the invention is shown in FIG. 1. The system comprises: an image reading device module 10, an image registration module 20, a registered infrared thermal image reading module 30, a registered visible light image reading module 40, a person information detection module 50, a person region information mapping module 60, a person body surface temperature calculation module 70 and an information fusion module 80.
The modules are connected as follows: the image reading device module 10 outputs the infrared thermal image and the visible light image to the image registration module 20; the image registration module 20 outputs the registered infrared thermal image and the registered visible light image to the registered infrared thermal image reading module 30 and the registered visible light image reading module 40 respectively; the registered infrared thermal image reading module 30 outputs the registered infrared thermal image to the person region information mapping module 60; the registered visible light image reading module 40 outputs the registered visible light image to the person information detection module 50; the person information detection module 50 outputs the registered visible light image together with the person head or whole-body region information, gender and age information detected in it to the person region information mapping module 60; the person region information mapping module 60 outputs the registered visible light image, its person head or whole-body region information, gender and age information, the registered infrared thermal image and the person head or whole-body region information in the registered infrared thermal image to the person body surface temperature calculation module 70; the person body surface temperature calculation module 70 outputs the registered visible light image, its person head or whole-body region, gender and age information and the person body surface temperature information in the registered infrared thermal image to the information fusion module 80; the information fusion module 80 fuses the person head or whole-body region, gender and age information in the registered visible light image with the body surface temperature information in the registered infrared thermal image and displays them visually in the visible light image or infrared thermal image, realizing the detection of person gender, age and body surface temperature information relevant to indoor environment control. The functions of the modules are as follows:
Image reading device module 10: composed of an infrared camera device and a visible light camera device, the image reading device module 10 can simultaneously capture and acquire an infrared thermal image and a visible light image, and outputs the infrared thermal image and the visible light image to the image registration module 20.
Image registration module 20: reads the infrared thermal image and the visible light image from the image reading device module 10. The field of view of a visible light camera is usually larger than that of an infrared camera, so the imaging range of the visible light image is larger than that of the infrared thermal image; the infrared thermal image is therefore taken as the reference image and the visible light image as the image to be registered. Using the principle of stereo vision imaging and the relationships between the fields of view, resolutions and imaging sizes of the infrared and visible light camera devices, precise registration of the visible light image to the infrared thermal image is achieved. The registered visible light image and registered infrared thermal image have the same image size with corresponding pixels. The registered infrared thermal image and registered visible light image are output to the registered infrared thermal image reading module 30 and the registered visible light image reading module 40 respectively.
Registered infrared thermal image reading module 30: reads the infrared thermal image registered by the image registration module 20 and outputs the registered infrared thermal image to the person region information mapping module 60.
Registered visible light image reading module 40: reads the visible light image registered by the image registration module 20 and outputs the registered visible light image to the person information detection module 50.
Person information detection module 50: reads the registered visible light image from the registered visible light image reading module 40, inputs it into a deep learning model such as SSD, Faster R-CNN, YOLO or SPP-net to obtain the person head or whole-body region information in the visible light image, and then inputs the obtained person head or whole-body regions into a deep learning network model for detecting gender and age, recognizing the gender and age information of the persons in the registered visible light image. Gender and age detection may use any of several neural network models, such as a CNN convolutional neural network model, a VGG convolutional neural network model or a mini-Xception small fully convolutional neural network model. The registered visible light image and the person head or whole-body region information, gender and age information in it are output to the person region information mapping module 60.
Person region information mapping module 60: reads the registered visible light image with its person head or whole-body region, gender and age information from the person information detection module 50, and the registered infrared thermal image from the registered infrared thermal image reading module 30. Since the pixels of the registered infrared thermal image correspond to those of the registered visible light image and the two images have the same size, the person region information in the registered visible light image is directly mapped into the registered infrared thermal image, accurately obtaining the person region information in the registered infrared thermal image. The registered visible light image, its person head or whole-body region information, gender and age information, the registered infrared thermal image and its person head or whole-body region information are output to the person body surface temperature calculation module 70.
Person body surface temperature calculation module 70: reads from the person region information mapping module 60 the registered visible light image, its person head or whole-body region information, gender and age information, the registered infrared thermal image and the person head or whole-body regions in it. It reads the temperature value of every pixel in the registered infrared thermal image, computes the temperature values of all pixels within each person's head or whole-body region, and takes the maximum or average pixel temperature within the head or whole-body region as that person's body surface temperature. The registered visible light image, its person head or whole-body region, gender and age information and the person body surface temperature information in the registered infrared thermal image are output to the information fusion module 80.
Information fusion module 80: reads from the person body surface temperature calculation module 70 the registered visible light image, its person head or whole-body region, gender and age information and the person body surface temperature information in the registered infrared thermal image. It fuses the person head or whole-body region, gender and age information in the registered visible light image with the body surface temperature information in the registered infrared thermal image and displays them visually in the visible light image or infrared thermal image, realizing the detection of person gender, age and body surface temperature information relevant to indoor environment control.
Principle of the invention:
First, the visible light image and the infrared thermal image are fused: the regions, gender and age of the persons in the scene are detected from the visible light image and the temperature information is obtained from the infrared thermal image, so that the gender, age and temperature of all persons in the scene can be detected at the same time.
Second, the disparity between the infrared camera device and the visible light camera device is computed using the principle of stereo vision imaging, and the relationships between the fields of view, resolutions and imaging sizes of the two camera devices are exploited to achieve precise registration of the visible light image to the infrared thermal image.
Then, using the pixel correspondence between the registered visible light image and the registered infrared thermal image, the person region information in the visible light image is mapped into the registered infrared thermal image, accurately obtaining the person region information there, from which the persons' body surface temperatures in the infrared thermal image are detected.
Compared with the prior art, the person information detection method proposed by the present invention can detect person position, gender, age and temperature information in a variety of scenes (such as meeting rooms, shopping malls and security checkpoints), eliminating the need for manual measurement and collection of person information and saving considerable labour costs. Moreover, the invention registers visible light images and infrared thermal images that differ in resolution, field of view and shooting angle so that the pixels of the registered visible light image and infrared thermal image correspond, thereby effectively fusing the colour and texture information of the visible light image with the temperature information of the infrared thermal image, accurately detecting the gender, age and temperature information of everyone appearing in the scene, with good real-time performance and high detection accuracy.
Brief Description of the Drawings
FIG. 1 is a representative diagram of a person information detection method and system for indoor environment control provided by the present invention.
FIG. 2 is a flowchart of a person information detection method for indoor environment control provided by an embodiment of the present invention.
FIG. 3 is a flowchart of reading infrared thermal imaging and visible light imaging data provided by an embodiment of the present invention.
FIG. 4 is a flowchart of the registration of the visible light image and the infrared thermal image provided by an embodiment of the present invention.
FIG. 5 is a schematic diagram of the stereo vision imaging principle provided by an embodiment of the present invention.
FIG. 6 is a flowchart of person region information detection in an image provided by an embodiment of the present invention.
FIG. 7 is a flowchart of person gender and age detection provided by an embodiment of the present invention.
FIG. 8 is a flowchart of person region information mapping provided by an embodiment of the present invention.
FIG. 9 is a flowchart of person body surface temperature calculation provided by an embodiment of the present invention.
Detailed Description of the Embodiments
To explain the method flow of the present invention more clearly, it is further described below with reference to specific embodiments. The described embodiments merely illustrate the technical solution of the invention, and the parameter values set in the embodiments do not limit it.
The flowchart of an embodiment of the present invention is shown in FIG. 2 and includes the following steps:
Step S10: read infrared thermal imaging data and visible light imaging data;
Step S20: registration of the visible light image and the infrared thermal image;
Step S30: read the registered infrared thermal image;
Step S40: read the registered visible light image;
Step S50: person region information detection in the image;
Step S60: person gender and age detection;
Step S70: person region information mapping;
Step S80: person body surface temperature calculation;
Step S90: information fusion.
In this embodiment the infrared thermal image is denoted IMG_IFR, the visible light image IMG_RGB, the registered infrared thermal image IMG'_IFR, and the registered visible light image IMG'_RGB.
The step S10 of reading infrared thermal imaging and visible light imaging data in this embodiment further includes the following steps, shown in FIG. 3:
Step S100: read infrared thermal imaging data and visible light imaging data from the infrared camera device and the visible light camera device respectively.
Step S110: determine whether the read infrared thermal imaging data and visible light imaging data are of video type or image type. If the data to be detected are video, go to step S120; if they are images, go to step S130.
Step S120: split the video data into frames, converting the video data into image data.
Step S130: output the infrared thermal image IMG_IFR and the visible light image IMG_RGB to step S20.
The registration step S20 of this embodiment further includes the following steps, shown in FIG. 4:
The infrared camera device and the visible light camera device are generally arranged vertically or horizontally; in this embodiment they are arranged vertically.
Step S200: read the visible light image IMG_RGB and the infrared thermal image IMG_IFR from the reading step S10.
Step S210: use the principle of stereo vision imaging to compute the disparity d between the infrared camera device and the visible light camera device, which is used to locate the position in the visible light image corresponding to the centre point of the infrared thermal image. In practical applications the two cameras are mounted close together, so d is only weakly affected by the shooting distance. As shown in FIG. 5, the disparity is computed as follows: first, according to the actual application scene, determine the distance l from the camera devices to the scene; then place a physical observation point p at distance l on the optical axis of the infrared camera device, so that p is imaged through the infrared camera's optical centre O_1 at the centre point x_1 of the infrared sensor's imaging plane; the image of p through the visible light camera's optical centre O_2 on the visible light sensor's imaging plane is denoted p_2, and the distance from p_2 to the centre point x_2 of that imaging plane is the disparity d. By the similarity of triangles ΔPO_1O_2 and ΔO_2x_2p_2, d = f_2·B/l, where f_2 is the focal length of the visible light camera device and B is the distance between the optical centres O_1 and O_2. Since the field of view of the visible light camera is usually larger than that of the infrared camera, the imaging range of the visible light image is larger than that of the infrared thermal image; the infrared thermal image IMG_IFR is therefore taken as the reference image and the visible light image IMG_RGB as the image to be registered. From the disparity d, the position (a, b) in IMG_RGB corresponding to the centre of IMG_IFR is a = m/2, b = n/2 ± d/μ_h, where m*n is the resolution of the visible light camera and μ_h is the height of each photosite of the visible light camera's sensor. Similarly, when the two cameras are arranged horizontally, the centre maps to (a, b) with a = m/2 ± d/μ_w and b = n/2, where μ_w is the width of each photosite.
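The centre mapping of step S210 can be sketched as follows for the vertical configuration (the function and parameter names are illustrative; the sign of the pixel offset depends on which camera is mounted above the other):

```python
def ir_center_in_rgb(f2_mm, baseline_mm, distance_mm, m, n, photosite_h_mm):
    """Step S210 sketch: locate the IR image centre inside the visible image.

    d = f2 * B / l by similar triangles (FIG. 5); the offset is converted
    to pixels with the photosite height, giving (a, b) = (m/2, n/2 +/- d/u_h).
    """
    d_mm = f2_mm * baseline_mm / distance_mm
    return m / 2, n / 2 + d_mm / photosite_h_mm

# e.g. f2 = 4 mm, B = 30 mm, l = 3 m, a 1920x1080 sensor, 2 um photosites
a, b = ir_center_in_rgb(4.0, 30.0, 3000.0, 1920, 1080, 0.002)
print(a, b)  # 960.0 560.0
```

Note how weakly the offset depends on the distance l once the cameras are close together, as the text observes.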
Step S220: extract a region of X*Y pixels centred at coordinate (a, b) of IMG_RGB, where X*Y is computed from the fields of view of the infrared and visible light camera devices according to formulas (1) and (2), in which m*n is the resolution of the visible light camera device, α_1 and β_1 are the horizontal and vertical fields of view of the infrared camera device, α_2 and β_2 are the horizontal and vertical fields of view of the visible light camera device, and f_1 and f_2 are the focal lengths of the infrared and visible light camera devices.

X = m · tan(α_1/2) / tan(α_2/2)   (1)

Y = n · tan(β_1/2) / tan(β_2/2)   (2)
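Under the field-of-view relations of step S220, the crop size can be sketched as follows (the tangent form of equations (1)-(2) follows from the stereo geometry and should be treated as a reconstruction; the FOV values below are illustrative):

```python
import math

def crop_size(m, n, fov_ir_deg, fov_rgb_deg):
    """Eqs. (1)-(2) sketch: visible-image region covering the IR field of view."""
    a1, b1 = (math.radians(v) for v in fov_ir_deg)   # IR horizontal/vertical FOV
    a2, b2 = (math.radians(v) for v in fov_rgb_deg)  # visible horizontal/vertical FOV
    X = m * math.tan(a1 / 2) / math.tan(a2 / 2)
    Y = n * math.tan(b1 / 2) / math.tan(b2 / 2)
    return round(X), round(Y)

# identical fields of view would require no cropping at all
print(crop_size(1920, 1080, (62.0, 49.0), (62.0, 49.0)))  # (1920, 1080)
# a narrower IR field of view maps to a smaller central region
print(crop_size(1920, 1080, (45.0, 34.0), (62.0, 49.0)))
```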
Step S230: adjust the resolution of the X*Y region extracted in step S220 to the same resolution as the infrared thermal image, so that the resulting registered visible light image IMG'_RGB precisely matches the registered infrared thermal image IMG'_IFR, with identical size and corresponding pixels.
Step S240: output the registered infrared thermal image IMG'_IFR and the registered visible light image IMG'_RGB to the reading steps S30 and S40 respectively.
Step S30 of this embodiment, reading the registered infrared thermal image: read the registered infrared thermal image IMG'_IFR from the registration step S20 and output it to the person region information mapping step S70.
Step S40 of this embodiment, reading the registered visible light image: read the registered visible light image IMG'_RGB from the registration step S20 and output it to the person region information detection step S50.
The person region information detection step S50 of this embodiment can detect the person head or whole-body regions in the registered visible light image IMG'_RGB with deep learning models such as SSD, Faster R-CNN, YOLO or SPP-net. This embodiment uses a Faster R-CNN deep learning network model to detect the person head regions in IMG'_RGB, and includes the following steps, shown in FIG. 6:
Step S500: adjust the Faster R-CNN network model; on the basis of the original Faster R-CNN object detection model, replace the VGG-16 backbone network with a ResNet-50 deep network to extract deeper features.
Step S510: fine-tune the Faster R-CNN head detector on the Hollywood Heads large-scale head data set to improve detection accuracy.
Step S520: according to the collected data set and the characteristics of heads, adaptively adjust the sizes of the anchor prior boxes in the region proposal network (RPN) algorithm and train the Faster R-CNN region proposal network model. The anchor scales are set to 128 and 256 with aspect ratios 1:1, 1:2 and 2:1, giving 6 different anchor sizes, to suit head detection in the collected images and reduce redundant computation.
Step S530: read the registered visible light image IMG'_RGB from step S40 and input it into the trained Faster R-CNN head detector to obtain the person head region information {(x_n, y_n, h_n, w_n) | n = 1, 2, ..., N} in IMG'_RGB, where N is the number of detected persons, the image coordinate system of IMG'_RGB has its origin at the top-left corner of the image, (x_n, y_n) is the starting-point coordinate of the n-th person's head region in that coordinate system, h_n and w_n are the height and width of the region, and (x_n, y_n, h_n, w_n) denotes the rectangular region containing only the n-th person's head.
Step S540: output the registered visible light image IMG'_RGB and the person head region information {(x_n, y_n, h_n, w_n) | n = 1, 2, ..., N} in it to the person gender and age detection step S60.
The person gender and age detection step S60 of this embodiment may use any of several neural network models, such as a CNN convolutional neural network model, a VGG convolutional neural network model or a mini-Xception small fully convolutional neural network model; this embodiment specifically uses the mini-Xception small fully convolutional neural network model to detect person gender and age information, and includes the following steps, shown in FIG. 7:
Step S600: train the mini-Xception small fully convolutional neural network model on the IMDB-WIKI person gender and age data set.
Step S610: validate the pre-trained mini-Xception small fully convolutional neural network model on the Adience data set.
Step S620: read the registered visible light image IMG'_RGB and the person head region information {(x_n, y_n, h_n, w_n) | n = 1, 2, ..., N} in it from the person region information detection step S50, and perform standard image preprocessing on the head regions: normalize the pixel values within each head region to between 0 and 1, then rescale all head regions to a uniform size, 48x48 in this embodiment.
Step S630: input the result of step S620 into the pre-trained person gender and age classification model, which applies a global average pooling layer and a softmax activation function in its last layer to predict each person's gender and age. Output the registered visible light image IMG'_RGB, the person head region information {(x_n, y_n, h_n, w_n) | n = 1, 2, ..., N} in it, and the gender and age information to the person region information mapping step S70.
The person region information mapping step S70 of this embodiment further includes the following steps, shown in FIG. 8:
Step S700: read the registered infrared thermal image IMG'_IFR from step S30, and the registered visible light image IMG'_RGB with its person region information {(x_n, y_n, h_n, w_n) | n = 1, 2, ..., N}, gender and age information from step S60.
Step S710: since the pixels of the registered infrared thermal image IMG'_IFR correspond to those of the registered visible light image IMG'_RGB and the two images have the same size, mapping the person region information {(x_n, y_n, h_n, w_n) | n = 1, 2, ..., N} of IMG'_RGB into IMG'_IFR yields the person region information {(x'_n, y'_n, h'_n, w'_n) | n = 1, 2, ..., N} in IMG'_IFR, where (x'_n, y'_n) is the starting-point coordinate of the n-th person's head region in the IMG'_IFR image coordinate system, h'_n and w'_n are the height and width of the region, and (x'_n, y'_n, h'_n, w'_n) denotes the rectangular region containing only the n-th person's head. Because the two registered images correspond pixel to pixel, have the same size and both take the top-left corner of the image as the origin of their coordinate systems, {(x'_n, y'_n, h'_n, w'_n) | n = 1, 2, ..., N} equals {(x_n, y_n, h_n, w_n) | n = 1, 2, ..., N}.
Step S720: output the registered infrared thermal image IMG'_IFR and its person region information {(x'_n, y'_n, h'_n, w'_n) | n = 1, 2, ..., N}, together with the registered visible light image IMG'_RGB, its person region information {(x_n, y_n, h_n, w_n) | n = 1, 2, ..., N}, and the gender and age information, to the person body surface temperature calculation step S80.
The body surface temperature calculation step S80 of this embodiment further includes the following steps, shown in FIG. 9:
Step S800: read from the person region information mapping step S70 the registered infrared thermal image IMG'_IFR with its person region information {(x'_n, y'_n, h'_n, w'_n) | n = 1, 2, ..., N}, and the registered visible light image IMG'_RGB with its person region information {(x_n, y_n, h_n, w_n) | n = 1, 2, ..., N} and the gender and age information.
Step S810: if the infrared camera device can directly output the temperature value of each pixel in the infrared thermal image, perform step S820; otherwise perform step S830.
Step S820: read the pixel temperature values and calculate the person's body surface temperature. Read from the infrared camera device the temperature value of each pixel in the registered infrared thermal image IMG'_IFR, compute the temperature values of all pixels within each person's head region, and take the maximum or average pixel temperature within the head or whole-body region as that person's body surface temperature; this embodiment takes the maximum temperature within the head region. After the body surface temperature is calculated, perform step S850.
Step S830: identify the temperature scale information. Methods for recognizing the temperature scale of the registered infrared thermal image IMG'_IFR include Tesseract OCR, KNN text recognition and other text recognition algorithms; this embodiment uses the KNN text recognition algorithm to recognize the upper and lower limit values of the temperature scale of IMG'_IFR. First, the regions of interest corresponding to the upper and lower scale limits in IMG'_IFR are cropped and converted to binary grayscale with a set binarization threshold; each digit in the region of interest is segmented, and the image features of the digits 0-9 and the decimal point '.' are extracted to build a training data set. When a new instance is input, the Euclidean distance between the feature of the image to be recognized and every sample feature in the data set is computed according to formula (3), where x_i denotes the feature of the image to be recognized, y_j denotes the feature of the j-th sample, j = 1, 2, ..., Q, Q is the total number of samples in the data set, and L(x_i, y_j) is the Euclidean distance between x_i and y_j. Finally the distances L(x_i, y_j) are sorted in ascending order, the k samples with the smallest distances are selected, the frequency of each category among the k samples is counted, and the category with the highest frequency is taken as the predicted classification result.

L(x_i, y_j) = sqrt( Σ_k (x_i^(k) - y_j^(k))^2 )   (3)
Step S840: calculate the person's body surface temperature. In a grayscale infrared thermal image each gray value corresponds to a different temperature, and the relationship between them is linear: the temperature of a pixel with gray value 0 corresponds to the lower limit of the temperature scale, and that of a pixel with gray value 255 corresponds to the upper limit. Using this correspondence between the temperature scale information and the gray values, the temperature of any pixel in the infrared thermal image is computed by formula (4), where T_max is the upper limit of the temperature scale, that is, the highest temperature in the scene, T_min is the lower limit of the temperature scale, that is, the lowest temperature in the scene, G_(i,j) is the gray value of pixel (i,j), and T_(i,j) is the actual temperature of that pixel. Using the upper and lower limits of the temperature scale and formula (4), the actual temperatures of all pixels within each person's head region in the registered infrared thermal image IMG'_IFR are computed, and the maximum value in the region is taken as the target's body surface temperature.

T_(i,j) = T_min + (T_max - T_min) × (G_(i,j)/255)   (4)
Step S850: output the person region information {(x'_n, y'_n, h'_n, w'_n) | n = 1, 2, ..., N} in the registered infrared thermal image IMG'_IFR and the persons' body surface temperatures, together with the registered visible light image IMG'_RGB, its person region information {(x_n, y_n, h_n, w_n) | n = 1, 2, ..., N}, and the gender and age information, to the information fusion step S90.
Information fusion step S90 of this embodiment: read from the body surface temperature calculation step S80 the person region information {(x'_n, y'_n, h'_n, w'_n) | n = 1, 2, ..., N} in the registered infrared thermal image IMG'_IFR and the persons' body surface temperatures, together with the registered visible light image IMG'_RGB, its person region information {(x_n, y_n, h_n, w_n) | n = 1, 2, ..., N}, and the gender and age information. Fuse the person region information, gender and age information in the registered visible light image IMG'_RGB with the person body surface temperature information in the registered infrared thermal image IMG'_IFR and display them visually in IMG'_RGB, realizing the detection of person gender, age and body surface temperature information relevant to indoor environment control.
The above are merely preferred embodiments of the present invention and are not intended to limit it; any modification, equivalent replacement or improvement made within the spirit and principles of the invention shall fall within its scope of protection.
References
[1] Vimalanathan K, Babu T R. The effect of indoor office environment on the work performance, health and well-being of office workers [J]. Journal of Environmental Health Science and Engineering, 2014, 12(1): 113.
[2] Cheng Z, Shein W W, Tan Y, et al. Energy efficient thermal comfort control for cyber-physical home system [C] // IEEE International Conference on Smart Grid Communications. IEEE, 2013.
[3]Ray,Pratim P.An Internet of Things based approach to thermal comfort measurement and monitoring[C]//3rd International Conference on Advanced Computing and Communication Systems(ICACCS).2016:1-7.
[4]ANSI/ASHRAE(2017)Standard 55:2017,Thermal Environmental Conditions for Human Occupancy.ASHRAE,Atlanta.
[5] Moderate thermal environments - Determination of the PMV and PPD indices and specification of the conditions for thermal comfort [J]. ISO, 1994.
[6]Orosa J.A new modelling methodology to control HVAC systems[J].Expert Systems with Applications,2011,38(4):4505-4513.
[7]Maula H,Hongisto V,Ostman,L,et al.The effect of slightly warm temperature on work performance and comfort in open-plan offices-a laboratory study[J].Indoor Air,2016,26(2):286-297.
[8] A review of human thermal comfort in the built environment [J]. Energy and Buildings, 2015, 105: S0378778815301638.
[9] Lan, Li, et al. "Investigation of gender difference in thermal comfort for Chinese people." European Journal of Applied Physiology 102.4 (2008): 471-480.
[10]Chow,T.T.,et al."Thermal sensation of Hong Kong people with increased air speed,temperature and humidity in air-conditioned environment."Building and Environment 45.10(2010):2177-2183.
[11]T.Ahonen,A.Hadid and M.Pietikainen,"Face Description with Local Binary Patterns:Application to Face Recognition,"in IEEE Transactions on Pattern Analysis&Machine Intelligence,vol.28,no.12,pp.2037-2041,2007.
[12]Hadid A,Pietikainen M.Combining appearance and motion for face and gender recognition from videos[J].Pattern Recognition,2009,42(11):2818-2827.
[13]Shan C.Learning local binary patterns for gender classification on real-world face images[M].Elsevier Science Inc.2012.
[14]Levi G,Hassncer T.Age and gender classification using convolutional neural networks[C]//2015 IEEE Conference on Computer Vision and Pattern Recognition Workshops(CVPRW).IEEE Computer Society,2015:34-42.
[15]Shin M,Seo J H,Kwon D S.Face image-based age and gender estimation with consideration of ethnic difference[C]//IEEE International Symposium on Robot&Human Interactive Communication.IEEE,2017.
[16]Tanda,Giovanni.Skin temperature measurements by infrared thermography during running exercise[J].Experimental Thermal and Fluid Science,2016,71:103-113.
[17]Effect of perspiration on skin temperature measurements by infrared thermography and contact thermometry during aerobic cycling[J].Infrared Physics&Technology,2015,72:68-76.
[18]Biswas S K,Milanfar P.Linear support tensor machine with LSK channels:Pedestrian detection in thermal infrared images[J].IEEE transactions on image processing,2017,26(9):4229-4242.
[19] Lin C F, Chen C S, Hwang W J, et al. Novel outline features for pedestrian detection system with thermal images [J]. Pattern Recognition, 2015, 48(11): 3440-3450.

Claims (10)

  1. A person information detection system for indoor environment control, characterized in that the system comprises: an image reading device module (10), an image registration module (20), a registered infrared thermal image reading module (30), a registered visible light image reading module (40), a person information detection module (50), a person region information mapping module (60), a person body surface temperature calculation module (70) and an information fusion module (80);
    the image reading device module (10) outputs the infrared thermal image and the visible light image to the image registration module (20); the image registration module (20) outputs the registered infrared thermal image and the registered visible light image to the registered infrared thermal image reading module (30) and the registered visible light image reading module (40) respectively; the registered infrared thermal image reading module (30) outputs the registered infrared thermal image to the person region information mapping module (60); the registered visible light image reading module (40) outputs the registered visible light image to the person information detection module (50); the person information detection module (50) outputs the registered visible light image together with the person head or whole-body region information, gender and age information in it to the person region information mapping module (60); the person region information mapping module (60) outputs the registered visible light image, its person head or whole-body region information, gender and age information, the registered infrared thermal image and the person head or whole-body region information in the registered infrared thermal image to the person body surface temperature calculation module (70); the person body surface temperature calculation module (70) outputs the registered visible light image, its person head or whole-body region, gender and age information and the person body surface temperature information in the registered infrared thermal image to the information fusion module (80); the information fusion module (80) fuses the person head or whole-body region, gender and age information in the registered visible light image with the body surface temperature information in the registered infrared thermal image and displays them visually in the visible light image or infrared thermal image, realizing the detection of person gender, age and body surface temperature information relevant to indoor environment control.
  2. The person information detection system for indoor environment control according to claim 1, characterized in that the image reading device module (10) is composed of an infrared camera device and a visible light camera device, can simultaneously capture and acquire an infrared thermal image and a visible light image, and outputs the infrared thermal image and the visible light image to the image registration module (20).
  3. The person information detection system for indoor environment control according to claim 1, characterized in that the image registration module (20) reads the infrared thermal image and the visible light image from the image reading device module (10); since the field of view of the visible light camera is usually larger than that of the infrared camera, the imaging range of the visible light image is larger than that of the infrared thermal image, so the infrared thermal image is taken as the reference image and the visible light image as the image to be registered; using the principle of stereo vision imaging and the relationships between the fields of view, resolutions and imaging sizes of the infrared and visible light camera devices, precise registration of the visible light image to the infrared thermal image is achieved; the registered visible light image and registered infrared thermal image have the same image size with corresponding pixels; the registered infrared thermal image and registered visible light image are output to the registered infrared thermal image reading module (30) and the registered visible light image reading module (40) respectively.
  4. The person information detection system for indoor environment control according to claim 1, characterized in that the registered infrared thermal image reading module (30) reads the infrared thermal image registered by the image registration module (20) and outputs the registered infrared thermal image to the person region information mapping module (60).
  5. The person information detection system for indoor environment control according to claim 1, characterized in that the registered visible light image reading module (40) reads the visible light image registered by the image registration module (20) and outputs the registered visible light image to the person information detection module (50).
  6. The person information detection system for indoor environment control according to claim 1, characterized in that the person information detection module (50) reads the registered visible light image from the registered visible light image reading module (40); inputs the registered visible light image into a deep learning model to obtain the person head or whole-body region information in the visible light image, then inputs the obtained person head or whole-body regions into a deep learning network model for detecting person gender and age, recognizing the gender and age information of the persons in the registered visible light image; the gender and age detection uses a CNN convolutional neural network model, a VGG convolutional neural network model or a mini-Xception small fully convolutional neural network model; the registered visible light image and the person head or whole-body region information, gender and age information in it are output to the person region information mapping module (60).
  7. The person information detection system for indoor environment control according to claim 1, characterized in that the person region information mapping module (60) reads the registered visible light image with its person head or whole-body region, gender and age information from the person information detection module (50), and the registered infrared thermal image from the registered infrared thermal image reading module 30; since the pixels of the registered infrared thermal image correspond to those of the registered visible light image and the two images have the same size, the person region information in the registered visible light image is directly mapped into the registered infrared thermal image, accurately obtaining the person region information in the registered infrared thermal image; the registered visible light image, its person head or whole-body region information, gender and age information, the registered infrared thermal image and its person head or whole-body region information are output to the person body surface temperature calculation module (70).
  8. The person information detection system for indoor environment control according to claim 1, characterized in that the person body surface temperature calculation module (70) reads from the person region information mapping module (60) the registered visible light image, its person head or whole-body region information, gender and age information, the registered infrared thermal image and the person head or whole-body regions in it; reads the temperature value of every pixel in the registered infrared thermal image, computes the temperature values of all pixels within each person's head or whole-body region, and takes the maximum or average pixel temperature within the head or whole-body region as that person's body surface temperature; the registered visible light image, its person head or whole-body region, gender and age information and the person body surface temperature information in the registered infrared thermal image are output to the information fusion module (80).
  9. The person information detection system for indoor environment control according to claim 1, characterized in that the information fusion module (80) reads from the person body surface temperature calculation module (70) the registered visible light image, its person head or whole-body region, gender and age information and the person body surface temperature information in the registered infrared thermal image; fuses the person head or whole-body region, gender and age information in the registered visible light image with the body surface temperature information in the registered infrared thermal image and displays them visually in the visible light image or infrared thermal image, realizing the detection of person gender, age and body surface temperature information relevant to indoor environment control.
  10. A person information detection method for indoor environment control using the system of claim 1, characterized in that the method comprises the following steps:
    Step S10: read infrared thermal imaging data and visible light imaging data;
    Step S20: registration of the visible light image and the infrared thermal image;
    Step S30: read the registered infrared thermal image;
    Step S40: read the registered visible light image;
    Step S50: person region information detection in the image;
    Step S60: person gender and age detection;
    Step S70: person region information mapping;
    Step S80: person body surface temperature calculation;
    Step S90: information fusion.
PCT/CN2020/080990 2019-04-25 2020-03-25 面向室内环境控制的人员信息检测方法与*** WO2020215961A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910336741.X 2019-04-25
CN201910336741.XA CN110110629B (zh) 2019-04-25 2019-04-25 面向室内环境控制的人员信息检测方法与***

Publications (1)

Publication Number Publication Date
WO2020215961A1 true WO2020215961A1 (zh) 2020-10-29

Family

ID=67486592

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/080990 WO2020215961A1 (zh) 2019-04-25 2020-03-25 面向室内环境控制的人员信息检测方法与***

Country Status (2)

Country Link
CN (1) CN110110629B (zh)
WO (1) WO2020215961A1 (zh)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113536885A (zh) * 2021-04-02 2021-10-22 西安建筑科技大学 一种基于YOLOv3-SPP的人体行为识别方法及***
CN116563283A (zh) * 2023-07-10 2023-08-08 山东联兴能源集团有限公司 基于图像处理的蒸汽锅炉气体泄露检测方法及检测装置

Families Citing this family (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110110629B (zh) * 2019-04-25 2021-05-28 北京工业大学 面向室内环境控制的人员信息检测方法与***
US11875544B2 (en) 2020-04-30 2024-01-16 Teledyne Flir Commercial Systems, Inc. Annotation of infrared images for machine learning using beamsplitter-based camera system and methods
WO2021226554A1 (en) * 2020-05-08 2021-11-11 Flir Systems Ab Dual-band temperature detection systems and methods
CN111739069B (zh) * 2020-05-22 2024-04-26 北京百度网讯科技有限公司 图像配准方法、装置、电子设备及可读存储介质
US11715326B2 (en) * 2020-06-17 2023-08-01 Microsoft Technology Licensing, Llc Skin tone correction for body temperature estimation
CN113834571A (zh) * 2020-06-24 2021-12-24 杭州海康威视数字技术股份有限公司 一种目标测温方法、装置及测温***
CN111780877A (zh) * 2020-07-06 2020-10-16 广东智芯光电科技有限公司 一种基于摄像头测物体温度的方法和***
CN112085771B (zh) * 2020-08-06 2023-12-05 深圳市优必选科技股份有限公司 图像配准方法、装置、终端设备及计算机可读存储介质
CN113139413A (zh) * 2020-08-07 2021-07-20 西安天和防务技术股份有限公司 人员管理方法、装置及电子设备
CN112874463A (zh) * 2021-03-01 2021-06-01 长安大学 一种儿童被困高温车内的保护与报警***及方法
EP4305393A1 (en) * 2021-03-09 2024-01-17 C2RO Cloud Robotics Inc. System and method for thermal screening
CN113237556A (zh) * 2021-05-18 2021-08-10 深圳市沃特沃德信息有限公司 测温方法、装置和计算机设备
CN113792592B (zh) * 2021-08-09 2024-05-07 深圳光启空间技术有限公司 图像采集处理方法和图像采集处理装置
CN113370745A (zh) * 2021-06-23 2021-09-10 曼德电子电器有限公司 空调控制方法、装置、存储介质及电子设备
CN114627539A (zh) * 2022-02-15 2022-06-14 华侨大学 一种热舒适度预测方法、***及空调调节方法、装置
CN114564058B (zh) * 2022-02-24 2023-05-26 上海莘阳新能源科技股份有限公司 一种基于物联网的房屋室内环境监测智能调控管理***

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102881239A (zh) * 2011-07-15 2013-01-16 鼎亿数码科技(上海)有限公司 基于图像识别的广告投播***及方法
CN107560123A (zh) * 2017-10-17 2018-01-09 黄晶 一种室内健康监测及小气候控制方法及***
CN109410252A (zh) * 2018-12-20 2019-03-01 合肥英睿***技术有限公司 一种热像设备
CN110110629A (zh) * 2019-04-25 2019-08-09 北京工业大学 面向室内环境控制的人员信息检测方法与***

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9852324B2 (en) * 2015-12-08 2017-12-26 Intel Corporation Infrared image based facial analysis
CN108921100B (zh) * 2018-07-04 2020-12-01 武汉高德智感科技有限公司 一种基于可见光图像与红外图像融合的人脸识别方法及***

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102881239A (zh) * 2011-07-15 2013-01-16 鼎亿数码科技(上海)有限公司 基于图像识别的广告投播***及方法
CN107560123A (zh) * 2017-10-17 2018-01-09 黄晶 一种室内健康监测及小气候控制方法及***
CN109410252A (zh) * 2018-12-20 2019-03-01 合肥英睿***技术有限公司 一种热像设备
CN110110629A (zh) * 2019-04-25 2019-08-09 北京工业大学 面向室内环境控制的人员信息检测方法与***

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113536885A (zh) * 2021-04-02 2021-10-22 西安建筑科技大学 一种基于YOLOv3-SPP的人体行为识别方法及***
CN116563283A (zh) * 2023-07-10 2023-08-08 山东联兴能源集团有限公司 基于图像处理的蒸汽锅炉气体泄露检测方法及检测装置
CN116563283B (zh) * 2023-07-10 2023-09-08 山东联兴能源集团有限公司 基于图像处理的蒸汽锅炉气体泄露检测方法及检测装置

Also Published As

Publication number Publication date
CN110110629B (zh) 2021-05-28
CN110110629A (zh) 2019-08-09

Similar Documents

Publication Publication Date Title
WO2020215961A1 (zh) 面向室内环境控制的人员信息检测方法与***
CN109819208B (zh) 一种基于人工智能动态监控的密集人群安防监控管理方法
WO2020177498A1 (zh) 一种基于姿态估计的非侵入式人体热舒适检测方法及***
CN109598242B (zh) 一种活体检测方法
CN107256377B (zh) 用于检测视频中的对象的方法、设备和***
CN107766819B (zh) 一种视频监控***及其实时步态识别方法
CN104036236B (zh) 一种基于多参数指数加权的人脸性别识别方法
US20220180534A1 (en) Pedestrian tracking method, computing device, pedestrian tracking system and storage medium
CN113139479B (zh) 一种基于光流和rgb模态对比学习的微表情识别方法及***
Aryal et al. Skin temperature extraction using facial landmark detection and thermal imaging for comfort assessment
CN111091075B (zh) 人脸识别方法、装置、电子设备及存储介质
EP3398111B1 (en) Depth sensing based system for detecting, tracking, estimating, and identifying occupancy in real-time
CN105022999A (zh) 一种人码伴随实时采集***
CN106570471B (zh) 基于压缩跟踪算法的尺度自适应多姿态人脸跟踪方法
WO2021217764A1 (zh) 一种基于偏振成像的人脸活体检测方法
CN107862713A (zh) 针对轮询会场的摄像机偏转实时检测预警方法及模块
CN114894337B (zh) 一种用于室外人脸识别测温方法及装置
CN112633217A (zh) 基于三维眼球模型计算视线方向的人脸识别活体检测方法
CN114612933B (zh) 单目社交距离检测追踪方法
CN103020655A (zh) 一种基于单训练样本人脸识别的远程身份认证方法
CN112541403A (zh) 一种利用红外摄像头的室内人员跌倒检测方法
CN111652018B (zh) 一种人脸注册方法和认证方法
Wei et al. A low-cost and scalable personalized thermal comfort estimation system in indoor environments
Liu et al. Vision-based individual factors acquisition for thermal comfort assessment in a built environment
Hou et al. A low-cost in-situ system for continuous multi-person fever screening

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20795797

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20795797

Country of ref document: EP

Kind code of ref document: A1

122 Ep: pct application non-entry in european phase

Ref document number: 20795797

Country of ref document: EP

Kind code of ref document: A1