CN117346285B - Indoor heating and ventilation control method, system and medium

Info

Publication number
CN117346285B
Authority
CN
China
Prior art keywords
indoor
window
angle
module
video data
Prior art date
Legal status
Active
Application number
CN202311644580.3A
Other languages
Chinese (zh)
Other versions
CN117346285A (en)
Inventor
周渝锋
成孝刚
胡鑫涛
许曹鑫
刘晓龙
褚舒畅
张艳彬
Current Assignee
Nanjing University of Posts and Telecommunications
Original Assignee
Nanjing University of Posts and Telecommunications
Priority date
Filing date
Publication date
Application filed by Nanjing University of Posts and Telecommunications
Priority to CN202311644580.3A
Publication of CN117346285A
Application granted
Publication of CN117346285B

Classifications

    • F24F11/46 Improving electric energy efficiency or saving (control or safety arrangements for HVAC operation and monitoring)
    • F24F11/64 Electronic processing using pre-stored data (control characterised by the type of control or internal processing, e.g. fuzzy logic, adaptive control or estimation of values)
    • F24F11/70 Control systems characterised by their outputs; constructional details thereof
    • F24F11/88 Electrical aspects, e.g. circuits
    • G06N3/045 Combinations of networks (neural network architectures)
    • G06N3/08 Learning methods (neural networks)
    • G06V10/25 Determination of region of interest [ROI] or a volume of interest [VOI]
    • G06V10/267 Segmentation of patterns in the image field by performing operations on regions, e.g. growing, shrinking or watersheds
    • G06V10/82 Image or video recognition or understanding using neural networks
    • G06V20/41 Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • F24F2110/10 Control inputs relating to air properties: temperature
    • F24F2110/20 Control inputs relating to air properties: humidity
    • F24F2110/30 Control inputs relating to air properties: velocity
    • F24F2110/70 Control inputs relating to air properties: carbon dioxide concentration
    • F24F2120/20 Control inputs relating to users or occupants: feedback from users
    • Y02B30/70 Efficient control or regulation technologies for energy-efficient HVAC, e.g. for control of refrigerant flow, motor or heating


Abstract

The invention discloses an indoor heating and ventilation control method, system and medium. The method comprises the following steps: acquiring indoor video data and environmental parameters; inputting the video data into a trained indoor window segmentation model IWS-Net to obtain window segmentation mask images, and calculating the indoor window opening degree from the window segmentation mask images; calculating the reverse optical flow sequence and human skeleton keypoint sequence corresponding to the video data, and inputting the video data, the reverse optical flow sequence and the human skeleton keypoint sequence into a trained indoor personnel behavior recognition model IDARM to obtain the thermal comfort behaviors of indoor personnel; and determining an indoor heating and ventilation control adjustment strategy according to the obtained indoor window opening degree, the indoor personnel thermal comfort behaviors and the environmental parameters. According to the invention, the heating and ventilation system is controlled and regulated according to the indoor environment parameters, the window opening degree and the behaviors of indoor personnel, thereby realizing energy conservation and emission reduction and improving indoor thermal comfort.

Description

Indoor heating and ventilation control method, system and medium
Technical Field
The invention relates to an indoor heating and ventilation control method, system and medium, in particular to computer vision applied to heating, ventilation and air-conditioning control, and belongs to the technical field of artificial intelligence and image processing.
Background
Buildings account for roughly 40% of global energy consumption, and about half of that is used for building air conditioning, so energy consumption in the building sector must be reduced. Occupant window-opening behavior is the main factor influencing heating and ventilation control; without intelligent control, frequent window opening in a building sharply increases energy consumption and operating costs.
Disclosure of Invention
In order to overcome the defects of the prior art, the invention provides an indoor heating and ventilation control method, system and medium that control and adjust the heating and ventilation system according to indoor environment parameters, the window opening degree and the behaviors of indoor personnel, thereby realizing energy conservation and emission reduction and improving indoor thermal comfort.
In one aspect, the invention provides an indoor heating and ventilation control method, which comprises the following steps:
acquiring indoor video data and environmental parameters;
inputting the video data into a trained indoor window segmentation model IWS-Net to obtain window segmentation mask images, and calculating the indoor window opening degree from the window segmentation mask images;
calculating the reverse optical flow sequence and human skeleton keypoint sequence corresponding to the video data, and inputting the video data, the reverse optical flow sequence and the human skeleton keypoint sequence into a trained indoor personnel behavior recognition model IDARM to obtain the thermal comfort behaviors of indoor personnel;
and determining an indoor heating and ventilation control adjustment strategy according to the obtained indoor window opening degree, the indoor personnel thermal comfort behaviors and the environmental parameters.
Further, the video data is acquired by a camera. The camera may be mounted near the ceiling of the room so that it can clearly capture both the overall indoor scene and the window; the distance and angle between the camera and the monitored window, as well as the camera's pitch, yaw and roll angles, are measured.
Specifically, the video acquired by the camera is sampled at a certain frequency to obtain the corresponding video frames, which are then input into the indoor window segmentation model IWS-Net for window segmentation.
Further, the environmental parameter is obtained by an indoor thermal comfort measuring instrument. The indoor thermal comfort measuring instrument mainly acquires indoor air temperature, relative humidity, air flow rate and carbon dioxide concentration.
Optionally, a recurrent all-pairs field transform (RAFT) model is used to calculate the corresponding reverse optical flow sequence from the RGB video frame sequence acquired by the camera, while the Mediapipe library is used to detect the skeleton keypoints in each frame of the video, forming the human skeleton keypoint sequence.
Further, the reverse optical flow sequence is calculated as follows:
(1) Select two adjacent frames of the video frame sequence, denoted $I_t$ and $I_{t+1}$, and pad their width and height to multiples of 8;
(2) Exchange $I_t$ and $I_{t+1}$ and use them as the input of the pre-trained recurrent all-pairs field transform RAFT to obtain the reverse optical flow map;
(3) Reverse the direction of the values in the reverse optical flow map, and repeat from step (1) until the video ends.
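By way of illustration, this procedure maps directly onto the pre-trained RAFT model shipped with torchvision; the padding helper and the normalization convention in the sketch below are assumptions rather than details fixed by the invention:

```python
import torch
import torch.nn.functional as F
from torchvision.models.optical_flow import raft_large, Raft_Large_Weights

def pad_to_multiple_of_8(img: torch.Tensor) -> torch.Tensor:
    """Pad a frame batch (B, 3, H, W) so H and W are multiples of 8, as RAFT requires."""
    _, _, h, w = img.shape
    ph, pw = (8 - h % 8) % 8, (8 - w % 8) % 8
    return F.pad(img, (0, pw, 0, ph))

model = raft_large(weights=Raft_Large_Weights.DEFAULT).eval()

@torch.no_grad()
def reverse_flow(frame_t: torch.Tensor, frame_t1: torch.Tensor) -> torch.Tensor:
    """Reverse optical flow of two adjacent frames (floats normalized to [-1, 1])."""
    a = pad_to_multiple_of_8(frame_t)
    b = pad_to_multiple_of_8(frame_t1)
    flow = model(b, a)[-1]  # exchange the two frames; RAFT returns iterative refinements
    return -flow            # reverse the direction of the flow values
```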
Further, the human skeleton keypoints in each frame are calculated as follows:
(1) Separate the video frame sequence frame by frame;
(2) Detect the human skeleton keypoints in each frame image using Mediapipe in Python;
(3) Store the 33 human skeleton keypoints obtained in step (2), and repeat from step (1) until the video ends.
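A minimal sketch of this loop with the standard Mediapipe Pose solution, which returns 33 landmarks per frame; storing normalized (x, y) coordinates is an assumption:

```python
import cv2
import mediapipe as mp

def skeleton_keypoint_sequence(video_path: str):
    """Detect the 33 human skeleton keypoints in every frame of a video."""
    keypoints = []
    cap = cv2.VideoCapture(video_path)
    with mp.solutions.pose.Pose(static_image_mode=False) as pose:
        while True:
            ok, frame = cap.read()   # separate the sequence frame by frame
            if not ok:
                break                # video finished
            result = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            if result.pose_landmarks:
                # store the 33 keypoints as normalized (x, y) coordinates
                keypoints.append([(lm.x, lm.y) for lm in result.pose_landmarks.landmark])
    cap.release()
    return keypoints
```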
As a further technical solution, the training of the indoor window segmentation model IWS-Net further includes:
collecting images of indoor windows, preprocessing the images and forming a training set;
inputting the training set into a constructed indoor window segmentation model IWS-Net for training to obtain model parameters meeting the precision requirement;
and loading the model parameters into an indoor window segmentation model IWS-Net to obtain a trained indoor window segmentation model IWS-Net.
Further, after the images of indoor windows are acquired, they are annotated with labelme: the walls, windows and window edges in each image are segmented, and the remaining parts are assigned to the background class. All data are divided into training and test sets in proportion.
As a further technical scheme, the operation process of the indoor window segmentation model IWS-Net includes:
performing feature extraction on the input indoor window image using a backbone network formed by 5 sequentially connected feature extraction modules, and saving the output of each feature extraction module;
suppressing background features and enhancing the features of the parts to be segmented in the output of the last feature extraction module, using 3 sequentially connected attention modules;
and reconstructing the output of the last attention module together with the outputs of the feature extraction modules using a reconstruction module, to obtain mask images of walls, windows, window edges and the background.
This technical scheme adopts the IWS-Net deep learning model to segment indoor windows and has the following characteristics: (1) depthwise separable convolution and dilated convolution are adopted, which enlarges the receptive field while reducing the number of parameters, improving both computation speed and segmentation accuracy; (2) image features are extracted with multi-channel convolution using different convolution kernels, which increases feature diversity, improves the expressive capacity of the model, reduces the risk of overfitting, and accelerates model training and inference; (3) an attention mechanism is adopted to suppress background information in the image and enhance the feature information of the parts to be segmented, raising the attention IWS-Net pays to the region of interest and thereby improving segmentation accuracy and robustness.
Further, the reconstruction module includes 5 upsampling modules, where the input of the $i$-th upsampling module is composed of the output of the $(i-1)$-th upsampling module and the output $F_{6-i}$ of the corresponding feature extraction module.
As a further technical solution, the method further includes:
after obtaining mask images of walls, windows, window edges and backgrounds, extracting window images, and sequentially carrying out filtering, threshold segmentation and opening operation processing on the extracted window images;
acquiring a plurality of minimum inscribed rectangles and vertex coordinates and areas of each rectangle on the processed window image;
and calculating the opening proportion of the sliding window according to the number of the acquired rectangles, the vertex coordinates and the area of each rectangle.
Specifically, the windows in the image are extracted using the window mask, and Gaussian filtering is applied to remove noise; threshold segmentation is then used to generate a binary image; an opening operation is then performed on the binary image to remove burrs. The minimum inscribed rectangles are searched in the resulting image, the number of rectangles is counted, and the coordinates of the four vertices of each rectangle are obtained. Finally, the opening ratio of the sliding window is calculated from the number of rectangles, the four vertex coordinates and the areas.
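In OpenCV this processing chain might look as follows; the kernel sizes, the threshold value and the use of cv2.minAreaRect to obtain the minimum rectangles are assumptions made for illustration:

```python
import cv2
import numpy as np

def window_rectangles(window_img: np.ndarray):
    """Filter, threshold and open a grayscale masked-window image, then find rectangles."""
    blurred = cv2.GaussianBlur(window_img, (5, 5), 0)            # remove noise
    _, binary = cv2.threshold(blurred, 127, 255, cv2.THRESH_BINARY)
    kernel = np.ones((3, 3), np.uint8)
    opened = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)    # remove burrs
    contours, _ = cv2.findContours(opened, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    rects = [cv2.minAreaRect(c) for c in contours]               # one rotated rect per region
    boxes = [cv2.boxPoints(r) for r in rects]                    # four vertex coordinates each
    areas = [r[1][0] * r[1][1] for r in rects]                   # width * height
    return boxes, areas
```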
As a further technical solution, the method further includes:
after obtaining mask images of walls, windows, window edges and the background, using Hough line detection to obtain the straight line at the upper end of the window edge and the straight line at the lower end of the window edge;
calculating the slope of the two straight lines, and calculating the included angle of the two straight lines according to the slope;
and obtaining the actual opening angle of the out-swinging casement window using a deep neural network model DNN, combining the relative distance and angle between the camera and the window and the camera's pitch, yaw and roll angles.
As a further technical solution, the training of the indoor personnel behavior recognition model IDARM further includes:
acquiring video data of the thermal comfort behaviors of indoor personnel, and constructing a training set;
inputting the training set into a constructed indoor personnel behavior recognition model IDARM for training to obtain model parameters meeting the precision requirement;
and loading the model parameters into an indoor personnel behavior recognition model IDARM to obtain a trained indoor personnel behavior recognition model IDARM.
Further, constructing the training set includes video acquisition of the following 6 actions for a plurality of subjects: (1) sitting; (2) walking; (3) fanning with the hand; (4) shaking clothes; (5) rubbing hands; (6) hugging shoulders. Each video is 3 to 5 seconds long with a frame rate of 30 FPS; the collected videos are divided into training and test sets at a ratio of 1:1.
As a further technical solution, the operation process of the indoor personnel behavior recognition model IDARM includes:
extracting features from the video frame sequence and the reverse optical flow sequence respectively, to obtain video features and reverse optical flow features;
enhancing the video features with the reverse optical flow features and the position encoding through the encoding module Encoder, to obtain enhanced video features;
taking the enhanced video features, the reverse optical flow features and the human skeleton keypoint sequence as inputs of the decoding module Decoder, and outputting an optical flow query and a content query;
and applying a multi-head linear transformation to the optical flow query and the content query, then obtaining the confidence corresponding to each thermal-comfort-related action through a fully connected network and a Softmax function.
As a further technical solution, the method further includes:
constructing a data tensor $D = [T, H, V, C, L, A]$, where $T$ is the air temperature, $H$ the relative humidity, $V$ the air flow rate, $C$ the carbon dioxide concentration, $L$ the window opening degree, and $A$ the behavior made by indoor personnel;
acquiring an optimal regulation strategy by using an indoor temperature and humidity regulation control algorithm according to the data tensor;
and adjusting the indoor heating, ventilation and air conditioning control system according to the optimal regulation strategy.
In another aspect, the present invention provides an indoor heating ventilation control system, comprising:
the acquisition module is used for acquiring indoor video data and environmental parameters;
the window opening degree calculating module is used for inputting the video data into a trained indoor window segmentation model IWS-Net to obtain window segmentation mask images, and calculating the indoor window opening degree according to the window segmentation mask images;
the indoor personnel thermal comfort behavior recognition module is used for calculating the reverse optical flow sequence and human skeleton keypoint sequence corresponding to the video data, and inputting the video data, the reverse optical flow sequence and the human skeleton keypoint sequence into the trained indoor personnel behavior recognition model IDARM to obtain the indoor personnel thermal comfort behaviors;
and the indoor heating and ventilation control module is used for determining an indoor heating and ventilation control adjustment strategy according to the obtained indoor window opening degree, the indoor personnel thermal comfort behavior and the environmental parameters.
The invention also provides a computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, implements the steps of the indoor heating and ventilation control method.
Compared with the prior art, the invention has the beneficial effects that:
(1) The invention trains the IWS-Net model with indoor window image data, realizing the segmentation of walls, windows, window edges and background in indoor window images, and calculates the window opening degree through a sliding window opening ratio algorithm and an out-swinging casement window opening angle algorithm.
(2) The invention calculates the reverse optical flow sequence and the human skeleton keypoint sequence from the video frame sequence, integrates the three sequences, and realizes end-to-end recognition of 6 thermal-comfort-related behaviors of indoor personnel through the indoor personnel behavior recognition model IDARM.
(3) The invention controls the heating and ventilation system according to the optimal regulation strategy produced by the indoor temperature and humidity regulation control algorithm, balancing personnel thermal comfort against building energy consumption and avoiding unnecessary energy waste, thereby saving energy, improving residents' quality of production and life, improving the reliability and utilization efficiency of building equipment, and reducing equipment failure rates as well as maintenance workload and cost.
Drawings
FIG. 1 is a schematic flow chart of an indoor heating and ventilation control method in an embodiment of the invention;
FIG. 2 is a flowchart of a window opening detection algorithm according to an embodiment of the present invention;
FIG. 3 is a diagram of an overall network structure of an IWS-Net indoor window segmentation model in an embodiment of the invention;
FIG. 4 is a block diagram of a feature extraction module in an IWS-Net of an indoor window segmentation model in an embodiment of the invention;
FIG. 5 is a diagram illustrating an attention module in an IWS-Net indoor window segmentation model in accordance with an embodiment of the present invention;
FIG. 6 is a diagram of an up-sampling module in an IWS-Net of an indoor window segmentation model according to an embodiment of the present invention;
fig. 7 is an overall network structure diagram of an indoor personnel behavior recognition model IDARM in the embodiment of the present invention.
Description of the embodiments
The following describes the technical solutions in the embodiments of the present invention clearly and completely with reference to the accompanying drawings, in which some, but not all, embodiments of the invention are shown. All other embodiments obtained by those skilled in the art based on the embodiments of the invention without creative effort fall within the scope of the invention.
As shown in fig. 1, the flow of the indoor heating and ventilation control method provided by the invention is as follows: indoor thermal environment parameters are collected by an indoor thermal comfort measuring instrument, and indoor global images are collected by a camera; the window opening degree and the thermal-comfort-related behaviors made by indoor personnel are obtained by the window opening detection algorithm and the indoor personnel behavior recognition algorithm respectively; finally, the optimal regulation strategy is calculated by the indoor temperature and humidity regulation control algorithm to control the indoor heating, ventilation and air-conditioning system, balancing the thermal comfort of indoor personnel against the energy consumption of the building.
The core technology of the invention comprises a window opening detection algorithm, an indoor personnel behavior recognition algorithm and an indoor temperature and humidity adjustment control algorithm, and the working principle and the module function are specifically described below by combining with a legend.
1. Window opening detection algorithm
The window opening detection algorithm is divided into four parts: (1) the indoor window segmentation model IWS-Net; (2) the sliding window opening ratio algorithm; (3) the out-swinging casement window opening angle algorithm; (4) the window opening degree algorithm. The flow of the window opening detection algorithm is shown in fig. 2. The indoor window segmentation model IWS-Net generates mask images of the wall, window, window edge and background; the sliding window opening ratio algorithm calculates the open area and ratio of sliding windows; the out-swinging casement window opening angle algorithm calculates the actual opening angle of out-swinging casement windows; finally, the actual opening degrees of the two types of windows are calculated.
(1) Indoor window segmentation model IWS-Net
1) Principle of operation
The overall structure of the indoor window segmentation model IWS-Net is shown in fig. 3. The input image has a fixed size, and IWS-Net is composed of a Backbone, attention modules and a reconstruction module.
First, feature extraction is performed on the input RGB image by the Backbone, which is composed of 5 feature extraction modules; during feature extraction, the output of each module is saved and denoted $F_i$. The final output of the Backbone is then passed through 3 attention modules, whose role is to suppress the background information in the feature maps and enhance the feature information of the parts to be segmented. Finally, the feature tensor output by the attention modules is reconstructed by the reconstruction module, generating the mask images of wall, window, window edge and background. The reconstruction module consists of 5 upsampling modules and a depthwise separable convolution; the input of each upsampling module is the output of the previous upsampling module and the output of the corresponding feature extraction module of the Backbone, and the depthwise separable convolution reshapes the channels of the last upsampling module's output so that the number of channels equals 4.
2) Feature extraction module
The specific structure of the feature extraction module is shown in fig. 4. Assume the input feature tensor is $X \in \mathbb{R}^{C \times H \times W}$, where $H$ and $W$ are the height and width of the feature tensor. Depthwise separable convolution 1 reshapes the channels of the input feature tensor while keeping the number of channels unchanged. Denoting a depthwise separable convolution with kernel size $k$, stride $s$, padding $p$, input channels $C_{in}$ and output channels $C_{out}$ as $\mathrm{DSC}(\cdot)$, its output can be expressed as:

$$X_1 = \mathrm{DSC}_1(X), \qquad X_1 \in \mathbb{R}^{C \times H \times W}$$

$X_1$ is then passed through convolutions with different receptive fields to extract different features. Because dilated convolution changes the feature-map size, an upsampling operation is needed to make the three feature maps the same size; finally they are spliced along the channel dimension:

$$X_2 = \mathrm{Cat}\big(X_1,\; U(\mathrm{DConv}_1(X_1)),\; U(\mathrm{DConv}_2(X_1))\big), \qquad X_2 \in \mathbb{R}^{3C \times H \times W}$$

where $d$ is the ratio of the number of holes to the convolution kernel size, $U(\cdot)$ is the upsampling function, and $\mathrm{Cat}(\cdot)$ is the splicing function. The receptive field of depthwise separable convolution 1 is 3, that of dilated convolution 1 is 5, and that of dilated convolution 2 is 7.

Finally, $X_2$ is downsampled, so that the feature-map size becomes half of the original, and the number of channels is doubled by depthwise separable convolution 3:

$$X_3 = \mathrm{DSC}_3\big(\mathrm{Down}(X_2)\big), \qquad X_3 \in \mathbb{R}^{2C \times \frac{H}{2} \times \frac{W}{2}}$$

where $X_3$ is the downsampled feature tensor.
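Under the assumptions above (receptive fields 3, 5 and 7; "same" padding standing in for the upsampling step; max pooling as the downsampling operation), one feature extraction module might be sketched in PyTorch as:

```python
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """Depthwise convolution followed by a 1x1 pointwise convolution."""
    def __init__(self, c_in, c_out, k=3, s=1, p=1, d=1):
        super().__init__()
        self.dw = nn.Conv2d(c_in, c_in, k, s, p, dilation=d, groups=c_in)
        self.pw = nn.Conv2d(c_in, c_out, 1)

    def forward(self, x):
        return self.pw(self.dw(x))

class FeatureExtractionModule(nn.Module):
    def __init__(self, c):
        super().__init__()
        self.dsc1 = DepthwiseSeparableConv(c, c)            # receptive field 3
        self.dil1 = DepthwiseSeparableConv(c, c, p=2, d=2)  # receptive field 5
        self.dil2 = DepthwiseSeparableConv(c, c, p=3, d=3)  # receptive field 7
        self.down = nn.MaxPool2d(2)                         # halve the feature map size
        self.dsc3 = DepthwiseSeparableConv(3 * c, 2 * c)    # double the channel count

    def forward(self, x):
        x1 = self.dsc1(x)
        # multi-branch extraction with different receptive fields, spliced on channels
        x2 = torch.cat([x1, self.dil1(x1), self.dil2(x1)], dim=1)
        return self.dsc3(self.down(x2))
```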
3)Backbone
The structure of the Backbone is shown in fig. 3; it is composed of 5 feature extraction modules. Assume the input RGB image is $X \in \mathbb{R}^{3 \times H \times W}$, where $H$ and $W$ are the height and width of the input image. The feature maps output by the feature extraction modules are denoted $F_1, F_2, \dots, F_5$. The output of each feature extraction module is saved into $F = [F_1, F_2, F_3, F_4, F_5]$, a list of the feature tensors extracted by the respective layers of the Backbone, while the final output of the Backbone is $F_5$.
4) Attention module
The structure of the attention module is shown in fig. 5. Assume the input feature map is $X \in \mathbb{R}^{C \times H \times W}$. It is first flattened into the tensor $X_f \in \mathbb{R}^{C \times HW}$ and then subjected to a linear transformation, denoted $\mathrm{Linear}(\cdot)$:

$$X_l = \mathrm{Linear}(X_f)$$

where $X_l$ is the output feature tensor after the linear transformation.

The coordinates of the feature map $X$ are encoded: each point of the feature map has coordinates $(x, y)$, and the position encoding converts the coordinates into a one-dimensional tensor with $C$ channels, using the sinusoidal form:

$$PE(pos, 2i) = \sin\!\Big(\frac{pos}{10000^{2i/C}}\Big), \qquad PE(pos, 2i+1) = \cos\!\Big(\frac{pos}{10000^{2i/C}}\Big)$$

where $pos$ is the position index and $i$ the channel index.

The principle of the multi-head self-attention mechanism is to divide the input data into several parts, each with an independent attention head. Each attention head calculates weights over the input data, which are then combined by weighted summation to obtain the final output. The three inputs required by the attention mechanism are the query, key and value, hereinafter abbreviated $Q$, $K$, $V$.

The attention mechanism is computed as:

$$\mathrm{Attention}(Q, K, V) = \mathrm{Softmax}\!\Big(\frac{QK^{\top}}{\sqrt{d_k}}\Big) V$$

where $d_k$ is the number of channels of the $K$ feature tensor.

In this module, $V$ is $X_l$, while $Q$ and $K$ are both the sum of $X_l$ and the position encoding in the channel dimension:

$$Q = K = X_l + PE$$

The multi-head self-attention mechanism can then be expressed as:

$$\mathrm{MultiHead}(Q, K, V) = \mathrm{Cat}(\mathrm{head}_1, \dots, \mathrm{head}_h)\, W^{O}, \qquad \mathrm{head}_j = \mathrm{Attention}\big(Q W_j^{Q},\, K W_j^{K},\, V W_j^{V}\big)$$

where the matrices $W$ project the tensors into other transformation spaces; the output of the multi-head self-attention mechanism is denoted $X_a$.

Finally, $X_a$ and $X_l$ are summed and passed through a linear fully connected layer, so the final output of the entire attention module can be expressed as:

$$X_{out} = \mathrm{Linear}(X_a + X_l)$$
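A compact sketch of this module, with nn.MultiheadAttention standing in for the per-head formulas; the head count (8, so $C$ must be divisible by 8) is an assumption:

```python
import math
import torch
import torch.nn as nn

class AttentionModule(nn.Module):
    def __init__(self, c, heads=8):
        super().__init__()
        self.linear_in = nn.Linear(c, c)
        self.mhsa = nn.MultiheadAttention(c, heads, batch_first=True)
        self.linear_out = nn.Linear(c, c)

    @staticmethod
    def position_encoding(n, c):
        """Standard sinusoidal encoding over n positions with c channels."""
        pos = torch.arange(n).unsqueeze(1)
        div = torch.exp(torch.arange(0, c, 2) * (-math.log(10000.0) / c))
        pe = torch.zeros(n, c)
        pe[:, 0::2] = torch.sin(pos * div)
        pe[:, 1::2] = torch.cos(pos * div)
        return pe

    def forward(self, x):                                   # x: (B, C, H, W)
        b, c, h, w = x.shape
        xl = self.linear_in(x.flatten(2).transpose(1, 2))   # flatten, then linear: (B, HW, C)
        qk = xl + self.position_encoding(h * w, c).to(x.device)
        xa, _ = self.mhsa(qk, qk, xl)                       # V = xl; Q = K = xl + PE
        return self.linear_out(xa + xl)                     # residual sum + final linear layer
```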
5) Upsampling module
The structure of the upsampling module is shown in fig. 6. Its inputs are the output of the previous module, denoted $X_p$, and the output feature tensor of the corresponding feature extraction module in the Backbone, denoted $F$.

Assume $X_p, F \in \mathbb{R}^{C \times H \times W}$. They are spliced and then upsampled:

$$X_u = U\big(\mathrm{Cat}(X_p, F)\big), \qquad X_u \in \mathbb{R}^{2C \times 2H \times 2W}$$

Tensor splicing joins two feature tensors along the channel dimension, so the feature-map size is unchanged while the number of channels doubles; the purpose of the upsampling operation is to double the size of the input feature map.

Depthwise separable convolution 1 then reduces the number of channels by half:

$$X_1 = \mathrm{DSC}_1(X_u), \qquad X_1 \in \mathbb{R}^{C \times 2H \times 2W}$$

Channel attention is formed by global average pooling and a linear fully connected layer:

$$A_c = \mathrm{Linear}\big(\mathrm{GAP}(X_1)\big), \qquad A_c \in \mathbb{R}^{C \times 1 \times 1}$$

where $\mathrm{GAP}(\cdot)$ denotes global average pooling over the whole feature map.

Spatial attention consists of channel average pooling and a depthwise separable convolution:

$$A_s = \mathrm{DSC}\big(\mathrm{CAP}(X_1)\big), \qquad A_s \in \mathbb{R}^{1 \times 2H \times 2W}$$

where $\mathrm{CAP}(\cdot)$ denotes average pooling over the channels.

Depthwise separable convolution 2 functions identically to convolution 1. The output of the whole module after the upsampling operation is therefore:

$$X_{out} = \mathrm{DSC}_2\big(X_1 \otimes A_c \otimes A_s\big), \qquad X_{out} \in \mathbb{R}^{C \times 2H \times 2W}$$

where $\otimes$ denotes broadcast element-wise multiplication.
6) Reconstruction module
The specific structure of the reconstruction module is shown in fig. 3; it is composed of 5 upsampling modules and 1 depthwise separable convolution. The input of each upsampling module is the output of the previous upsampling module and the output of the corresponding feature extraction module of the Backbone; the depthwise separable convolution reshapes the channels of the last upsampling module's output so that the number of channels equals 4. Each channel corresponds to one of the wall, window, window edge and background classes in the image.
(2) Sliding window opening ratio algorithm
The indoor window image is input to the IWS-Net model to generate mask images of the window and the window edge. The window in the image is extracted using the window mask, Gaussian filtering is applied to remove noise, and threshold segmentation is used to generate a binary image. An opening operation is performed on the binary image to remove burrs.
The minimum inscribed rectangles are searched in the denoised image, the number of rectangles is counted, and the four vertex coordinates of each rectangle, $(x_{tl}, y_{tl})$, $(x_{tr}, y_{tr})$, $(x_{bl}, y_{bl})$ and $(x_{br}, y_{br})$, representing the upper-left, upper-right, lower-left and lower-right corners respectively, are obtained. The number of rectangles falls into three cases, discussed below:

1) The number of rectangles is 2. For each rectangle $i$, $i \in \{1, 2\}$, calculate $h_i^{t}$ and $h_i^{b}$, the height values of the midpoints of the rectangle's top and bottom line segments. If $|h_1^{t} - h_2^{t}| < \varepsilon$ or $|h_1^{b} - h_2^{b}| < \varepsilon$, the window is judged to be closed; otherwise, the window is judged to be fully open, with an opening ratio of 100%. Here $\varepsilon$ is a manually set threshold.

2) The number of rectangles is 3: the window is judged to be fully open, with an opening ratio of 100%.

3) The number of rectangles is 4. Calculate $h_l$ and $h_r$, the heights of the midpoints of the top line segments of the leftmost and rightmost rectangles. If $h_l > h_r$, the window opening is judged to be on the left, with an opening ratio of $S_l / \sum_i S_i$; otherwise, the opening is judged to be on the right, with an opening ratio of $S_r / \sum_i S_i$. Here $S_l$ and $S_r$ are the areas of the leftmost and rightmost rectangles, and $S_i$ is the area of the $i$-th rectangle, $i \in \{1, \dots, 4\}$.
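The case analysis can be sketched as follows; the closed-window condition, the area ratios and the vertex ordering (tl, tr, bl, br) are reconstructed assumptions, since the exact thresholds are not recoverable from the text:

```python
def sliding_window_ratio(boxes, areas, eps=5.0):
    """Opening ratio of a sliding window from its detected rectangles.

    boxes: list of 4-vertex arrays ordered tl, tr, bl, br; areas: matching areas.
    """
    def top_mid_height(box):
        return (box[0][1] + box[1][1]) / 2.0     # midpoint height of the top segment

    def bottom_mid_height(box):
        return (box[2][1] + box[3][1]) / 2.0     # midpoint height of the bottom segment

    n = len(boxes)
    if n == 2:
        closed = (abs(top_mid_height(boxes[0]) - top_mid_height(boxes[1])) < eps or
                  abs(bottom_mid_height(boxes[0]) - bottom_mid_height(boxes[1])) < eps)
        return 0.0 if closed else 1.0
    if n == 3:
        return 1.0                               # window fully open
    if n == 4:
        left = min(range(4), key=lambda i: boxes[i][0][0])   # leftmost rectangle
        right = max(range(4), key=lambda i: boxes[i][0][0])  # rightmost rectangle
        total = sum(areas)
        if top_mid_height(boxes[left]) > top_mid_height(boxes[right]):
            return areas[left] / total           # opening on the left
        return areas[right] / total              # opening on the right
    raise ValueError("unexpected number of rectangles")
```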
(3) Out-swinging casement window opening angle algorithm
The indoor window image is input to the IWS-Net model to generate mask images of the window and the window edge. The mask images of the window and the window edge are extracted separately, an image opening operation is used to reduce noise and burrs, and threshold segmentation is then performed. Hough line detection is applied to the two images to obtain the straight line at the upper end of the window edge and the oblique line at the lower end of the window, yielding the slopes $k_1$, $k_2$ and intercepts $b_1$, $b_2$ of the two lines.

The included angle of the two lines is calculated from their slopes:

$$\theta = \arctan\left|\frac{k_1 - k_2}{1 + k_1 k_2}\right|$$

A DNN model is built, with the input tensor length set to 5, the number of hidden layers set to 3, 10 neurons per hidden layer, and 1 output-layer neuron.

The window opening angle $\theta$ in the image, the relative distance $d$ between the camera and the window, the included angle $\alpha$ between the camera and the front of the window, and the camera's pitch angle $\beta$, yaw angle $\gamma$ and roll angle $\delta$ are spliced into an input tensor $T_{in}$.

The DNN maps $T_{in}$ to the actual window opening angle $\theta_{real}$.
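The slope-to-angle computation and the small DNN regressor might be sketched as follows; the ordering of features in the input tensor is an assumption:

```python
import math
import torch.nn as nn

def line_angle(k1: float, k2: float) -> float:
    """Included angle (degrees) of two lines from their slopes (k1 * k2 != -1 assumed)."""
    return math.degrees(math.atan(abs((k1 - k2) / (1 + k1 * k2))))

# DNN: input tensor length 5, 3 hidden layers of 10 neurons, 1 output neuron
angle_dnn = nn.Sequential(
    nn.Linear(5, 10), nn.ReLU(),
    nn.Linear(10, 10), nn.ReLU(),
    nn.Linear(10, 10), nn.ReLU(),
    nn.Linear(10, 1),   # actual window opening angle
)
```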
(4) Window opening degree algorithm
The sliding window opening ratio calculated in (2) directly represents the degree to which the window is open, whereas the actual opening angle of the out-swinging casement window calculated in (3) does not; the actual opening angle is therefore divided by the maximum opening angle to obtain the opening degree of the out-swinging casement window:

$$L = \frac{\theta_{real}}{\theta_{max}}$$

where $L$ is the opening degree of the window, with value range $[0, 1]$.
2. Indoor personnel behavior recognition algorithm
The indoor personnel behavior algorithm calculates the reverse optical flow sequence and the human skeleton keypoint sequence from the input video frame sequence and takes them, together with the video frames, as the input of the indoor personnel behavior recognition model IDARM for end-to-end indoor personnel behavior recognition. The method mainly identifies the 6 most common thermal-comfort-related actions of indoor personnel: (1) sitting; (2) walking; (3) fanning with the hand; (4) shaking clothes; (5) rubbing hands; (6) hugging shoulders. Actions (1) and (2) represent thermally neutral behavior and the remaining 4 represent thermally uncomfortable behavior; more specifically, (3) and (4) represent hot-feeling behavior, while (5) and (6) represent cold-feeling behavior.
Next, the generation of the reverse optical flow sequence and the human skeleton keypoint sequence, and the indoor personnel behavior recognition model IDARM, are described in detail.
(1) Reverse optical flow sequence and human skeleton keypoint sequence
Assume the input video frame sequence is $V = \{I_1, I_2, \dots, I_T\}$, containing $T$ frames in total. To obtain the human skeleton keypoint sequence, the Mediapipe library in Python is used to detect the human skeleton keypoints in each frame; 33 skeleton keypoints can be detected in each frame image, with the coordinates of the $j$-th point denoted $(x_j, y_j)$, $j \in \{1, \dots, 33\}$.

Thus, the human skeleton keypoint sequence can be expressed as $S = \{P_1, P_2, \dots, P_T\}$, where $P_t$ represents the coordinates of all skeleton keypoints in the $t$-th frame.

To obtain the reverse optical flow sequence, a pre-trained RAFT model is used. For two adjacent frames $I_t$ and $I_{t+1}$ of the video frame sequence, the normal optical flow is computed as:

$$f_{t \to t+1} = \mathrm{RAFT}(I_t, I_{t+1})$$

To obtain the reverse optical flow, the two input frames are exchanged, and the velocity direction of the optical flow is reversed:

$$f_t^{rev} = -\mathrm{RAFT}(I_{t+1}, I_t)$$

The reverse optical flow sequence can therefore be expressed as $F^{rev} = \{f_1^{rev}, f_2^{rev}, \dots, f_{T-1}^{rev}\}$.
(2) Indoor personnel behavior recognition model IDARM
To make the reverse optical flow sequence, the video frame sequence and the skeleton keypoint sequence the same length, the first video frame and the skeleton keypoint coordinates detected in it are removed. The overall network structure of the indoor personnel behavior recognition model IDARM is shown in fig. 7.

First, features are extracted from the video frame sequence and the reverse optical flow sequence by two Backbones, yielding the video features $E_v$ and the reverse optical flow features $E_f$. Human behavior is then recognized using the Encoder and Decoder modules.

The Encoder module enhances the video features through a self-attention module using $E_v$, $E_f$ and the position encoding. The result is then normalized together with the added, unenhanced $E_v$ (a residual connection), and finally an FFN performs dimension mapping. $N$ Encoder modules are stacked, where $N$ is 6. The $i$-th Encoder module can be expressed as:

$$E_v^{(i)} = \mathrm{FFN}\Big(\mathrm{Norm}\big(\mathrm{SA}\big(E_v^{(i-1)} + PE,\, E_f\big) + E_v^{(i-1)}\big)\Big)$$

where $PE$ is the position encoding, $\mathrm{SA}(\cdot)$ the self-attention module, $\mathrm{Norm}(\cdot)$ the normalization function, and $\mathrm{FFN}(\cdot)$ a fully connected layer. The final output of the $N$ stacked Encoder layers is denoted $\hat{E}_v$.
The Decoder module further refines the general query tensor into an optical flow query $Q_f$ and a content query $Q_c$. The skeleton keypoint sequence provides the corresponding coordinates, which are first coordinate-encoded (the encoding principle is identical to that of the position encoding) to obtain $PE_s$.

The optical flow query first passes through the self-attention module, and the deformable attention module is then used to update it; the update process can be expressed as:

$$Q_f' = \mathrm{DA}\big(\mathrm{SA}(Q_f),\, \hat{E}_v,\, PE_s\big)$$

where $\mathrm{DA}(\cdot)$ denotes the deformable attention module.

The sum of the updated optical flow query and the content query is then passed through the self-attention module to obtain a mixed query, and the deformable attention module finally updates the content query:

$$Q_c' = \mathrm{DA}\big(\mathrm{SA}(Q_f' + Q_c),\, \hat{E}_v\big)$$

As with the Encoder, $N$ Decoder modules are stacked, where $N$ is set to 6 in the invention. The final outputs of the Decoder's optical flow query and content query are denoted $\hat{Q}_f$ and $\hat{Q}_c$.
Finally, a multi-head linear transformation fuses $\hat{Q}_f$ and $\hat{Q}_c$ to calculate the confidence scores of the 6 behaviors:

$$s = \mathrm{MultiHeadLinear}\big(\hat{Q}_f,\, \hat{Q}_c\big)$$

where $s_t$ represents the confidence scores of the behaviors in the $t$-th frame. To determine the behavior represented by the input video sequence, $s$ is added along the first (time) dimension and mapped to the interval $(0, 1)$, which is done with a Softmax function:

$$\hat{s} = \mathrm{Softmax}\Big(\sum_{t} s_t\Big)$$

where $\hat{s}$ is the confidence score of the final output. The behavior with the maximum confidence score, $\arg\max \hat{s}$, is selected as the model output.
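The sequence-level decision then reduces to a sum along the time dimension followed by Softmax, e.g.:

```python
import torch

scores = torch.randn(64, 6)                  # per-frame confidence scores for 6 behaviors
probs = torch.softmax(scores.sum(dim=0), 0)  # add along the time dimension, map to (0, 1)
behavior = int(probs.argmax())               # behavior with the maximum confidence score
```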
3. Indoor temperature and humidity regulation control algorithm
Assume that $T$ is the air temperature, $H$ the relative humidity, $V$ the air flow rate, $C$ the carbon dioxide concentration, $L$ the window opening degree, and $A$ the behavior made by indoor personnel. The behaviors $A$ are divided into three classes: behaviors (1) and (2) form the first class, with an output score of 0; behaviors (3) and (4) form the second class, with an output score of -1; behaviors (5) and (6) form the third class, with an output score of 1. The output score is denoted $a$.

When personnel are active in the room, the indoor temperature is first kept at 25 °C, the humidity at 60%, and the air flow rate at 0.25 m/s. Temperature, humidity and air flow rate are regulated around these nominal values and adjusted according to the influencing factors.

First, the window opening degree is divided into three categories: (1) $L = 0$ is considered the window-closed state; (2) a small opening degree, for which it is considered necessary to increase the indoor air flow rate without changing the indoor temperature; (3) a large opening degree, for which the indoor temperature and humidity are considered to adjust to the same values as outdoors.

As a special condition, when the window opening degree falls into the third category, the temperature, humidity and air-conditioning systems are shut down.
The regulation strategy for the indoor temperature is:

$$T_{set} = 25 + a \cdot \Delta T \cdot (1 + L)$$

where $\Delta T$, the rated value of temperature regulation (i.e., the change amount of one indoor temperature adjustment), is set to 2 in the invention, and $T_{set}$, the strategy output, is the value to which the current room temperature should be adjusted.

The significance of this strategy is that when the window is closed, the temperature is regulated according to the category of the personnel behavior alone. When the window is open, regulation must account for both the personnel behavior and the window opening degree: the larger the opening degree, the larger the adjustment needed to maintain the indoor temperature.
The regulation strategy for the indoor air flow rate is:

$$V_{set} = 0.25 + \Delta V \cdot \mathbb{1}(C > C_0) \cdot (1 - L)$$

where $\Delta V$, the rated value of air flow rate regulation (the change amount of one indoor air flow rate adjustment), is set to 0.005 in the invention; $V_{set}$, the strategy output, is the value to which the current indoor air flow rate should be adjusted; $\mathbb{1}(\cdot)$ is a logical function that outputs 1 when the content in brackets is true and 0 otherwise; and $C_0$ denotes the carbon dioxide concentration threshold.

The significance of this strategy is: when the window is closed, the air flow rate is increased when the carbon dioxide concentration is too high; when the window is open, the larger the window opening degree, the greater the air flow the window itself brings, so the change amount of the air flow rate can be reduced.
The regulation strategy for the indoor humidity is:

$$H_{set} = 0.6 + \mathbb{1}(L > 0) \cdot \Delta H \cdot \operatorname{sign}(\Delta h)$$

where $\Delta H$, the rated value of indoor humidity regulation (the change amount of one indoor humidity adjustment), is set to 0.05 in the invention; $H_{set}$, the strategy output, is the value to which the current indoor humidity should be adjusted; and $\Delta h$, the indoor humidity variable, is the humidity value measured at the previous time minus the humidity value measured at the current time.

The significance of this strategy is: when the window is closed, the indoor humidity is kept at 0.6. When the window is open and the indoor humidity is falling, the setpoint is raised above the 0.6 baseline (and conversely lowered below it when the humidity is rising), so that the average indoor humidity over a period of time is kept at 0.6.
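Putting the three strategies together yields a control loop of the following shape; the reconstructed update rules, the CO2 threshold C0 and the "large opening" cutoff are assumptions, since the exact formulas do not survive in the text:

```python
DT, DV, DH = 2.0, 0.005, 0.05  # rated adjustment values stated in the text
C0 = 1000.0                    # assumed CO2 threshold (ppm); not given in the text

def control_setpoints(a: int, L: float, C: float, dh: float):
    """Return (temperature, air flow rate, humidity) setpoints, or None for shutdown.

    a: behavior score (0 neutral, -1 feeling hot, +1 feeling cold)
    L: window opening degree in [0, 1]; C: CO2 concentration
    dh: humidity at the previous time minus humidity at the current time
    """
    if L > 0.5:  # assumed cutoff for the "adjust to outdoor" category: systems shut down
        return None
    t_set = 25.0 + a * DT * (1.0 + L)                         # temperature strategy
    v_set = 0.25 + DV * (1.0 if C > C0 else 0.0) * (1.0 - L)  # air flow rate strategy
    if L > 0:
        h_set = 0.6 + DH * (1.0 if dh > 0 else -1.0 if dh < 0 else 0.0)
    else:
        h_set = 0.6                                           # window closed: hold 60%
    return t_set, v_set, h_set
```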
The invention also provides an indoor heating and ventilation control system, which comprises:
the acquisition module is used for acquiring indoor video data and environmental parameters;
the window opening degree calculating module is used for inputting the video data into a trained indoor window segmentation model IWS-Net to obtain window segmentation mask images, and calculating the indoor window opening degree according to the window segmentation mask images;
the indoor personnel thermal comfort behavior recognition module is used for calculating the reverse optical flow sequence and human skeleton keypoint sequence corresponding to the video data, and inputting the video data, the reverse optical flow sequence and the human skeleton keypoint sequence into the trained indoor personnel behavior recognition model IDARM to obtain the indoor personnel thermal comfort behaviors;
and the indoor heating and ventilation control module is used for calculating an indoor heating and ventilation control adjustment strategy according to the obtained indoor window opening degree, the indoor personnel thermal comfort behavior and the environmental parameters.
The system may be implemented with reference to embodiments of the foregoing method.
Further, the video data is acquired by a camera. The camera can be placed on the indoor top, ensures that it can clearly shoot the pictures of the indoor global and window, and measures the distance and angle between the camera and the detection window and the pitch angle, yaw angle and roll angle of the camera.
Specifically, the video acquired by the camera is sampled according to a certain frequency to acquire a corresponding video frame, and then the video frame is input into an indoor window segmentation model IWS-Net to carry out window segmentation.
Further, the environmental parameter is obtained by an indoor thermal comfort measuring instrument. The indoor thermal comfort measuring instrument mainly acquires indoor air temperature, relative humidity, air flow rate and carbon dioxide concentration.
Optionally, a cyclic full-pair domain converter RAFT is used to calculate a corresponding backlight stream sequence from an RGB video frame sequence acquired by a camera, and simultaneously a Mediapipe library is used to detect skeleton key points of each frame image in the video, so as to form a human skeleton key point sequence.
Further, the operation of calculating the inverse optical flow sequence is as follows:
(1) Selecting two adjacent frames of video frame sequence as marks And->And fills its width and height to multiples of 8;
(2) At the futureAnd->After the exchange, the reverse-light flow graph is obtained as the input of the pre-training cycle full-pair domain converter RAFT;
(3) Reversing the direction of the values in the backlight flow graph and repeating the step (1) until the video is finished.
Further, the operation of calculating the key points of the human skeleton in each frame is as follows:
(1) Separating the video frame sequence frame by frame;
(2) Detecting key points of human bones in each frame of image by using Mediapipe in Python;
(3) Storing the 33 human skeleton key points obtained in the step (2), and repeating the step (1) until the step (1) is finished.
The training of the indoor window segmentation model IWS-Net further comprises:
collecting images of indoor windows, preprocessing the images and forming a training set;
inputting the training set into a constructed indoor window segmentation model IWS-Net for training to obtain model parameters meeting the precision requirement;
and loading the model parameters into an indoor window segmentation model IWS-Net to obtain a trained indoor window segmentation model IWS-Net.
Further, after an image of an indoor window is acquired, labeling is carried out on the image by using labelme, and walls, windows and window edges in the image are segmented, and the rest parts are divided into background types. All data are proportioned into test and training sets.
The operation process of the indoor window segmentation model IWS-Net comprises the following steps:
the method comprises the steps of performing feature extraction on an input indoor window image by using a backbone network formed by 5 feature extraction modules in sequence, and storing the output of each feature extraction module;
the output of the last feature extraction module is subjected to background feature suppression and required segmentation part feature enhancement by using 3 attention modules which are connected in sequence;
and reconstructing the output of the last attention module and the output of each feature extraction module by using a reconstruction module to obtain mask images of walls, windows, window edges and the background.
Further, the reconstruction module includes 5 upsampling modules, wherein the first upsampling moduleThe input of the module is that ofThe output of the individual modules and->Output of the individual feature extraction module>Composition is prepared.
The window opening degree calculating module is further used for executing the following operations:
after obtaining mask images of walls, windows, window edges and backgrounds, extracting window images, and sequentially carrying out filtering, threshold segmentation and opening operation processing on the extracted window images;
acquiring the vertex coordinates of a plurality of minimum inscribed rectangles and each rectangle on the processed window image;
And calculating the opening proportion of the sliding window according to the number of the acquired rectangles, the vertex coordinates and the area of each rectangle.
Specifically, windows in the image are extracted by using a mask of the windows, and Gaussian filtering is performed to remove noise; then threshold segmentation is utilized to generate a binary image; and then carrying out open operation on the binary image to remove the hair points on the binary image. Searching the minimum inscribed rectangle on the obtained image, counting the number of the rectangles, and obtaining the coordinates of four vertexes of the rectangles. And finally, calculating the opening proportion of the sliding window according to the number of the rectangles, the coordinates of the four vertexes and the area.
The window opening degree calculating module is further used for executing the following operations:
after obtaining mask images of walls, windows, window edges and backgrounds, detecting by using Huo Fuxian to obtain a straight line at the upper end of the window edge and a straight line at the lower end of the window edge;
calculating the slope of the two straight lines, and calculating the included angle of the two straight lines according to the slope;
and obtaining the actual opening angle of the out-swinging window by combining the relative distance and angle between the shooting device and the window and the pitch angle, yaw angle and roll angle of the shooting device by using the deep neural network model DNN.
The training of the indoor personnel behavior recognition model IDARM further comprises the following steps:
Acquiring video data of the thermal comfort behaviors of indoor personnel, and constructing a training set;
inputting the training set into a constructed indoor personnel behavior recognition model IDARM for training to obtain model parameters meeting the precision requirement;
and loading the model parameters into an indoor personnel behavior recognition model IDARM to obtain a trained indoor personnel behavior recognition model IDARM.
Further, constructing the training set includes video acquisition of the following 6 actions for the plurality of subjects: (1) sitting; (2) walking; (3) a hand fan; (4) shaking clothes; (5) hand rolling; (6) shoulder embracing, wherein each video is 3 to 5 seconds long, and the frame rate is 30FPS; the collected video was recorded according to 1: the scale of 1 is divided into training and test sets.
The operation process of the indoor personnel behavior recognition model IDARM comprises the following steps:
respectively extracting features from the video frame sequence and the reverse optical flow sequence to obtain video features and reverse optical flow features;
enhancing the video features, the reverse optical flow features and the positional encodings through an encoding module (Encoder) to obtain enhanced video features;
taking the enhanced video features, the reverse optical flow features and the human skeleton key point sequence as inputs of a decoding module (Decoder), which outputs an optical flow query and a content query;
and applying a multi-head linear transformation to the optical flow query and the content query, then obtaining the confidence of each thermal-comfort-related action through a fully connected network and a Softmax function.
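A hypothetical PyTorch sketch of this output stage follows; the dimensions, the per-head linear maps and the fusion of the two queries by concatenation are assumptions, as the description only names the operations.

import torch
import torch.nn as nn

class ComfortActionHead(nn.Module):
    # Multi-head linear transformation over the fused flow/content queries,
    # followed by a fully connected layer and Softmax over the 6 actions
    def __init__(self, d_model: int = 256, n_heads: int = 8, n_actions: int = 6):
        super().__init__()
        assert d_model % n_heads == 0
        self.heads = nn.ModuleList(
            nn.Linear(2 * d_model, d_model // n_heads) for _ in range(n_heads)
        )
        self.fc = nn.Linear(d_model, n_actions)

    def forward(self, flow_query: torch.Tensor, content_query: torch.Tensor):
        x = torch.cat([flow_query, content_query], dim=-1)  # (batch, 2*d_model)
        x = torch.cat([h(x) for h in self.heads], dim=-1)   # (batch, d_model)
        return torch.softmax(self.fc(x), dim=-1)            # per-action confidence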
The indoor personnel thermal comfort behavior recognition module is also used for executing the following operations:
constructing a data tensor D = [T, H, V, C, L, A], wherein T is the air temperature, H is the relative humidity, V is the air flow rate, C is the carbon dioxide concentration, L is the window opening degree, and A is the behavior made by indoor personnel;
acquiring an optimal regulation strategy by using an indoor temperature and humidity regulation control algorithm according to the data tensor;
and adjusting the indoor heating, ventilation and air conditioning control system according to the optimal regulation strategy.
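For illustration only, the sketch below assembles the data tensor D and applies a toy rule-based stand-in for the regulation step; every threshold and the behavior-to-intent mapping are assumptions, since the indoor temperature and humidity regulation control algorithm itself is not disclosed in this passage.

from dataclasses import dataclass

@dataclass
class StateTensor:
    # The data tensor D = [T, H, V, C, L, A] described above
    T: float  # air temperature (deg C)
    H: float  # relative humidity (%)
    V: float  # air flow rate (m/s)
    C: float  # CO2 concentration (ppm)
    L: float  # window opening degree, 0..1
    A: str    # recognized occupant thermal-comfort behavior

def regulation_strategy(d: StateTensor) -> str:
    # Toy mapping: fanning/shaking suggest feeling hot, rubbing/hugging cold
    feels_hot = d.A in ("hand_fanning", "shaking_clothes")
    feels_cold = d.A in ("hand_rubbing", "shoulder_hugging")
    if d.C > 1000 and d.L < 0.2:
        return "increase ventilation"
    if feels_hot or d.T > 27:
        return "lower cooling setpoint and raise air flow"
    if feels_cold or d.T < 18:
        return "raise heating setpoint and reduce draft"
    return "hold current settings"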
According to an aspect of the present description, there is provided a computer readable storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the steps of the indoor heating and ventilation control method.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
Finally, it should be noted that: the above embodiments are only for illustrating the technical solution of the present invention, and are not limiting; although the invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some or all of the technical features thereof can be replaced with equivalents; these modifications or substitutions do not depart from the essence of the corresponding technical solutions from the technical solutions of the embodiments of the present invention.

Claims (10)

1. An indoor heating and ventilation control method is characterized by comprising the following steps:
acquiring indoor video data and environmental parameters, wherein the video data are acquired by a shooting device arranged at the top of the room so that the room as a whole and the window can be captured clearly, and the distance and angle between the shooting device and the detected window, as well as the pitch angle, yaw angle and roll angle of the shooting device, are measured;
inputting the video data into a trained indoor window segmentation model IWS-Net to obtain window segmentation mask images, and calculating the indoor window opening degree from the window segmentation mask images; when the indoor window opening degree is calculated, the actual opening angle of the outward-opening casement window is obtained by using the deep neural network model DNN in combination with the relative distance and angle between the shooting device and the window and the pitch angle, yaw angle and roll angle of the shooting device;
calculating the reverse optical flow sequence and the human skeleton key point sequence corresponding to the video data, and inputting the video data, the reverse optical flow sequence and the human skeleton key point sequence into a trained indoor personnel behavior recognition model IDARM to obtain the indoor personnel thermal comfort behavior;
and determining an indoor heating and ventilation control adjustment strategy according to the obtained indoor window opening degree, the indoor personnel thermal comfort behavior and the environmental parameters.
2. The indoor heating and ventilation control method according to claim 1, wherein the training of the indoor window segmentation model IWS-Net comprises:
collecting images of indoor windows, preprocessing the images and forming a training set;
inputting the training set into a constructed indoor window segmentation model IWS-Net for training to obtain model parameters meeting the precision requirement;
and loading the model parameters into an indoor window segmentation model IWS-Net to obtain a trained indoor window segmentation model IWS-Net.
3. The indoor heating and ventilation control method according to claim 2, wherein the operation process of the indoor window segmentation model IWS-Net comprises:
performing feature extraction on the input indoor window image by using a backbone network formed by 5 sequentially connected feature extraction modules, and storing the output of each feature extraction module;
applying 3 sequentially connected attention modules to the output of the last feature extraction module, so as to suppress background features and enhance the features of the part to be segmented;
and reconstructing the output of the last attention module and the output of each feature extraction module by using a reconstruction module to obtain mask images of walls, windows, window edges and the background.
4. A method of controlling indoor heating ventilation according to claim 3, further comprising:
after obtaining the mask images of the walls, windows, window edges and background, extracting the window image, and sequentially applying filtering, threshold segmentation and a morphological opening operation to the extracted window image;
acquiring a plurality of minimum bounding rectangles on the processed window image, together with the vertex coordinates of each rectangle;
and calculating the opening proportion of the sliding window according to the number of rectangles acquired, their vertex coordinates and the area of each rectangle.
5. A method of controlling indoor heating ventilation according to claim 3, further comprising:
after obtaining the mask images of the walls, windows, window edges and background, using Hough line detection to obtain the straight line at the upper end of the window edge and the straight line at the lower end of the window edge;
calculating the slopes of the two straight lines, and calculating the included angle between the two straight lines from the slopes;
and obtaining the actual opening angle of the outward-opening casement window by using the deep neural network model DNN in combination with the relative distance and angle between the shooting device and the window and the pitch angle, yaw angle and roll angle of the shooting device.
6. The indoor heating and ventilation control method according to claim 1, wherein the training of the indoor personnel behavior recognition model IDARM comprises:
Acquiring video data of the thermal comfort behaviors of indoor personnel, and constructing a training set;
inputting the training set into a constructed indoor personnel behavior recognition model IDARM for training to obtain model parameters meeting the precision requirement;
and loading the model parameters into an indoor personnel behavior recognition model IDARM to obtain a trained indoor personnel behavior recognition model IDARM.
7. The indoor heating and ventilation control method according to claim 6, wherein the operation process of the indoor personnel behavior recognition model IDARM comprises:
respectively extracting features from the video frame sequence and the reverse optical flow sequence to obtain video features and reverse optical flow features;
enhancing the video features, the reverse optical flow features and the positional encodings through an encoding module (Encoder) to obtain enhanced video features;
taking the enhanced video features, the reverse optical flow features and the human skeleton key point sequence as inputs of a decoding module (Decoder), which outputs an optical flow query and a content query;
and applying a multi-head linear transformation to the optical flow query and the content query, and obtaining the confidence corresponding to each thermal-comfort-related action through a fully connected network and a Softmax function.
8. The indoor heating and ventilation control method according to claim 1, further comprising:
Constructing a data tensor d= [ T, H, V, C, L, A ], wherein T is air temperature, H is relative humidity, V is air flow rate, C is carbon dioxide concentration, L is window opening degree, and A is behavior made by indoor personnel;
acquiring an optimal regulation strategy by using an indoor temperature and humidity regulation control algorithm according to the data tensor;
and adjusting the indoor heating, ventilation and air conditioning control system according to the optimal regulation strategy.
9. An indoor heating ventilation control system, comprising:
the acquisition module is used for acquiring indoor video data and environmental parameters, wherein the video data are acquired by a shooting device arranged at the top of the room so that the room as a whole and the window can be captured clearly, and the distance and angle between the shooting device and the detected window, as well as the pitch angle, yaw angle and roll angle of the shooting device, are measured;
the window opening degree calculating module is used for inputting the video data into a trained indoor window segmentation model IWS-Net to obtain window segmentation mask images, and calculating the indoor window opening degree from the window segmentation mask images; when the indoor window opening degree is calculated, the actual opening angle of the outward-opening casement window is obtained by using the deep neural network model DNN in combination with the relative distance and angle between the shooting device and the window and the pitch angle, yaw angle and roll angle of the shooting device;
The indoor personnel thermal comfort behavior recognition module is used for calculating the reverse optical flow sequence and the human skeleton key point sequence corresponding to the video data, and inputting the video data, the reverse optical flow sequence and the human skeleton key point sequence into the trained indoor personnel behavior recognition model IDARM to obtain the indoor personnel thermal comfort behavior;
and the indoor heating and ventilation control module is used for determining an indoor heating and ventilation control adjustment strategy according to the obtained indoor window opening degree, the indoor personnel thermal comfort behavior and the environmental parameters.
10. A computer readable storage medium, wherein a computer program is stored on the computer readable storage medium, wherein the computer program, when executed by a processor, implements the steps of the indoor heating ventilation control method according to any one of claims 1 to 8.
CN202311644580.3A 2023-12-04 2023-12-04 Indoor heating and ventilation control method, system and medium Active CN117346285B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311644580.3A CN117346285B (en) 2023-12-04 2023-12-04 Indoor heating and ventilation control method, system and medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311644580.3A CN117346285B (en) 2023-12-04 2023-12-04 Indoor heating and ventilation control method, system and medium

Publications (2)

Publication Number Publication Date
CN117346285A (en) 2024-01-05
CN117346285B (en) 2024-03-26

Family

ID=89367016

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311644580.3A Active CN117346285B (en) 2023-12-04 2023-12-04 Indoor heating and ventilation control method, system and medium

Country Status (1)

Country Link
CN (1) CN117346285B (en)

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5208785B2 (en) * 2009-01-28 2013-06-12 株式会社東芝 VIDEO DISPLAY DEVICE, VIDEO DISPLAY DEVICE CONTROL METHOD, AND CONTROL PROGRAM

Patent Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1319900A1 (en) * 2001-12-13 2003-06-18 Lg Electronics Inc. Air conditioner and method for controlling the same
JP2007024416A (en) * 2005-07-19 2007-02-01 Daikin Ind Ltd Air conditioner
CN107102022A (en) * 2017-03-07 2017-08-29 青岛海尔空调器有限总公司 Thermal environment Comfort Evaluation method based on thermal manikin
DE102018204789A1 (en) * 2018-03-28 2019-10-02 Robert Bosch Gmbh Method for climate control in rooms
CN109489226A (en) * 2018-12-27 2019-03-19 厦门天翔园软件科技有限公司 A kind of air-conditioning indoor energy-saving policy management system and air conditioning control method
CN109948472A (en) * 2019-03-04 2019-06-28 南京邮电大学 A kind of non-intrusion type human thermal comfort detection method and system based on Attitude estimation
CN112303861A (en) * 2020-09-28 2021-02-02 山东师范大学 Air conditioner temperature adjusting method and system based on human body thermal adaptability behavior
CN113435508A (en) * 2021-06-28 2021-09-24 中冶建筑研究总院(深圳)有限公司 Method, device, equipment and medium for detecting opening state of glass curtain wall opening window
CN115540286A (en) * 2022-08-16 2022-12-30 青岛海尔空调器有限总公司 Air conditioning system control method and device, air conditioning system and storage medium
CN115457056A (en) * 2022-09-20 2022-12-09 北京威高智慧科技有限公司 Skeleton image segmentation method, device, equipment and storage medium
CN115682368A (en) * 2022-10-31 2023-02-03 西安建筑科技大学 Non-contact indoor thermal environment control system and method based on reinforcement learning
CN116258705A (en) * 2023-03-16 2023-06-13 湖南大学 Window opening detection method based on image processing
CN117053378A (en) * 2023-05-18 2023-11-14 苏州科技大学 Intelligent heating ventilation air conditioner regulating and controlling method based on user portrait

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
孙旭灿; 潘玉勤; 常建国; 王放. 基于热舒适性理论的智能节能窗控制策略研究 [Research on the control strategy of intelligent energy-saving windows based on thermal comfort theory]. 科技通报 (Bulletin of Science and Technology), 2020, (08), pp. 58-53. *

Also Published As

Publication number Publication date
CN117346285A (en) 2024-01-05

Similar Documents

Publication Publication Date Title
CN112766160B (en) Face replacement method based on multi-stage attribute encoder and attention mechanism
CN110889852B (en) Liver segmentation method based on residual error-attention deep neural network
CN111627019A (en) Liver tumor segmentation method and system based on convolutional neural network
CN111882560B (en) Lung parenchyma CT image segmentation method based on weighted full convolution neural network
JP2023550844A (en) Liver CT automatic segmentation method based on deep shape learning
CN114066871B (en) Method for training new coronal pneumonia focus area segmentation model
CN110543906A (en) Skin type automatic identification method based on data enhancement and Mask R-CNN model
CN116311483B (en) Micro-expression recognition method based on local facial area reconstruction and memory contrast learning
CN110930378A (en) Emphysema image processing method and system based on low data demand
CN116228639A (en) Oral cavity full-scene caries segmentation method based on semi-supervised multistage uncertainty perception
CN111126155B (en) Pedestrian re-identification method for generating countermeasure network based on semantic constraint
CN114565755B (en) Image segmentation method, device, equipment and storage medium
CN113643297B (en) Computer-aided age analysis method based on neural network
CN114862800A (en) Semi-supervised medical image segmentation method based on geometric consistency constraint
CN117346285B (en) Indoor heating and ventilation control method, system and medium
CN112906675A (en) Unsupervised human body key point detection method and system in fixed scene
CN116311472A (en) Micro-expression recognition method and device based on multi-level graph convolution network
CN115018729B (en) Content-oriented white box image enhancement method
CN115690115A (en) Lung medical image segmentation method based on reconstruction pre-training
CN116012903A (en) Automatic labeling method and system for facial expressions
CN114913164A (en) Two-stage weak supervision new crown lesion segmentation method based on super pixels
CN110276391B (en) Multi-person head orientation estimation method based on deep space-time conditional random field
CN113822175A (en) Virtual fitting image generation method based on key point clustering drive matching
CN116385837B (en) Self-supervision pre-training method for remote physiological measurement based on mask self-encoder
Zhai et al. Multi-focus image fusion via interactive transformer and asymmetric soft sharing

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant