CN113647920A - Method and device for reading vital sign data in monitoring equipment - Google Patents

Method and device for reading vital sign data in monitoring equipment

Info

Publication number
CN113647920A
CN113647920A
Authority
CN
China
Prior art keywords
vital sign
sign data
image
monitoring
monitoring device
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111225368.4A
Other languages
Chinese (zh)
Inventor
冯健
陈栋栋
常培佳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Qingdao Medcare Digital Engineering Co ltd
Original Assignee
Qingdao Medcare Digital Engineering Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Qingdao Medcare Digital Engineering Co ltd filed Critical Qingdao Medcare Digital Engineering Co ltd
Priority to CN202111225368.4A
Publication of CN113647920A
Legal status: Pending

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/02055 Simultaneously evaluating both cardiovascular condition and temperature
    • A61B5/021 Measuring pressure in heart or blood vessels
    • A61B5/024 Detecting, measuring or recording pulse rate or heart rate
    • A61B5/08 Detecting, measuring or recording devices for evaluating the respiratory organs
    • A61B5/14542 Measuring characteristics of blood in vivo for measuring blood gases
    • A61B5/7264 Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems
    • A61B5/7267 Classification of physiological signals or data involving training the classification device
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06F18/24 Classification techniques
    • G06N3/04 Neural network architecture, e.g. interconnection topology
    • G06N3/08 Neural network learning methods

Landscapes

  • Health & Medical Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Molecular Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Artificial Intelligence (AREA)
  • Theoretical Computer Science (AREA)
  • Surgery (AREA)
  • Physiology (AREA)
  • Cardiology (AREA)
  • Veterinary Medicine (AREA)
  • Public Health (AREA)
  • Animal Behavior & Ethology (AREA)
  • Medical Informatics (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Pathology (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computational Linguistics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Psychiatry (AREA)
  • Computing Systems (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Fuzzy Systems (AREA)
  • Signal Processing (AREA)
  • Software Systems (AREA)
  • Pulmonology (AREA)
  • Optics & Photonics (AREA)
  • Vascular Medicine (AREA)
  • Image Analysis (AREA)
  • Measuring And Recording Apparatus For Diagnosis (AREA)

Abstract

The application relates to a method and a device for reading vital sign data in monitoring equipment. The method comprises the following steps: acquiring a monitoring video of the monitoring device through an accessed video signal of the monitoring device; determining an identification object area corresponding to the vital sign data in each frame of the monitoring video; splitting the character images of the identification object area to obtain target images; and invoking a pre-trained optical character recognition (OCR) prediction model to quantize the target images into vital sign data. The system can collect the monitoring video in real time and recognize and quantize the vital sign data in each frame, thereby supporting both standalone and centralized acquisition of monitoring-device video signals, adapting to a variety of conditions, and ensuring a reliable recognition rate.

Description

Method and device for reading vital sign data in monitoring equipment
Technical Field
The invention relates to the field of medicine, in particular to a method and a device for reading vital sign data in monitoring equipment.
Background
A monitor observes a patient primarily by collecting vital sign data. At present, monitors from different manufacturers use their own data transmission protocols; there is no unified protocol standard, most manufacturers do not disclose their protocol contents, and some monitors do not support data transmission at all. These problems make it difficult to obtain vital sign data from monitors, so patients' vital sign data cannot be observed, recorded, and archived in a centralized manner.
Based on this, how to read vital sign data from monitors with different protocol standards is an urgent problem to be solved in this technical field.
Disclosure of Invention
The embodiment of the invention provides a method and a device for reading vital sign data in monitoring equipment, so as to at least solve the above problem.
In a first aspect, the present invention provides a method for reading vital sign data in a monitoring device, where the method for reading vital sign data in a monitoring device includes:
acquiring a monitoring video of the monitoring device through an accessed monitoring device video signal;
determining an identification object area corresponding to vital sign data in each frame of monitored image of the monitored video;
splitting the character image of the identification object area to obtain a target image;
calling an Optical Character Recognition (OCR) prediction model obtained by pre-training, and quantizing the target image into vital sign data through the OCR prediction model;
the construction method of the OCR prediction model comprises the following steps:
constructing a neural network model, wherein the structure of the neural network model sequentially comprises an input layer, a first convolution layer, a first pooling layer, a second convolution layer, a full-connection layer, a second pooling layer and an output layer;
acquiring a monitoring training image of monitoring equipment;
splitting a character image of the identification object region of the monitored training image to obtain a training image;
and classifying the training images, and training by using the neural network model to obtain the OCR prediction model.
Optionally, acquiring the monitoring video of the monitoring device through the accessed monitoring device video signal includes:
and acquiring the monitoring video of the monitoring equipment by adopting a real-time streaming protocol (RTSP) through the accessed monitoring equipment video signal.
Optionally, splitting the character image of the identification object area includes:
carrying out binarization processing on the identification object area;
determining the boundary of the character image in the binarized identification object area;
and splitting the character image in the boundary by adopting a vertical projection mode.
Optionally, determining the boundary of the character image in the binarized identification object area includes:
traversing the binarized identification object region line by line;
determining first white pixel points in four side directions in the binarized identification object area;
and determining the boundary of the character image according to the first white pixel points in the four side directions.
Optionally, splitting the character image within the boundary by vertical projection includes:
storing the number of white pixels in each column of pixels of the character image within the boundary;
drawing a projection image in a vertical direction according to the number of pixels in each column;
determining the segmentation points of each character image according to the gray value of each column of the projected image;
and splitting the character image in the boundary according to the segmentation point.
Optionally, before invoking the pre-trained optical character recognition (OCR) prediction model and quantizing the target image into vital sign data through the OCR prediction model, the method includes:
constructing the OCR prediction model.
Optionally, the first convolution layer performs a convolution operation on the input layer using convolution kernels; the first pooling layer performs down-sampling to obtain first feature maps; the second convolution layer performs a convolution operation on the first feature maps using n convolution kernels; the second pooling layer produces n second feature maps; the fully connected layer outputs the second feature maps to the output layer through an activation function; where n is determined by the number of characters to be recognized.
Optionally, the method for reading vital sign data in a monitoring device further includes:
the first pooling layer and the second pooling layer are pooled in an exponential weighting mode accumulation activation mode, a plurality of check point files generated in multiple Epoch processes are averaged in training to obtain a new check point file, and an overfitting resistant secondary training model is obtained.
In a second aspect, the present invention provides an apparatus for reading vital sign data in a monitoring device, including: a memory, a processor, and a computer program stored on the memory and executable on the processor;
the computer program, when being executed by the processor, realizes the steps of the method for reading vital sign data in a monitoring device as set forth in any of the above.
By applying the technical scheme, the monitoring video of the monitoring equipment can be collected in real time, and the vital sign data in each frame of monitoring image can be identified and quantized, so that the independent and centralized collection of the video signals of the monitoring equipment can be supported, the monitoring equipment is suitable for various conditions, and the reliable identification rate is ensured.
Drawings
Fig. 1 is a flow chart of a method of reading vital sign data in a monitoring device according to an embodiment of the invention.
Detailed Description
The present invention will be described in further detail with reference to the following drawings and specific embodiments, it being understood that the specific embodiments described herein are merely illustrative of the invention and are not to be construed as limiting the invention.
In the following description, suffixes such as "module", "component", or "unit" are used only to facilitate the description of the present invention and have no specific meaning in themselves. Thus, "module", "component", and "unit" may be used interchangeably.
Example one
An embodiment of the present invention provides a method for reading vital sign data in a monitoring device, as shown in fig. 1, the method for reading vital sign data in a monitoring device includes:
s101, acquiring a monitoring video of monitoring equipment through an accessed monitoring equipment video signal; specifically, a video signal of the monitor can be accessed to a video acquisition device, and each frame of image of the monitor is acquired through the video acquisition device;
s102, determining an identification object area corresponding to vital sign data in each frame of monitored image of the monitored video; specifically, the identification object area can be determined based on the distribution position of the vital sign data (monitoring data) on the screen image of the monitoring device;
s103, splitting the character image of the identification object area to obtain a target image;
and S104, calling an Optical Character Recognition (OCR) prediction model obtained by pre-training, and quantizing the target image into vital sign data through the OCR prediction model.
The vital sign data (monitoring data) include one or more of heart rate, blood pressure, blood oxygen saturation, respiration, and body temperature.
According to the embodiment of the invention, the monitoring video of the monitoring device is acquired through the accessed video signal, the identification object area corresponding to the vital sign data is determined in each frame of the monitoring video, the character images of the identification object area are split to obtain target images, and the pre-trained OCR prediction model is invoked to quantize the target images into vital sign data. The monitoring video can therefore be collected in real time, and the vital sign data in each frame recognized and quantized, which supports both standalone and centralized acquisition of monitoring-device video signals, adapts to a variety of conditions, and ensures a reliable recognition rate.
In some embodiments, when passing through the accessed monitoring device video signal, the real-time streaming protocol RTSP may be used to collect the monitoring video of the monitoring device, so as to realize the centralized observation and processing function of the monitoring data.
In some embodiments, splitting the character image of the identification object area includes:
carrying out binarization processing on the identification object area; determining the boundary of the character image in the binarized identification object area; and splitting the character image in the boundary by adopting a vertical projection mode.
Wherein, the determining the boundary of the character image in the binarized recognition object area may include:
traversing the binarized identification object region line by line; determining first white pixel points in four side directions in the binarized identification object area; and determining the boundary of the character image according to the first white pixel points in the four side directions. The four side directions of the recognition object area may be leftmost, rightmost, topmost and bottommost.
The splitting the character image in the boundary by using the vertical projection mode may include:
storing the number of white pixels in each column of pixels of the character image within the boundary; drawing a vertical projection image according to the per-column counts; determining the segmentation points of each character image according to the gray value of each column of the projection image; and splitting the character image within the boundary according to the segmentation points.
In detail, the identification object area is determined from the distribution position of the monitored data in the screen image, and each frame of monitoring image is saved. The characters of the vital sign data in the identification object area of each frame are then split. For example, the monitored image is grayed, binarized, and stripped of its black border: after removing the black boundary, the binarized image is traversed line by line to find the first white pixel points on the leftmost, rightmost, topmost, and bottommost sides, which determine the boundary of the digit area. The binarized image is then segmented by vertical projection and broken into single digits, specifically as follows:
First, an array is defined to store the number of white pixels in each column; a projection image is drawn from these counts, the gray value of the white pixels, and their positions in the array. The projection image reflects the number of pixels of the digit area in the vertical direction, so each column of the projection image only needs to be examined to find the segmentation points. Finally, black-border removal is applied to each segmented digit image to obtain the target image (during OCR prediction model training, this yields the training data).
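The vertical-projection split can be sketched as follows (illustrative names; the patent gives no code): count the white pixels per column, then cut wherever a run of non-empty columns ends, giving one column span per digit:

```python
def split_by_vertical_projection(binary):
    """Split a binarized character strip into per-digit column spans
    by cutting at columns that contain no white (255) pixels."""
    cols = len(binary[0])
    # Vertical projection: white-pixel count for each column.
    counts = [sum(1 for row in binary if row[c] == 255) for c in range(cols)]
    spans, start = [], None
    for c, n in enumerate(counts):
        if n > 0 and start is None:
            start = c                     # a digit begins
        elif n == 0 and start is not None:
            spans.append((start, c - 1))  # a digit ends before this gap
            start = None
    if start is not None:
        spans.append((start, cols - 1))   # digit runs to the right edge
    return spans
```

Each span `(first_col, last_col)` is then cropped out as one digit image for the OCR model.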
In some embodiments, before invoking the pre-trained optical character recognition (OCR) prediction model and quantizing the target image into vital sign data through the OCR prediction model, the method includes:
constructing a neural network model, wherein the structure of the neural network model sequentially comprises an input layer, a first convolution layer, a first pooling layer, a second convolution layer, a full-connection layer, a second pooling layer and an output layer;
acquiring a monitoring training image of monitoring equipment;
splitting a character image of the identification object region of the monitored training image to obtain a training image;
and classifying the training images, and training by using the neural network model to obtain the OCR prediction model.
Optionally, the first convolution layer performs a convolution operation on the input layer using convolution kernels; the first pooling layer performs down-sampling to obtain first feature maps; the second convolution layer performs a convolution operation on the first feature maps using n convolution kernels; the second pooling layer produces n second feature maps; the fully connected layer outputs the second feature maps to the output layer through an activation function. Specifically, during training with the neural network model, the first convolution layer convolves the input layer with 3 convolution kernels of size 5×5; the first pooling layer, with pooling size 2×2, down-samples to obtain 3 first feature maps of size 14×14; the second convolution layer convolves the first feature maps with n convolution kernels of size 5×5; the second pooling layer yields n second feature maps of size 5×5; the fully connected layer outputs the second feature maps to the output layer through an activation function; where n is determined by the number of characters to be recognized.
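The feature-map sizes quoted above are consistent with a 32×32 input under "valid" (unpadded) convolution and non-overlapping 2×2 pooling; a small arithmetic check (the 32×32 input size is inferred from the stated 14×14 and 5×5 maps, not stated explicitly here):

```python
def conv_out(size, kernel, stride=1):
    # Output width/height of a 'valid' (no padding) convolution.
    return (size - kernel) // stride + 1

def pool_out(size, window=2):
    # Output width/height of non-overlapping window x window pooling.
    return size // window

# Trace the layer sizes quoted above, assuming a 32x32 input:
s = conv_out(32, 5)   # first conv, 5x5 kernels  -> 28
s = pool_out(s)       # first pooling, 2x2       -> 14 (the stated 14x14 maps)
s = conv_out(s, 5)    # second conv, 5x5 kernels -> 10
s = pool_out(s)       # second pooling, 2x2      -> 5  (the stated 5x5 maps)
```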
For example, a customized neural network model is built with TensorFlow; the network structure comprises an input layer, two convolution layers, one fully connected layer, and an output layer. The input layer INPUT takes 32×32×1 images. Because the network model ultimately runs on a CPU, a 5×5 convolution kernel in the first convolution layer CONV1 would be computationally expensive, so a 3×3 convolution kernel with stride 1 is used to convolve the input layer. The second convolution layer CONV2 uses 13 convolution kernels of size 3×3; the fully connected layer is connected to the pooling layer POOL2 and passes through a Sigmoid activation function to the output layer, finally producing 11 classes, corresponding to the digits 0-9 and a background class. The background class covers the image background and the isolated symbols between the data. After classifying the collected training images, 1000 epochs are trained with the customized neural network model. The checkpoint files generated over the epochs are averaged, that is, all parameters in the model are averaged, and a new checkpoint file is created after averaging; the resulting overfitting-resistant secondary trained model performs better than any single model along the preceding training path.
The first pooling layer and the second pooling layer of the present invention pool activations by exponentially weighted accumulation. Compared with a range of other pooling methods, this pooling retains more information in the down-sampled activation map, and the finer down-sampling yields better classification accuracy. Optionally, the first pooling layer and the second pooling layer pool in this exponentially weighted accumulation manner, and the checkpoint files generated over multiple epochs are averaged during training to obtain a new checkpoint file, yielding an overfitting-resistant secondary trained model.
Let R be the set of indices corresponding to the activations of the two-dimensional spatial region under consideration, and define $|R| = k^2$ for a pooling kernel of size k × k. The output of the pooling operation is denoted $\tilde{a}$, and the corresponding gradient is denoted $\nabla \tilde{a}$.

The pooling approach uses a smooth approximation of the maximum within the activation region: each activation $a_i$ with index $i$ is assigned a weight $w_i$, calculated as the ratio of the natural exponential of that activation to the sum of the natural exponentials of all activations in the neighborhood R:

$$w_i = \frac{e^{a_i}}{\sum_{j \in R} e^{a_j}}$$

The weights act together with the corresponding activation values as a non-linear transformation. Higher activations dominate lower ones. Since most pooling operations are performed in a high-dimensional feature space, highlighting the activations with greater effect is a more balanced approach than simply selecting the maximum, which discards the majority of the regional feature information.

The output value of the pooling operation is obtained by summing all weighted activations within the kernel neighborhood R:

$$\tilde{a} = \sum_{i \in R} w_i a_i$$
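The exponentially weighted pooling described above can be sketched for a single k × k neighborhood in pure Python (the function name is illustrative):

```python
import math

def exp_weighted_pool(region):
    """Exponentially weighted accumulation over one pooling neighborhood R:
    w_i = exp(a_i) / sum_j exp(a_j);  output = sum_i w_i * a_i."""
    acts = [a for row in region for a in row]          # flatten the k x k region
    denom = sum(math.exp(a) for a in acts)             # sum of natural exponentials
    return sum((math.exp(a) / denom) * a for a in acts)
```

Unlike max pooling, every activation contributes to the output, with higher activations weighted more heavily, so more of the regional information survives the down-sampling.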
The collected training images are classified and then trained with the customized neural network model; to prevent overfitting, training is ended when the accuracy on the validation set has not improved for 10 consecutive evaluations.
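The checkpoint-averaging step can be illustrated with checkpoints represented as dicts of parameter lists (a simplification of real checkpoint files, which store named tensors):

```python
def average_checkpoints(checkpoints):
    """Average each named parameter across several checkpoints to build
    the new averaged checkpoint described above."""
    n = len(checkpoints)
    avg = {}
    for name in checkpoints[0]:
        length = len(checkpoints[0][name])
        # Element-wise mean of this parameter across all checkpoints.
        avg[name] = [sum(ckpt[name][i] for ckpt in checkpoints) / n
                     for i in range(length)]
    return avg
```

Averaging parameters from several points along the training path smooths out epoch-to-epoch noise, which is the anti-overfitting effect the patent attributes to the secondary trained model.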
In this way, graying, binarization preprocessing, black-border removal, and digit splitting can be applied to the identification object area, and each split digit image is passed to the OCR prediction model for prediction to obtain the prediction result (the quantized vital sign data), which is then displayed and stored.
In some embodiments, the method for reading vital sign data in a monitoring device further comprises: converting the vital sign data according to the Health Level 7 (HL7) health information exchange protocol. Packaging the monitored data into the standardized HL7 protocol supports querying and transmission of the data.
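A minimal sketch of packing quantized vitals into HL7 v2 OBX (observation result) segments; the actual segment layout, observation codes, and field values used by the system are not specified in the patent, so those below are assumptions:

```python
def vitals_to_hl7_obx(vitals):
    """Build one HL7 v2 OBX segment per (name, value, unit) vital sign.
    Fields: set-ID, value type NM (numeric), observation identifier,
    value, units, and result status F (final). Codes are illustrative."""
    segments = []
    for i, (name, value, unit) in enumerate(vitals, start=1):
        segments.append(f"OBX|{i}|NM|{name}||{value}|{unit}|||||F")
    # HL7 v2 separates segments with carriage returns.
    return "\r".join(segments)
```

A real message would wrap these OBX segments in MSH/PID/OBR segments of an ORU^R01 result message; this sketch shows only the per-vital observation lines.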
The embodiment of the invention breaks through the hardware limitations of monitoring equipment and realizes standardized transmission of the vital sign data of the monitoring equipment. The customized neural network model ensures both the accuracy of vital sign data extraction and the compatibility of the system.
Example two
The embodiment of the invention provides a device for reading vital sign data in monitoring equipment, which comprises: a memory, a processor, and a computer program stored on the memory and executable on the processor;
the computer program, when executed by the processor, implements the steps of the method for reading vital sign data in a monitoring device according to any one of the embodiments above.
EXAMPLE III
An embodiment of the present invention provides a computer-readable storage medium storing a program for reading vital sign data in a monitoring device; when the program is executed by a processor, the steps of the method for reading vital sign data in a monitoring device according to any one of the embodiments above are implemented.
For the specific implementation of the second and third embodiments, reference may be made to the first embodiment, and corresponding technical effects are achieved.
While the present invention has been described with reference to the embodiments shown in the drawings, these embodiments are illustrative rather than restrictive, and it will be apparent to those skilled in the art that various changes and modifications can be made without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (8)

1. A method for reading vital sign data in a monitoring device, the method comprising:
acquiring a monitoring video of the monitoring device through an accessed monitoring device video signal;
determining an identification object area corresponding to vital sign data in each frame of monitored image of the monitored video;
splitting the character image of the identification object area to obtain a target image;
calling an Optical Character Recognition (OCR) prediction model obtained by pre-training, and quantizing the target image into vital sign data through the OCR prediction model;
the construction method of the OCR prediction model comprises the following steps:
constructing a neural network model, wherein the structure of the neural network model sequentially comprises an input layer, a first convolution layer, a first pooling layer, a second convolution layer, a full-connection layer, a second pooling layer and an output layer;
acquiring a monitoring training image of monitoring equipment;
splitting a character image of the identification object region of the monitored training image to obtain a training image;
and classifying the training images, and training by using the neural network model to obtain the OCR prediction model.
2. The method of claim 1, wherein the acquiring the monitored video of the monitoring device via the accessed monitoring device video signal comprises:
and acquiring the monitoring video of the monitoring equipment by adopting a real-time streaming protocol (RTSP) through the accessed monitoring equipment video signal.
3. The method of reading vital sign data in a monitoring device of claim 1, wherein the character image splitting of the identified subject region comprises:
carrying out binarization processing on the identification object area;
determining the boundary of the character image in the binarized identification object area;
and splitting the character image in the boundary by adopting a vertical projection mode.
4. The method of reading vital sign data in a monitoring device according to claim 3, wherein determining the boundary of the character images in the binarized identification object area comprises:
traversing the binarized identification object area row by row;
determining the first white pixel point reached from each of the four sides of the binarized identification object area;
and determining the boundary of the character images from these four first white pixel points.
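The four-side scan of claim 4 can be sketched as follows; `np.nonzero` is a vectorized stand-in for the row-by-row traversal the claim describes, and it yields the same four extremes:

```python
import numpy as np

def character_boundary(binary):
    """Find the first white pixel reached from each of the four sides of
    the binarized area; together they bound the character images.
    Returns (top, bottom, left, right) row/column indices."""
    ys, xs = np.nonzero(binary)          # coordinates of all white pixels
    if ys.size == 0:
        return None                      # region contains no characters
    return ys.min(), ys.max(), xs.min(), xs.max()
```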
5. The method according to claim 3, wherein splitting the character images within the boundary by vertical projection comprises:
counting and storing the number of white pixel points in each column of pixels of the character images within the boundary;
drawing a vertical projection image from the per-column counts;
determining the segmentation points between character images from the gray value of each column of the projection image;
and splitting the character images within the boundary at the segmentation points.
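The vertical-projection split of claim 5, sketched with per-column white-pixel counts standing in for the projection image's gray values: a zero count marks a gap between characters, and the image is cut at those gaps:

```python
import numpy as np

def split_by_vertical_projection(binary):
    """Count white pixels per column; columns with a zero count separate
    characters, so each run of non-empty columns is cut out as one image."""
    counts = (binary > 0).sum(axis=0)        # vertical projection profile
    in_char, start, pieces = False, 0, []
    for x, c in enumerate(counts):
        if c > 0 and not in_char:
            in_char, start = True, x         # a character run begins
        elif c == 0 and in_char:
            in_char = False
            pieces.append(binary[:, start:x])  # run ended: cut here
    if in_char:
        pieces.append(binary[:, start:])     # run reaches the right edge
    return pieces

# Two "characters" (columns 1-2 and 4-5) separated by an empty column.
bw = np.zeros((3, 7), dtype=np.uint8)
bw[:, 1:3] = 255
bw[:, 4:6] = 255
pieces = split_by_vertical_projection(bw)    # two 3x2 character images
```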
6. The method of reading vital sign data in a monitoring device of claim 1, wherein the first convolution layer convolves the input layer with one convolution kernel; the first pooling layer performs down-sampling to obtain a first feature map; the second convolution layer convolves the first feature map with n convolution kernels; the second pooling layer processes the result to obtain n second feature maps; the fully connected layer applies an activation function and passes the second feature maps to the output layer; where n is determined by the number of characters to be recognized.
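The layer sizes implied by claim 6 can be traced with the standard convolution/pooling output-size formulas. The 28x28 input, 5x5 kernels, 2x2 pooling windows, and n = 12 below are all hypothetical, since the claim fixes none of them:

```python
def conv_out(size, kernel, stride=1, pad=0):
    """Spatial output size of a convolution along one dimension."""
    return (size + 2 * pad - kernel) // stride + 1

def pool_out(size, window, stride=None):
    """Spatial output size of a pooling (down-sampling) layer."""
    stride = stride or window
    return (size - window) // stride + 1

n = 12                      # hypothetical number of character classes
s = conv_out(28, 5)         # first conv, one 5x5 kernel: 28 -> 24
s = pool_out(s, 2)          # first pooling: 24 -> 12 (first feature map)
s = conv_out(s, 5)          # second conv, n kernels: 12 -> 8
s = pool_out(s, 2)          # second pooling: 8 -> 4 (n second feature maps)
fc_inputs = n * s * s       # values flattened into the fully connected layer
```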
7. The method of reading vital sign data in a monitoring device according to claim 1, further comprising:
the first pooling layer and the second pooling layer pool by exponentially weighted accumulation of activations; during training, the multiple checkpoint files generated over multiple epochs are averaged to obtain a new checkpoint file, yielding a retrained model that resists overfitting.
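The checkpoint-averaging step of claim 7 (akin to stochastic weight averaging) is straightforward to sketch; the weight dicts below are toy scalars standing in for real parameter tensors:

```python
def average_checkpoints(checkpoints):
    """Average per-parameter values across several checkpoint files
    (saved at different epochs) into one new checkpoint."""
    n = len(checkpoints)
    return {k: sum(c[k] for c in checkpoints) / n for k in checkpoints[0]}

# Two toy epoch checkpoints; each key is one model parameter.
ckpts = [{"w": 1.0, "b": 0.0}, {"w": 3.0, "b": 2.0}]
avg = average_checkpoints(ckpts)
```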
8. An apparatus for reading vital sign data in a monitoring device, the apparatus comprising: a memory, a processor, and a computer program stored in the memory and executable on the processor;
wherein the computer program, when executed by the processor, implements the steps of the method of reading vital sign data in a monitoring device according to any one of claims 1-7.
CN202111225368.4A 2021-10-21 2021-10-21 Method and device for reading vital sign data in monitoring equipment Pending CN113647920A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111225368.4A CN113647920A (en) 2021-10-21 2021-10-21 Method and device for reading vital sign data in monitoring equipment

Publications (1)

Publication Number Publication Date
CN113647920A true CN113647920A (en) 2021-11-16

Family

ID=78494769

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111225368.4A Pending CN113647920A (en) 2021-10-21 2021-10-21 Method and device for reading vital sign data in monitoring equipment

Country Status (1)

Country Link
CN (1) CN113647920A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117746463A (en) * 2023-12-20 2024-03-22 脉得智能科技(无锡)有限公司 Sign information identification method, system and electronic equipment

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103678916A (en) * 2013-12-12 2014-03-26 电子科技大学 Universal method and device for collecting custody data of custody instruments automatically
CN107516096A (en) * 2016-06-15 2017-12-26 阿里巴巴集团控股有限公司 A kind of character identifying method and device
CN109363642A (en) * 2012-12-18 2019-02-22 赛诺菲-安万特德国有限公司 The medical device of the optical pickup device of data is transmitted and received using data optical
CN110490195A (en) * 2019-08-07 2019-11-22 桂林电子科技大学 A kind of water meter dial plate Recognition of Reading method
CN111555939A (en) * 2020-04-28 2020-08-18 中国人民解放军总医院第四医学中心 Monitor information acquisition system
CN111860317A (en) * 2020-07-20 2020-10-30 青岛特利尔环保集团股份有限公司 Boiler operation data acquisition method, system, equipment and computer medium
CN112200160A (en) * 2020-12-02 2021-01-08 成都信息工程大学 Deep learning-based direct-reading water meter reading identification method
CN112270317A (en) * 2020-10-16 2021-01-26 西安工程大学 Traditional digital water meter reading identification method based on deep learning and frame difference method
CN112307919A (en) * 2020-10-22 2021-02-02 福州大学 Improved YOLOv 3-based digital information area identification method in document image
CN112348007A (en) * 2020-10-21 2021-02-09 杭州师范大学 Optical character recognition method based on neural network
CN112336305A (en) * 2020-09-30 2021-02-09 贵阳朗玛信息技术股份有限公司 Method and device for collecting sign data

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
LI Chenxuan et al.: "Intelligent ship vital part detection, trajectory prediction and pose estimation algorithms", Journal of Beijing University of Aeronautics and Astronautics *

Similar Documents

Publication Publication Date Title
Vijayalakshmi Deep learning approach to detect malaria from microscopic images
KR102058884B1 (en) Method of analyzing iris image for diagnosing dementia in artificial intelligence
CN111524137B (en) Cell identification counting method and device based on image identification and computer equipment
WO2023070447A1 (en) Model training method, image processing method, computing processing device, and non-transitory computer readable medium
CN112418329A (en) Cervical OCT image classification method and system based on multi-scale textural feature fusion
JP6945253B2 (en) Classification device, classification method, program, and information recording medium
CN111445457B (en) Network model training method and device, network model identification method and device, and electronic equipment
US20230005138A1 (en) Lumbar spine annatomical annotation based on magnetic resonance images using artificial intelligence
CN115359066B (en) Focus detection method and device for endoscope, electronic device and storage medium
US20230177698A1 (en) Method for image segmentation, and electronic device
US11721023B1 (en) Distinguishing a disease state from a non-disease state in an image
CN112949654A (en) Image detection method and related device and equipment
CN111325709A (en) Wireless capsule endoscope image detection system and detection method
KR102179090B1 (en) Method for medical diagnosis by using neural network
US11042772B2 (en) Methods of generating an encoded representation of an image and systems of operating thereof
CN113647920A (en) Method and device for reading vital sign data in monitoring equipment
CN113592769A (en) Abnormal image detection method, abnormal image model training method, abnormal image detection device, abnormal image model training device and abnormal image model training medium
CN113822846A (en) Method, apparatus, device and medium for determining region of interest in medical image
CN112634231A (en) Image classification method and device, terminal equipment and storage medium
Shen et al. Multicontext multitask learning networks for mass detection in mammogram
de Araújo et al. Automated detection of segmental glomerulosclerosis in kidney histopathology
EP4318497A1 (en) Training method for training artificial neural network for determining breast cancer lesion area, and computing system performing same
CN114049315A (en) Joint recognition method, electronic device, storage medium, and computer program product
CN113256556A (en) Image selection method and device
WO2019183712A1 (en) Methods of generating an encoded representation of an image and systems of operating thereof

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20211116