CN111860317A - Boiler operation data acquisition method, system, equipment and computer medium


Info

Publication number
CN111860317A
Authority
CN
China
Prior art keywords
image
area
operation data
boiler operation
hot
Prior art date
Legal status
Pending
Application number
CN202010699244.9A
Other languages
Chinese (zh)
Inventor
白彧
梅宁
李瑞波
刘思杰
郭成科
孙云国
金福
闫蕾
Current Assignee
Qingdao Trier Technology Co.,Ltd.
Original Assignee
Qingdao Clear Environmental Protection Group Co ltd
Priority date
Filing date
Publication date
Application filed by Qingdao Clear Environmental Protection Group Co ltd filed Critical Qingdao Clear Environmental Protection Group Co ltd
Priority to CN202010699244.9A priority Critical patent/CN111860317A/en
Publication of CN111860317A publication Critical patent/CN111860317A/en
Pending legal-status Critical Current


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/20Scenes; Scene-specific elements in augmented reality scenes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/084Backpropagation, e.g. using gradient descent
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/26Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/267Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/44Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/56Extraction of image or video features relating to colour
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/751Comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/10Character recognition
    • G06V30/14Image acquisition
    • G06V30/148Segmentation of character regions
    • G06V30/153Segmentation of character regions using recognition of characters or words

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computing Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Software Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • General Engineering & Computer Science (AREA)
  • Molecular Biology (AREA)
  • Mathematical Physics (AREA)
  • Data Mining & Analysis (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Image Analysis (AREA)

Abstract

The application discloses a boiler operation data acquisition method, system, equipment and computer-readable storage medium. The method acquires a target image containing boiler operation data; determines the image hot area in the target image that contains only boiler operation data; segments the image hot area from the target image to obtain a hot area image; and recognizes the hot area image to obtain the boiler operation data. Rather than collecting the boiler operation data in the target image over the OPC (OLE for Process Control) protocol, the method first determines the image hot area in the target image, then segments the image hot area from the target image to obtain the hot area image, and finally recognizes the hot area image to obtain the boiler operation data. No point-location communication data are needed and the user does not have to perform screening and elimination operations, so the acquisition efficiency of boiler operation data can be improved.

Description

Boiler operation data acquisition method, system, equipment and computer medium
Technical Field
The present application relates to the field of data acquisition technologies, and more particularly, to a method, system, device, and computer medium for acquiring boiler operation data.
Background
Currently, in the boiler industry, when boiler operation data need to be acquired, the common approach is to log in to the Distributed Control System (DCS) configuration management system with an administrator account and password, publish the boiler operation data acquired by the DCS through an OPC host, receive the data with an OPC slave configured in the same network segment and build a database from it, and then upload the boiler operation data to a cloud data space over the public network through an OPC client, so that the data can be stored and further applied.
However, acquiring data through the OPC communication protocol requires point-location communication data information, and this communication data often contains a large number of process calculation values and other point-location information, such as that of the power distribution system, all of which must be screened and eliminated by technical staff. As a result, the acquisition process for boiler operation data is complex and its efficiency is low.
In summary, how to improve the acquisition efficiency of the boiler operation data is a problem to be solved urgently by those skilled in the art.
Disclosure of Invention
The application aims to provide a boiler operation data acquisition method which can, to a certain extent, solve the technical problem of how to improve the acquisition efficiency of boiler operation data. The application also provides a boiler operation data acquisition system, equipment and a computer-readable storage medium.
In order to achieve the above purpose, the present application provides the following technical solutions:
a boiler operation data acquisition method, comprising:
acquiring a target image containing boiler operation data;
determining an image hot area only containing the boiler operation data in the target image;
segmenting the image hot area in the target image to obtain a hot area image;
and identifying the hot area image to obtain the boiler operation data.
Preferably, the determining an image hot area in the target image that contains only the boiler operation data includes:
identifying a region to be acquired in the target image based on a contour matching method;
and carrying out point location area division on the area to be acquired to obtain the image hot area.
Preferably, the identifying a region to be acquired in the target image based on the contour matching method includes:
matching the target image with a preset template image to obtain a matching area, wherein the template image represents the shape characteristics of the area to be acquired;
Calculating the matching degree of the matching area and the template image based on a normalized squared difference matching method;
determining the matching area with the highest matching degree as the area to be acquired;
the performing point location area division on the area to be acquired to obtain the image hot area comprises:
converting the RGB image of the area to be acquired into an HSV image to obtain a conversion area map;
performing median filtering on the conversion area map to obtain a filtering area map;
acquiring HSV ranges of red, yellow and green colors of preset boiler operation data;
finding out an image area consistent with the HSV range in the filtering area map;
and removing non-digital areas in the image area to obtain the image hot area.
Preferably, the removing the non-digital area in the image area to obtain the image hot area includes:
screening the data arrangement sequence of the image areas, and determining the image areas which accord with a preset arrangement sequence as the image hot areas;
and/or cutting out a symbol area in the image area to obtain the image hot area;
and/or determining an image area in the image area according with a preset data length as the image hot area.
Preferably, the determining an image hot area in the target image that contains only the boiler operation data includes:
acquiring a preset image with the same specification as the target image, wherein the preset image comprises a preset labeling area of the image hot area;
and carrying out region division on the target image according to the marked region to obtain the image hot region.
Preferably, the image hot area includes an area formed by pixel values of four boundaries, i.e., upper, lower, left and right boundaries.
Preferably, the identifying the hot zone image to obtain the boiler operation data includes:
carrying out sharpening processing on the hot area image to obtain a sharpened image;
converting the sharpened image into a grayscale image;
carrying out binarization processing on the gray level image to obtain a binarized image;
carrying out character segmentation on the binary image based on a projection character method to obtain a character segmentation image;
recognizing the character segmentation graph based on the trained neural network model to obtain a character recognition result;
and performing character combination on the character recognition result based on the character segmentation sequence to obtain the boiler operation data.
Preferably, the sharpening process on the hot area image to obtain a sharpened image includes:
performing edge sharpening on the hot area image based on the 5 x 5 array convolution kernel to obtain a sharpened image;
the binarization processing of the gray level image to obtain a binarized image comprises the following steps:
carrying out binarization processing on the gray level image based on the Otsu method (maximum between-class variance method) to obtain a binarized image;
the character segmentation is carried out on the binary image based on a projection character method to obtain a character segmentation image, and the method comprises the following steps:
black pixel accumulation statistics in the longitudinal coordinate direction and the transverse coordinate direction are respectively carried out on the binary image, and pixel distribution histograms in the vertical direction and the horizontal direction are obtained;
and carrying out character segmentation on the binary image based on the pixel gaps of the pixel distribution histogram to obtain the character segmentation image.
Preferably, the neural network model comprises a model built with the GoogLeNet Inception-v3 network structure, the output of the neural network model is 12 probability values, and the 12 probability values respectively represent the probability that the input of the neural network model is one of the digits 0 to 9, a decimal point, or a negative sign;
The method for recognizing the character segmentation graph based on the trained neural network model to obtain a character recognition result comprises the following steps:
identifying the character segmentation graph based on the neural network model to obtain a probability value corresponding to the character segmentation graph;
and determining the character recognition result based on the probability value corresponding to the character segmentation graph.
Preferably, after the character segmentation graph is recognized based on the trained neural network model and a character recognition result is obtained, the method further includes:
determining a target segmentation graph which cannot be identified or is identified wrongly in the character segmentation graph according to the character identification result;
retraining the neural network model based on the target segmentation graph.
A boiler operational data acquisition system comprising:
the first acquisition module is used for acquiring a target image containing boiler operation data;
the first determining module is used for determining an image hot area only containing the boiler operation data in the target image;
the first segmentation module is used for segmenting the image hot area in the target image to obtain a hot area image;
and the first identification module is used for identifying the hot zone image to obtain the boiler operation data.
A boiler operation data acquisition apparatus comprising:
a memory for storing a computer program;
a processor for implementing the steps of the boiler operation data acquisition method as described in any one of the above when the computer program is executed.
A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the steps of the boiler operation data acquisition method as set forth in any one of the preceding claims.
The application provides a boiler operation data acquisition method comprising: acquiring a target image containing boiler operation data; determining the image hot area in the target image that contains only boiler operation data; segmenting the image hot area from the target image to obtain a hot area image; and recognizing the hot area image to obtain the boiler operation data. Rather than collecting the boiler operation data in the target image over the OPC (OLE for Process Control) protocol, the method first determines the image hot area in the target image, then segments it to obtain the hot area image, and finally recognizes the hot area image to obtain the boiler operation data; no point-location communication data are needed and the user does not have to perform screening and elimination operations, so the acquisition efficiency of boiler operation data can be improved. The boiler operation data acquisition system, equipment and computer-readable storage medium provided by the application solve the corresponding technical problems.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed to be used in the description of the embodiments or the prior art will be briefly introduced below, it is obvious that the drawings in the following description are only embodiments of the present application, and for those skilled in the art, other drawings can be obtained according to the provided drawings without creative efforts.
FIG. 1 is a first flowchart of a method for collecting operational data of a boiler according to an embodiment of the present disclosure;
FIG. 2 is an example diagram of a template image;
FIG. 3 is an example diagram of the region to be acquired corresponding to FIG. 2;
FIG. 4 is another example diagram of a template image;
FIG. 5 is a diagram of the image region corresponding to FIG. 4;
FIG. 6 is a second flowchart of a method for collecting operational data of a boiler according to an embodiment of the present application;
FIG. 7 is a sharpened image obtained by sharpening a hot zone image characterizing boiler slurry supply flow hot zone data having a value of 0.16 according to the method of the present application;
FIG. 8 is a gray scale image converted from the sharpened image shown in FIG. 7;
FIG. 9 is a binarized image resulting from the conversion of the grayscale image shown in FIG. 8;
FIG. 10 is a histogram of pixel distribution obtained from the binarized image shown in FIG. 9;
FIG. 11 is a character segmentation view corresponding to FIG. 10;
FIG. 12 is a block diagram of a CNN neural network;
FIG. 13 is a diagram of a convolution process;
FIG. 14 is a diagram of a pooling process;
FIG. 15 is a schematic diagram of a convolutional layer of the GoogLeNet Inception-v3 network architecture;
FIG. 16 shows the training set for the digit '5';
FIG. 17 is a schematic structural diagram of a boiler operation data acquisition system according to the present application;
FIG. 18 is a schematic structural diagram of a boiler operation data acquisition device according to an embodiment of the present application;
fig. 19 is another schematic structural diagram of a boiler operation data acquisition device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
Referring to fig. 1, fig. 1 is a first flowchart of a boiler operation data acquisition method according to an embodiment of the present application.
The boiler operation data acquisition method provided by the embodiment of the application can comprise the following steps:
step S101: a target image containing boiler operating data is acquired.
In practical application, a target image may be obtained first, for example, a target image including boiler operation data acquired by the DCS system may be obtained first, and both a process of obtaining the target image by the DCS system and a pattern of the target image may be determined according to actual needs.
In a specific application scenario, when acquiring the target image collected by the DCS, a KVM virtual host can be connected to each monitoring host of the monitoring station through HDMI video cables; the images of the hosts can then be switched at will as required without affecting them, and the target image is captured as a screenshot.
Step S102: and determining the image hot area only containing the boiler operation data in the target image.
In practical application, after a target image which is acquired by the DCS and contains boiler operation data is obtained, an image hot area which only contains the boiler operation data in the target image can be determined, and therefore independent boiler operation data can be directly obtained from the image hot area subsequently. It should be noted that in order to facilitate the distinction of the individual boiler operation data, it may be set that one image hot zone contains only one boiler operation data, etc.
Step S103: and segmenting the image hot area in the target image to obtain a hot area image.
In practical application, after the image hot area only containing the boiler operation data in the target image is determined, in order to facilitate acquisition of the boiler operation data, the image hot area in the target image can be segmented to obtain hot area images only containing the boiler operation data.
Step S104: and identifying the hot area image to obtain boiler operation data.
In practical application, after the image hot area in the target image has been segmented to obtain the hot area image, the hot area image can be recognized to obtain the boiler operation data. It should be noted that, in a specific application scenario, recognizing the hot zone image may yield only a bare numerical value without the type of boiler operation data it represents; in that case the type must be obtained separately so that the value and the type it represents can be combined into complete boiler operation data. The parameter types of the boiler operation data can be chosen according to actual needs and may include, for example, the temperature, pressure, flow velocity, dust, SO2, O2, NOx and ammonia slip of the original emission and of the flue gas outlet, boiler furnace combustion data, and so on; the boiler furnace combustion data may in turn include the temperature and pressure of the upper part of the furnace and of the outlet, the temperature and pressure of each thermal power device in the furnace tail duct, etc.
The application provides a boiler operation data acquisition method comprising: acquiring a target image containing boiler operation data; determining the image hot area in the target image that contains only boiler operation data; segmenting the image hot area from the target image to obtain a hot area image; and recognizing the hot area image to obtain the boiler operation data. Rather than collecting the boiler operation data in the target image over the OPC (OLE for Process Control) protocol, this method first determines the image hot area in the target image, then segments it to obtain the hot area image, and finally recognizes the hot area image to obtain the boiler operation data; no point-location communication data are needed and the user does not have to perform screening and elimination operations, so the acquisition efficiency of boiler operation data can be improved.
According to the boiler operation data acquisition method provided by the embodiment of the application, when the image hot zone only containing boiler operation data in the target image is determined, the area to be acquired in the target image can be identified based on the contour matching method; and performing point location area division on the area to be acquired to obtain an image hot area.
In practical application, when the region to be acquired in the target image is identified based on the contour matching method, the method may include: matching the target image with a preset template image to obtain a matching area, wherein the template image represents the shape characteristics of the area to be acquired; calculating the matching degree of the matching area and the template image based on a normalized squared difference matching method; and determining the matching area with the highest matching degree as the area to be acquired.
For ease of understanding, assume the boiler operation data are boiler furnace combustion data; the template image may then take the form shown in fig. 2. When the matching degree between the matching region and the template image is calculated with the normalized squared difference matching method, the template image is slid over the target image to obtain matching regions, and during this process the matching degree between each matching region and the template image is computed pixel position by pixel position. The calculation formula may be:
R(x, y) = \frac{\sum_{x', y'} \left( T(x', y') - I(x + x', y + y') \right)^2}{\sqrt{\sum_{x', y'} T(x', y')^2 \cdot \sum_{x', y'} I(x + x', y + y')^2}}
where T denotes the template image and x', y' denote pixel coordinates within the template image; I denotes the matching region, R(x, y) is the metric matrix, and R(x, y) expresses the matching degree at the pixel position (x, y). The resulting region to be acquired may look as shown in fig. 3. It should be noted that the template image in fig. 2 contains only the boiler furnace; in a specific application scenario, however, the template image may also contain other features besides the boiler furnace. For example, the template image may be the boiler furnace and cyclone separator template shown in fig. 4; in that case the target image can first be matched against the template image of fig. 4 to obtain the image area shown in fig. 5, and then matched against the template image of fig. 2 to obtain the region to be acquired shown in fig. 3, and so on.
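As a hedged illustration of this template-matching step, the sketch below uses OpenCV's TM_SQDIFF_NORMED mode, which implements the normalized squared difference measure above; the file names and the assumption that the whole screenshot is matched in one pass are illustrative only, not taken from the application.

```python
import cv2

# Load the DCS screenshot and the furnace template (file names are assumptions).
target = cv2.imread("target_screenshot.png")
template = cv2.imread("furnace_template.png")

# Normalized squared difference: lower values mean a better match.
result = cv2.matchTemplate(target, template, cv2.TM_SQDIFF_NORMED)
min_val, max_val, min_loc, max_loc = cv2.minMaxLoc(result)

# The best-matching region (highest matching degree) becomes the region to be acquired.
h, w = template.shape[:2]
x, y = min_loc
region_to_acquire = target[y:y + h, x:x + w]
```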
In practical application, when point location area division is carried out on an area to be acquired to obtain an image hot area, an RGB image of the area to be acquired can be converted into an HSV image to obtain a conversion area map; carrying out median filtering on the conversion area graph to obtain a filtering area graph; acquiring HSV ranges of red, yellow and green colors of preset boiler operation data; in the filtering area graph, finding out an image area consistent with the HSV range; non-digital areas in the image area are removed to obtain an image hot area.
It should be noted that the RGB color model used by RGB images is an industry color standard in which a wide range of colors is obtained by varying the three color channels red (R), green (G) and blue (B) and superimposing them on one another; RGB stands for the colors of the red, green and blue channels, and this standard covers almost all colors perceivable by human vision. HSV (Hue, Saturation, Value), the color space underlying HSV images, was created by A. R. Smith in 1978 according to the intuitive properties of color and is also called the hexagonal cone model (Hexcone Model); its color parameters are hue (H), saturation (S) and value (V). The basic idea of median filtering is to replace the gray value of a pixel with the median of the gray values in its neighborhood, so that the surrounding pixel values approach the true value; this eliminates isolated noise points and preserves the edge details of the image while filtering out impulse noise and salt-and-pepper noise.
For ease of understanding, still assume the boiler operation data are boiler furnace combustion data. Images actually collected by the DCS system show that the boiler furnace combustion data in the target image are displayed in red, green and yellow and that the point location data are arranged horizontally, so the combustion data in the region to be acquired are likewise red, green and yellow and horizontally arranged. Therefore the region to be acquired can be converted from an RGB image into an HSV image to obtain the conversion area map; median filtering is applied to the conversion area map to obtain the filtering area map; the preset HSV ranges of the red, yellow and green colors characterizing the boiler operation data are obtained, and they may be as shown in Table 1, where Hmin and Hmax denote the minimum and maximum hue, Smin and Smax the minimum and maximum saturation, and Vmin and Vmax the minimum and maximum brightness; the image areas consistent with these HSV ranges are found in the filtering area map; and the non-digital areas in those image areas are removed to obtain the image hot areas.
TABLE 1  HSV ranges of boiler furnace combustion data

          Red    Green   Yellow
  Hmin      0      35       10
  Hmax     10      77       45
  Smin    100     110      100
  Smax    255     255      255
  Vmin    100     106      100
  Vmax    255     255      255
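As one possible realization of the hot-area extraction described above, the sketch below converts the region to be acquired to HSV, applies a median filter, and masks the pixels falling in the red range of Table 1; the green and yellow ranges are handled the same way. The variable names and the 5-pixel median kernel are assumptions for illustration.

```python
import cv2
import numpy as np

# region_to_acquire is the BGR crop obtained by template matching.
hsv = cv2.cvtColor(region_to_acquire, cv2.COLOR_BGR2HSV)   # conversion area map
hsv = cv2.medianBlur(hsv, 5)                                # filtering area map

# Red range from Table 1: H 0-10, S 100-255, V 100-255.
lower_red = np.array([0, 100, 100])
upper_red = np.array([10, 255, 255])
mask = cv2.inRange(hsv, lower_red, upper_red)

# Connected regions of the mask are candidate image hot areas; the non-digital
# regions are removed afterwards by the screening rules described below.
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
```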
In practical application, the step of removing the non-digital area in the image area to obtain the image hot area may specifically be: screening the data arrangement sequence of the image areas, and determining the image areas which accord with the preset arrangement sequence as image hot areas; and/or cutting out a symbol area in the image area to obtain an image hot area; and/or determining an image area in the image area which accords with the preset data length as an image hot area.
Still assuming the boiler operation data are boiler furnace combustion data: in the image collected by the DCS system, the data displayed in green also include the steam drum display, but only the boiler operation data are arranged horizontally, so the green image areas can be screened by data arrangement order and the horizontally arranged areas taken as image hot areas. Correspondingly, the tail of the image areas displayed in yellow contains a symbol, so the symbol area can be cut off from the yellow image areas to obtain the image hot areas. In addition, the image areas displayed in red include the furnace flame color; however, the data length of the flame color differs greatly from that of the boiler furnace combustion data, so the red image areas that conform to a preset data length can be determined as image hot areas, for example image areas with a data length of less than 100 pixels.
In the boiler operation data acquisition method provided by this embodiment, the size of the image collected by the DCS system does not change, the display positions of the various boiler operation parameters on the image do not change, and only their specific values change. The display positions of the boiler operation parameters in the DCS image can therefore be determined in advance, and the image hot areas of the target image can be determined directly and quickly from those positions. That is, the step of determining the image hot area in the target image that contains only the boiler operation data may specifically be: acquiring a preset image with the same specification as the target image, the preset image containing preset labeled regions of the image hot areas; and dividing the target image into regions according to the labeled regions to obtain the image hot areas.
In practical applications, the style of the labeled region can be chosen according to actual needs; for example, the labeled region may be a closed region formed by lines, or a region defined by the pixel values of its four boundaries (upper, lower, left and right), etc. It should be noted that the style of the labeled region determines the style of the image hot area. Preferably, to make it easy to record and determine an image hot area, the image hot area in this application may be a region defined by the pixel values of its four boundaries; in that case only these four pixel values need to be recorded to determine one image hot area.
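Because a hot area recorded in this way is just four boundary pixel values, segmenting it from the target image reduces to a single array slice. A minimal sketch, assuming the labeled hot areas are stored as (top, bottom, left, right) tuples with illustrative names and coordinates:

```python
# Each annotated image hot area is stored as four boundary pixel values
# (the key names and coordinates below are hypothetical).
hot_areas = {
    "furnace_upper_temperature": (120, 140, 300, 360),
}

def crop_hot_area(target_image, boundaries):
    """Segment one image hot area from the target image."""
    top, bottom, left, right = boundaries
    return target_image[top:bottom, left:right]
```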
Referring to fig. 6, fig. 6 is a second flowchart of a boiler operation data acquisition method according to an embodiment of the present application.
The boiler operation data acquisition method provided by the embodiment of the application can comprise the following steps:
step S201: and acquiring a target image which is acquired by the DCS and contains boiler operation data.
Step S202: and determining the image hot area only containing the boiler operation data in the target image.
Step S203: and segmenting the image hot area in the target image to obtain a hot area image.
Step S204: and carrying out sharpening processing on the hot area image to obtain a sharpened image.
In practical application, in order to make the image contour of the hot area image clearer and facilitate subsequent segmentation, sharpening processing may be performed on the hot area image to obtain a sharpened image. And in a specific application scene, edge sharpening can be performed on the hot area image based on the 5 x 5 array convolution kernel to obtain a sharpened image. Referring to fig. 7, fig. 7 is a sharpened image obtained by sharpening a hot-zone image representing boiler slurry supply flow hot-zone data with a value of 0.16 according to the method of the present application.
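Edge sharpening with a 5 x 5 convolution kernel can be expressed as one filtering call; the kernel below is an illustrative Laplacian-style sharpening kernel and not necessarily the exact kernel used by the application:

```python
import cv2
import numpy as np

# Illustrative 5x5 sharpening kernel: strong centre, negative surround, sums to 1.
kernel = -1.0 * np.ones((5, 5), np.float32)
kernel[2, 2] = 25.0

hot_area_image = cv2.imread("hot_area.png")           # segmented hot-area image (file name assumed)
sharpened = cv2.filter2D(hot_area_image, -1, kernel)  # sharpened image
```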
Step S205: and converting the sharpened image into a gray-scale image.
In practical application, the color of each pixel in a color image is determined by the three components R, G and B, each of which can take values from 0 to 255, so the color range of each pixel is wide, the data volume is large, and subsequent computation on the image is heavy. To reduce the amount of computation, the sharpened image can be converted into a grayscale image, i.e. a single-channel picture. Referring to fig. 8, fig. 8 is the grayscale image converted from the sharpened image shown in fig. 7.
Step S206: and carrying out binarization processing on the gray level image to obtain a binarized image.
In practical application, the grayscale image still contains both foreground and background, so the region of interest is indistinct. To distinguish the foreground from the background more clearly and give the whole image a sharp black-and-white appearance, the grayscale image can be binarized to obtain a binarized image. In a specific application scenario, the grayscale image can be binarized with the Otsu method (maximum between-class variance method) to obtain the binarized image. Referring to fig. 9, fig. 9 is the binarized image obtained by converting the grayscale image shown in fig. 8.
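Grayscale conversion and Otsu binarization are each a single call in common image libraries; a minimal sketch assuming OpenCV, continuing from the sharpened image of the previous sketch:

```python
import cv2

# 'sharpened' is the output of the sharpening sketch above.
gray = cv2.cvtColor(sharpened, cv2.COLOR_BGR2GRAY)  # grayscale image

# Otsu's method picks the threshold automatically; the 0 passed here is ignored.
_, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
```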
Step S207: and carrying out character segmentation on the binary image based on a projection character method to obtain a character segmentation image.
In practical application, in order to facilitate character recognition of the binarized image, the binarized image may be subjected to character segmentation, and the binarized image may be subjected to character segmentation based on a projection character method to obtain a character segmentation map containing only a single character.
In a specific application scene, black pixel accumulation statistics in the longitudinal coordinate direction and the transverse coordinate direction can be respectively carried out on the binary image, and pixel distribution histograms in the vertical direction and the horizontal direction are obtained; and then, based on the pixel gaps of the pixel distribution histogram, performing character segmentation on the binary image to obtain a character segmentation image. Referring to fig. 10 and 11, fig. 10 is a pixel distribution histogram obtained from the binarized image shown in fig. 9, and in fig. 10, the position between two adjacent white regions is the pixel gap of the pixel distribution histogram; fig. 11 is a character segmentation diagram corresponding to fig. 10.
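Projection-based segmentation amounts to summing the foreground pixels of each column and cutting at the zero-valued gaps; the sketch below is a minimal version that assumes white characters on a black background after binarization:

```python
import numpy as np

def segment_characters(binary):
    """Split a binarized hot-area image into single-character images
    using the horizontal projection (column-wise foreground pixel counts)."""
    projection = (binary > 0).sum(axis=0)        # foreground pixels per column
    chars, start = [], None
    for x, count in enumerate(projection):
        if count > 0 and start is None:          # entering a character
            start = x
        elif count == 0 and start is not None:   # leaving a character at a pixel gap
            chars.append(binary[:, start:x])
            start = None
    if start is not None:                        # character touching the right edge
        chars.append(binary[:, start:])
    return chars
```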
Step S208: and recognizing the character segmentation graph based on the trained neural network model to obtain a character recognition result.
Step S209: and performing character combination on the character recognition results based on the sequence of character segmentation to obtain boiler operation data.
In practical application, the neural network method has the advantages of high accuracy, high speed, capability of retraining pictures with wrong recognition and the like when used for recognizing pictures, so that after the character segmentation graph is obtained, the character segmentation graph can be recognized based on the trained neural network model to obtain a character recognition result; and performing character combination on the character recognition results based on the sequence of character segmentation to obtain boiler operation data.
In the boiler operation data acquisition method provided by this embodiment, the neural network model may be a model built with the GoogLeNet Inception-v3 network structure. The output of the neural network model is 12 probability values, which respectively represent the probability that the input of the neural network model is one of the digits 0 to 9, a decimal point, or a negative sign;
correspondingly, the step of recognizing the character segmentation graph based on the trained neural network model to obtain the character recognition result may specifically be: identifying the character segmentation graph based on the neural network model to obtain a probability value corresponding to the character segmentation graph; and determining a character recognition result based on the probability value corresponding to the character segmentation graph.
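One way to obtain such a 12-class recognizer is to take a standard Inception-v3 implementation and replace its final fully connected layer; the sketch below uses torchvision for illustration and is not necessarily the exact network configuration of the application (torchvision's Inception-v3 expects 299 x 299 inputs, so the character images would be resized accordingly):

```python
import torch
import torch.nn as nn
from torchvision import models

# Inception-v3 backbone with a 12-way head:
# digits 0-9, a decimal point and a negative sign.
model = models.inception_v3(weights=None)        # untrained backbone (recent torchvision API)
model.fc = nn.Linear(model.fc.in_features, 12)   # replace the 1000-class head

def recognise(char_batch: torch.Tensor) -> torch.Tensor:
    """Return 12 probability values per character image: (N, 3, 299, 299) -> (N, 12)."""
    model.eval()                                 # aux logits are not returned in eval mode
    with torch.no_grad():
        return torch.softmax(model(char_batch), dim=1)
```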
In practical application, in order to improve the recognition capability of the neural network model in the application, the character segmentation graph is recognized based on the trained neural network model, and after a character recognition result is obtained, a target segmentation graph which cannot be recognized or is wrongly recognized in the character segmentation graph can be determined according to the character recognition result; and re-training the neural network model based on the target segmentation graph.
The application of the neural network model provided by this embodiment can be divided into an offline modeling process and an online recognition process. In the offline modeling process, the neural network model can be trained with the 12 classes of pictures collected and segmented in advance; the goal of training is to give the network a specific data distribution, and the process has two steps: forward propagation and backward propagation. Since the parameters of the model are random before training starts, forward propagation is performed first: the output is obtained from the input through the parameter calculations of the network structure. Backward propagation is then performed: the cross-entropy loss function is chosen to measure the difference between the probability distribution Q(x) computed by forward propagation and the real probability distribution P(x), and its formula can be
H(p, q) = -\sum_{i=1}^{n} p(x_i) \log q(x_i)
where x_i denotes each classification category (there are 12 categories in this example), H(p, q) represents the difference between the two distributions, and n is the number of categories summed over. The loss function is the objective function of neural network optimization, and training or optimizing the neural network is the process of minimizing the loss function: the smaller the value of the loss function, the closer the predicted result is to the real result. The method used in this application to minimize the loss function is the gradient descent algorithm, whose formula is:
\theta_1 = \theta_0 - \alpha \nabla J(\theta_0)
where J(θ) is the loss function, α is the step size, and the current position is θ0. Starting from this point, to descend to where the loss function is minimal, the direction of the gradient is determined first; the negative sign in the formula indicates the descent direction. A step of length α is then taken, and after this step θ1 is reached. The whole training process can therefore be described as follows: the matrix of the picture is first propagated forward with the network's initial random parameters, and the parameters of the network structure are then continuously optimized with the gradient descent algorithm until the value of the loss function falls below a certain threshold.
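The two formulas above can be made concrete in a few lines of NumPy; this toy sketch computes the cross-entropy between a one-hot target and a softmax output over the 12 categories and applies one gradient-descent step to the logits, which illustrates the update rule but is not the full back-propagation through the network:

```python
import numpy as np

def cross_entropy(p, q, eps=1e-12):
    """H(p, q) = -sum_i p(x_i) * log q(x_i) over the 12 categories."""
    return -np.sum(p * np.log(q + eps))

def gradient_descent_step(theta, grad, alpha=0.01):
    """theta_1 = theta_0 - alpha * dJ/dtheta."""
    return theta - alpha * grad

# Toy example: the true class is the digit '5' (index 5) among the 12 categories.
p = np.zeros(12); p[5] = 1.0                 # real distribution P(x)
logits = np.random.randn(12)                 # network's forward-pass output
q = np.exp(logits) / np.exp(logits).sum()    # predicted distribution Q(x)
loss = cross_entropy(p, q)

# For softmax + cross-entropy, the gradient w.r.t. the logits is simply q - p.
logits = gradient_descent_step(logits, q - p)
```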
For ease of understanding, the GoogLeNet Inception-v3 network structure is described first. The structure is based on the convolutional neural network (CNN): as shown in fig. 12, the input image is a matrix in which each point represents the pixel value at that point; convolution and pooling operations are performed on this matrix to extract image features and reduce the data dimensionality, and finally a fully connected (Fully Connected) layer is applied to the output feature maps to output the probability value of each category, completing the recognition of a character segmentation map.
The convolutional and pooling layers included in the neural network are described below:
the function of the convolutional layer is to extract features from the input data. A convolutional layer contains several convolution kernels, and every element of a kernel has a weight coefficient (W) and a bias (b). During forward propagation through a convolutional layer, assume the n-th layer is a convolutional layer and the (n-1)-th layer is a pooling layer or the input layer; the calculation of the n-th layer is:
x_j^{n} = f\left( \sum_{i} x_i^{n-1} * k_{ij}^{n} + b_j^{n} \right)
where x_j^n is the j-th feature map of the n-th layer: the associated feature maps x_i^{n-1} of the (n-1)-th layer are convolved with the j-th convolution kernels k_{ij}^n of the n-th layer and summed, the bias parameter b is added, and the excitation function f() is applied. The height and width of the image after the convolution calculation are:
W_2 = (W_1 - K + 2P)/S + 1,\quad H_2 = (H_1 - K + 2P)/S + 1
where the subscripts 1 and 2 denote the input and the output respectively, W is the image width, H is the image height, K is the convolution kernel size, S is the stride of the convolution operation, and P is the padding size. Assume the input is a single-channel 5 x 5 picture and the convolution kernel size is 3 x 3; the forward propagation calculation is shown in fig. 13. The first output value in the left part of fig. 13 is obtained by multiplying the points selected by the convolution kernel element-wise and summing them, giving 4; the kernel is then slid one step to the right, as shown in the right part of fig. 13, and the second output value is computed, giving 3. Through this dot product between matrices, features are extracted from each area of the image and the data dimensionality is effectively reduced. It should be noted that in practice the size and number of convolution kernels and the sliding stride must be set for each convolution layer, and the number of convolution layers in the network must also be set according to requirements.
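The output-size formula can be checked with a small helper; for the 5 x 5 single-channel input and 3 x 3 kernel of fig. 13, with stride 1 and no padding, it yields a 3 x 3 feature map:

```python
def conv_output_size(w1, h1, k, s=1, p=0):
    """W2 = (W1 - K + 2P) / S + 1, H2 = (H1 - K + 2P) / S + 1."""
    return (w1 - k + 2 * p) // s + 1, (h1 - k + 2 * p) // s + 1

print(conv_output_size(5, 5, 3))  # -> (3, 3), matching the example of fig. 13
```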
After feature extraction in the convolutional layer, the output feature maps are passed to the pooling layer for feature selection and information filtering. The pooling layer contains a preset pooling function whose role is to replace the result of a single point in the feature map with a statistic of its neighboring region. During forward propagation through a pooling layer, assume the n-th layer is a pooling layer and the (n-1)-th layer is a convolutional layer; the calculation of the n-th layer is:
x_j^{n} = f\left( \beta_j^{n} \, \mathrm{down}(x_j^{n-1}) + b_j^{n} \right)
where down() is a down-sampling function, generally one that sums all pixels within an n x n pixel block; since down-sampling is applied to non-overlapping areas of the image, the resulting feature map is 1/n of the original size in each dimension. β denotes the weight; for a 2 x 2 average pooling function the weight β takes the value 1/4. It can be seen that the number of feature maps passing through the pooling layer is unchanged, but each map shrinks to 1/4 of its original size for 2 x 2 pooling. In general, the pooling-layer weight β is fixed and there is no bias term b and no excitation function f(). Assume the input is a single-channel 4 x 4 picture and max pooling of size 2 x 2 with a sliding stride of 2 is chosen; the forward propagation calculation is shown in fig. 14: the maximum value of each 2 x 2 area in the left part of fig. 14 is selected, and the output is the right part of fig. 14. The advantage of pooling is that it not only further reduces the dimensionality of the convolved image, but also helps the representation of the input remain approximately invariant when the input is translated by a small amount; pooling therefore gives the processed image local translation invariance.
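The 2 x 2 max-pooling operation of fig. 14 can be reproduced directly; the input matrix below is illustrative rather than the one shown in the figure:

```python
import numpy as np

def max_pool_2x2(x):
    """Non-overlapping 2x2 max pooling with stride 2."""
    h, w = x.shape
    return x[:h - h % 2, :w - w % 2].reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

x = np.array([[1, 3, 2, 4],
              [5, 6, 1, 0],
              [7, 2, 9, 8],
              [3, 4, 6, 5]])
print(max_pool_2x2(x))  # -> [[6 4]
                        #     [7 9]]
```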
In CNN, each convolution operation is calculated from convolution (Con) + pooling (pool), i.e. the first two steps in fig. 12. Typically, a non-linear function is added after each layer of convolution to increase the non-linear capability of the model; in fig. 12, there is also a Fully Connected layer (Fully Connected) whose purpose is to expand the output of the previous layer into one-dimensional data and connect to each neuron to make the final output.
The GoogLeNet Inception-v3 used in this application is based on the CNN convolutional neural network, with the following improvements:
1. Because the size of the convolution kernel directly affects recognition accuracy, this architecture lets the network determine the kernel size of each layer from the training set: the complexity of the network is increased by widening it rather than deepening it, which avoids the pitfall of hand-picking a single kernel size. Specifically, 1 x 1, 3 x 3 and 5 x 5 convolution kernels are applied in parallel, together with pooling, so that features of different scales are extracted simultaneously, as shown in fig. 15.
2. In the training and classification process, to avoid the overfitting that comes with increasing model complexity, a 1 x 1 convolution is added before the convolution of each layer. A 1 x 1 convolution has no effect on the image content itself; mathematically it is only the simplest matrix multiplication, and its most important role is to reduce the number of feature maps for dimensionality reduction. It is thanks to the 1 x 1 convolution kernels that the weight parameters of GoogLeNet Inception-v3 amount to only 1/36 of those of VGGNet at the same recognition rate, which considerably saves training time as well as storage and computing requirements on the device.
3. The GoogLeNet Inception-v3 network framework splits a large two-dimensional convolution into two smaller one-dimensional convolutions, for example a 7 x 7 convolution into a 1 x 7 convolution and a 7 x 1 convolution, or a 3 x 3 convolution into a 1 x 3 convolution and a 3 x 1 convolution. This saves a large number of parameters, speeds up computation and reduces overfitting, while at the same time adding a layer of nonlinearity that expands the expressive capacity of the model.
In this application, when training the neural network model built with the GoogLeNet Inception-v3 network framework, 2880 single-character images segmented from boiler DCS operation screens can be used as the training set, with 360 images each in the validation set and the test set; the model input is a single character image of 227 x 277 pixels. Fig. 16 shows the training set for the digit '5'; the other characters are handled in the same way. The output of the network is 12 probability values, corresponding to the likelihood that the photo belongs to each of the 12 categories: the digits 0-9, the decimal point and the negative sign.
In addition, tests show that the boiler operation data acquisition method provided by this application achieves a segmentation accuracy of 98%, a neural network recognition accuracy of 99.9% and an overall recognition accuracy of more than 97%, i.e. good recognition accuracy.
Referring to fig. 17, fig. 17 is a schematic structural diagram of a boiler operation data acquisition system according to the present application.
The boiler operation data acquisition system that this application embodiment provided can include:
a first obtaining module 101, configured to obtain a target image including boiler operation data;
the first determining module 102 is configured to determine an image hot area only containing boiler operation data in the target image;
the first segmentation module 103 is configured to segment an image hot area in the target image to obtain a hot area image;
the first identification module 104 is used for identifying the hot zone image to obtain boiler operation data.
According to the boiler operation data acquisition system provided by the embodiment of the application, the first determining module can comprise:
the first identification submodule is used for identifying a region to be acquired in the target image based on a contour matching method;
and the first dividing module is used for dividing the point location area of the area to be acquired to obtain an image hot area.
In the boiler operation data acquisition system provided in the embodiment of the present application, the first identification submodule may include:
the first matching unit is used for matching the target image with a preset template image to obtain a matching area, and the template image represents the shape characteristics of the area to be acquired;
The first calculation unit is used for calculating the matching degree of the matching area and the template image based on a normalized squared difference matching method;
the first determining unit is used for determining the matching area with the highest matching degree as the area to be acquired;
the first dividing module may include:
the first conversion unit is used for converting the RGB image of the area to be acquired into an HSV image to obtain a conversion area map;
the first filtering unit is used for carrying out median filtering on the conversion area graph to obtain a filtering area graph;
the first acquisition unit is used for acquiring the respective HSV ranges of the red, yellow and green colors of the preset boiler operation data;
the first searching unit is used for searching an image area consistent with the HSV range in the filtering area map;
and the first processing sub-module is used for removing the non-digital area in the image area to obtain an image hot area.
In an embodiment of the present application, a first processing sub-module of the boiler operation data acquisition system may include:
the first screening unit is used for screening the data arrangement sequence of the image areas and determining the image areas which accord with the preset arrangement sequence as image hot areas; and/or cutting out a symbol area in the image area to obtain an image hot area; and/or determining an image area in the image area which accords with the preset data length as an image hot area.
According to the boiler operation data acquisition system provided by the embodiment of the application, the first determining module can comprise:
the second acquisition unit is used for acquiring a preset image with the same specification as the target image, and the preset image comprises a preset labeling area of the image hot area;
and the first dividing unit is used for dividing the target image into areas according to the marked areas to obtain an image hot area.
According to the boiler operation data acquisition system provided by the embodiment of the application, the image hot area can comprise an area formed by pixel values of four boundaries, namely an upper boundary, a lower boundary, a left boundary and a right boundary.
According to the boiler operation data acquisition system provided by the embodiment of the application, the first identification module can comprise:
the second processing submodule is used for carrying out sharpening processing on the hot area image to obtain a sharpened image;
the second conversion submodule is used for converting the sharpened image into a gray image;
the third processing submodule is used for carrying out binarization processing on the gray level image to obtain a binarization image;
the first segmentation submodule is used for carrying out character segmentation on the binary image based on a projection character method to obtain a character segmentation image;
the second recognition submodule is used for recognizing the character segmentation graph based on the trained neural network model to obtain a character recognition result;
And the first combination submodule is used for carrying out character combination on the character recognition result based on the character segmentation sequence to obtain boiler operation data.
In the boiler operation data acquisition system provided in the embodiment of the present application, the second processing submodule may include:
the first processing unit is used for carrying out edge sharpening on the hot area image based on the 5 x 5 array convolution kernel to obtain a sharpened image;
the third processing submodule may include:
the second processing unit is used for carrying out binarization processing on the gray level image based on the Otsu method (maximum between-class variance method) to obtain a binarization image;
the first segmentation sub-module may include:
the first statistical unit is used for performing black pixel accumulation statistics on the binary image in the longitudinal coordinate direction and the transverse coordinate direction respectively to obtain pixel distribution histograms in the vertical direction and the horizontal direction;
and the first segmentation unit is used for carrying out character segmentation on the binary image based on the pixel gaps of the pixel distribution histogram to obtain a character segmentation image.
According to the boiler operation data acquisition system provided by this embodiment, the neural network model comprises a model built with the GoogLeNet Inception-v3 network structure, the output of the neural network model is 12 probability values, and the 12 probability values respectively represent the probability that the input of the neural network model is one of the digits 0 to 9, a decimal point, or a negative sign;
The second identification submodule may include:
the first identification unit is used for identifying the character segmentation graph based on the neural network model to obtain a probability value corresponding to the character segmentation graph;
and the second determining unit is used for determining the character recognition result based on the probability value corresponding to the character segmentation graph.
The boiler operation data acquisition system provided by the embodiment of the application may also include:
the second determining module is used for determining, after the first identification module recognizes the character segmentation images based on the trained neural network model and obtains the character recognition result, target segmentation images among the character segmentation images that cannot be recognized or are recognized incorrectly according to the character recognition result;
and the first training module is used for retraining the neural network model based on the target segmentation images.
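A brief sketch of such retraining (Python/TensorFlow; the model path, label format and placeholder data are hypothetical):

```python
import numpy as np
import tensorflow as tf

# Load a previously trained character model ("char_model.h5" is a hypothetical path).
model = tf.keras.models.load_model("char_model.h5")

# target_segments: segmentation images that were unrecognizable or misrecognized,
# re-labelled by hand; target_labels: their correct class indices (0-11).
# Both arrays are placeholders for illustration only.
target_segments = np.zeros((8, 96, 96, 3), dtype=np.float32)
target_labels = np.array([0, 1, 2, 3, 4, 5, 6, 7])

# Retrain on the problem cases with a small learning rate so that the existing
# recognition behaviour is refined rather than overwritten.
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="sparse_categorical_crossentropy")
model.fit(target_segments, target_labels, epochs=3, batch_size=4)
```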
The application also provides a boiler operation data acquisition device and a computer-readable storage medium, which have effects corresponding to those of the boiler operation data acquisition method provided by the embodiments of the application. Referring to fig. 18, fig. 18 is a schematic structural diagram of a boiler operation data acquisition device according to an embodiment of the present application.
The boiler operation data acquisition device provided by the embodiment of the application comprises a memory 201 and a processor 202, wherein a computer program is stored in the memory 201, and the steps described in any one of the above embodiments are realized when the processor 202 executes the computer program.
Referring to fig. 19, another boiler operation data acquisition device provided in the embodiment of the present application may further include: an input port 203 connected to the processor 202 and used for transmitting externally input commands to the processor 202; a display unit 204 connected to the processor 202 and used for displaying the processing results of the processor 202 to the outside; and a communication module 205 connected to the processor 202 and used for communication between the boiler operation data acquisition device and the outside. The display unit 204 may be a display panel, a laser scanning display, or the like; the communication methods adopted by the communication module 205 include, but are not limited to, mobile high-definition link (MHL) technology, universal serial bus (USB), high-definition multimedia interface (HDMI), and wireless connections such as wireless fidelity (WiFi), Bluetooth, Bluetooth low energy, and IEEE 802.11s-based communication technologies.
The computer-readable storage medium provided in the embodiments of the present application stores a computer program, and the computer program, when executed by a processor, implements the steps described in any of the above embodiments.
The computer-readable storage media to which this application relates include random access memory (RAM), memory, read-only memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
For a description of relevant parts in the boiler operation data acquisition system, the boiler operation data acquisition equipment and the computer-readable storage medium provided in the embodiment of the present application, reference is made to detailed descriptions of corresponding parts in the boiler operation data acquisition method provided in the embodiment of the present application, and details are not repeated herein. In addition, parts of the above technical solutions provided in the embodiments of the present application, which are consistent with the implementation principles of corresponding technical solutions in the prior art, are not described in detail so as to avoid redundant description.
It is further noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the application. Thus, the present application is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (10)

1. A boiler operation data acquisition method, comprising:
acquiring a target image containing boiler operation data;
determining an image hot area only containing the boiler operation data in the target image;
segmenting the image hot area in the target image to obtain a hot area image;
and identifying the hot area image to obtain the boiler operation data.
2. The method of claim 1, wherein said determining an image hot area only containing the boiler operation data in the target image comprises:
matching the target image with a preset template image to obtain a matching area, wherein the template image represents the shape characteristics of the area to be acquired;
calculating the matching degree of the matching area and the template image based on a normalized squared difference matching method;
determining the matching area with the highest matching degree as an area to be acquired;
converting the RGB image of the area to be acquired into an HSV image to obtain a conversion area map;
performing median filtering on the conversion area map to obtain a filtering area map;
acquiring preset HSV ranges for the red, yellow and green colors of the boiler operation data;
finding out an image area consistent with the HSV range in the filtering area map;
and removing non-digital areas in the image area to obtain the image hot area.
3. The method of claim 2, wherein said removing non-digital areas in the image area to obtain the image hot area comprises:
screening the image areas by their data arrangement order, and determining the image areas that conform to a preset arrangement order as the image hot area;
and/or cutting out a symbol area in the image area to obtain the image hot area;
and/or determining an image area in the image area that conforms to a preset data length as the image hot area.
4. The method of claim 1, wherein said identifying the hot area image to obtain the boiler operation data comprises:
sharpening the hot area image to obtain a sharpened image;
converting the sharpened image into a grayscale image;
binarizing the grayscale image to obtain a binarized image;
performing character segmentation on the binarized image based on a projection method to obtain character segmentation images;
recognizing the character segmentation images based on a trained neural network model to obtain a character recognition result;
and combining the characters of the character recognition result in character segmentation order to obtain the boiler operation data.
5. The method of claim 4, wherein said sharpening the hot area image to obtain a sharpened image comprises:
performing edge sharpening on the hot area image based on a 5 × 5 convolution kernel to obtain the sharpened image;
said binarizing the grayscale image to obtain a binarized image comprises:
binarizing the grayscale image based on Otsu's method to obtain the binarized image;
said performing character segmentation on the binarized image based on a projection method to obtain character segmentation images comprises:
performing black-pixel accumulation statistics on the binarized image in the vertical and horizontal coordinate directions respectively to obtain pixel distribution histograms in the vertical and horizontal directions;
and performing character segmentation on the binarized image based on the gaps of the pixel distribution histograms to obtain the character segmentation images.
6. The method of claim 4, wherein the neural network model comprises a model built on the GoogLeNet Inception-v3 network structure, and the output of the neural network model is 12 probability values which respectively represent the probability that the input of the neural network model is one of the digits 0 to 9, a decimal point, or a negative sign;
said recognizing the character segmentation images based on the trained neural network model to obtain a character recognition result comprises:
recognizing the character segmentation images based on the neural network model to obtain the probability values corresponding to the character segmentation images;
and determining the character recognition result based on the probability values corresponding to the character segmentation images.
7. The method of claim 6, wherein after said recognizing the character segmentation images based on the trained neural network model and obtaining a character recognition result, the method further comprises:
determining, according to the character recognition result, target segmentation images among the character segmentation images that cannot be recognized or are recognized incorrectly;
and retraining the neural network model based on the target segmentation images.
8. A boiler operation data acquisition system, comprising:
the first acquisition module is used for acquiring a target image containing boiler operation data;
the first determining module is used for determining an image hot area only containing the boiler operation data in the target image;
the first segmentation module is used for segmenting the image hot area in the target image to obtain a hot area image;
and the first identification module is used for identifying the hot area image to obtain the boiler operation data.
9. A boiler operation data acquisition device, comprising:
a memory for storing a computer program;
a processor for implementing the steps of the boiler operation data acquisition method according to any one of claims 1 to 7 when executing said computer program.
10. A computer-readable storage medium, in which a computer program is stored which, when executed by a processor, carries out the steps of the boiler operation data acquisition method according to any one of claims 1 to 7.
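By way of illustration of the localization steps recited in claim 2 (template matching by the normalized squared difference method, HSV conversion, median filtering and color-range extraction), the following Python/OpenCV sketch is not part of the application; the file names, HSV thresholds and median-filter kernel size are assumptions of the example.

```python
import cv2
import numpy as np

target = cv2.imread("target_frame.png")        # target image (illustrative path)
template = cv2.imread("display_template.png")  # preset template of the area to be acquired

# Normalized squared-difference matching: the minimum of the result map is the
# best match, i.e. the matching area with the highest matching degree.
result = cv2.matchTemplate(target, template, cv2.TM_SQDIFF_NORMED)
_, _, min_loc, _ = cv2.minMaxLoc(result)
x, y = min_loc
h, w = template.shape[:2]
region = target[y:y + h, x:x + w]              # area to be acquired

# Convert the region (BGR in OpenCV) to HSV and apply median filtering.
hsv = cv2.medianBlur(cv2.cvtColor(region, cv2.COLOR_BGR2HSV), 5)

# Illustrative HSV ranges for red, yellow and green display digits.
ranges = {
    "red":    ((0, 120, 120), (10, 255, 255)),
    "yellow": ((20, 120, 120), (35, 255, 255)),
    "green":  ((45, 120, 120), (80, 255, 255)),
}
mask = np.zeros(hsv.shape[:2], np.uint8)
for lower, upper in ranges.values():
    mask |= cv2.inRange(hsv, np.array(lower), np.array(upper))

# Pixels falling inside any of the colour ranges form the candidate image area,
# from which non-digital areas would then be removed to obtain the image hot area.
candidate = cv2.bitwise_and(region, region, mask=mask)
```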
CN202010699244.9A 2020-07-20 2020-07-20 Boiler operation data acquisition method, system, equipment and computer medium Pending CN111860317A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010699244.9A CN111860317A (en) 2020-07-20 2020-07-20 Boiler operation data acquisition method, system, equipment and computer medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010699244.9A CN111860317A (en) 2020-07-20 2020-07-20 Boiler operation data acquisition method, system, equipment and computer medium

Publications (1)

Publication Number Publication Date
CN111860317A true CN111860317A (en) 2020-10-30

Family

ID=73001092

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010699244.9A Pending CN111860317A (en) 2020-07-20 2020-07-20 Boiler operation data acquisition method, system, equipment and computer medium

Country Status (1)

Country Link
CN (1) CN111860317A (en)


Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101038626A (en) * 2007-04-25 2007-09-19 上海大学 Method and device for recognizing test paper score
CN106909941A (en) * 2017-02-27 2017-06-30 广东工业大学 Multilist character recognition system and method based on machine vision
CN108009538A (en) * 2017-12-22 2018-05-08 大连运明自动化技术有限公司 A kind of automobile engine cylinder-body sequence number intelligent identification Method
CN108664996A (en) * 2018-04-19 2018-10-16 厦门大学 A kind of ancient writing recognition methods and system based on deep learning
CN109447055A (en) * 2018-10-17 2019-03-08 甘肃万维信息技术有限责任公司 One kind being based on OCR character recognition method familiar in shape
CN109726717A (en) * 2019-01-02 2019-05-07 西南石油大学 A kind of vehicle comprehensive information detection system
CN110929573A (en) * 2019-10-18 2020-03-27 平安科技(深圳)有限公司 Examination question checking method based on image detection and related equipment
CN111368943A (en) * 2020-05-27 2020-07-03 腾讯科技(深圳)有限公司 Method and device for identifying object in image, storage medium and electronic device

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112883648A (en) * 2021-02-23 2021-06-01 一汽解放汽车有限公司 Training method and device for automobile fuel consumption prediction model and computer equipment
CN112883648B (en) * 2021-02-23 2022-06-17 一汽解放汽车有限公司 Training method and device for automobile fuel consumption prediction model and computer equipment
CN113647920A (en) * 2021-10-21 2021-11-16 青岛美迪康数字工程有限公司 Method and device for reading vital sign data in monitoring equipment
CN117612078A (en) * 2023-10-08 2024-02-27 成都格理特电子技术有限公司 Image-based hearth flame detection method

Similar Documents

Publication Publication Date Title
CN107609549B (en) Text detection method for certificate image in natural scene
WO2021000702A1 (en) Image detection method, device, and system
CN111860317A (en) Boiler operation data acquisition method, system, equipment and computer medium
CN113658132B (en) Computer vision-based structural part weld joint detection method
US8254679B2 (en) Content-based image harmonization
CN110766017B (en) Mobile terminal text recognition method and system based on deep learning
JP5974589B2 (en) Image processing apparatus and program
CN110390643B (en) License plate enhancement method and device and electronic equipment
JP5337563B2 (en) Form recognition method and apparatus
CN112101386B (en) Text detection method, device, computer equipment and storage medium
CN110751619A (en) Insulator defect detection method
CN111401380A (en) RGB-D image semantic segmentation method based on depth feature enhancement and edge optimization
CN105225218A (en) For distortion correction method and the equipment of file and picture
CN113902641A (en) Data center hot area distinguishing method and system based on infrared image
JP6294524B1 (en) Image processing method and computer program
CN104268845A (en) Self-adaptive double local reinforcement method of extreme-value temperature difference short wave infrared image
Huang et al. M2-Net: multi-stages specular highlight detection and removal in multi-scenes
CN113723410B (en) Digital identification method and device for nixie tube
US11410278B2 (en) Automatic artifact removal in a digital image
JP6546385B2 (en) IMAGE PROCESSING APPARATUS, CONTROL METHOD THEREOF, AND PROGRAM
KR20110019117A (en) Semantic based image retrieval method
CN114511862B (en) Form identification method and device and electronic equipment
CN109141457A (en) Navigate appraisal procedure, device, computer equipment and storage medium
CN115359562A (en) Sign language letter spelling recognition method based on convolutional neural network
JP4967045B2 (en) Background discriminating apparatus, method and program

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20210601

Address after: 266073 room 402, Block E, 67 and 69 Yinchuan West Road, Shinan District, Qingdao City, Shandong Province

Applicant after: Qingdao Trier Technology Co.,Ltd.

Address before: 266071 e402, 403, No. 67, 69, Yinchuan West Road, Shinan District, Qingdao City, Shandong Province

Applicant before: QINGDAO CLEAR ENVIRONMENTAL PROTECTION GROUP Co.,Ltd.

RJ01 Rejection of invention patent application after publication

Application publication date: 20201030