CN112232235B - Intelligent factory remote monitoring method and system based on 5G and algorithm - Google Patents

Intelligent factory remote monitoring method and system based on 5G and algorithm Download PDF

Info

Publication number
CN112232235B
CN112232235B (application CN202011126971.2A)
Authority
CN
China
Prior art keywords
particle
particles
image
iteration
denotes
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011126971.2A
Other languages
Chinese (zh)
Other versions
CN112232235A (en)
Inventor
罗子尧
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
SHANGHAI LONGTIAN DIGITAL TECHNOLOGY Co.,Ltd.
Original Assignee
Liao Liufa
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Liao Liufa filed Critical Liao Liufa
Priority to CN202011126971.2A priority Critical patent/CN112232235B/en
Publication of CN112232235A publication Critical patent/CN112232235A/en
Application granted granted Critical
Publication of CN112232235B publication Critical patent/CN112232235B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/50 - Context or environment of the image
    • G06V20/52 - Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/22 - Matching criteria, e.g. proximity measures
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/004 - Artificial life, i.e. computing arrangements simulating life
    • G06N3/006 - Artificial life based on simulated virtual individual or collective life forms, e.g. social simulations or particle swarm optimisation [PSO]
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/20 - Image preprocessing
    • G06V10/26 - Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/267 - Segmentation of patterns in the image field by performing operations on regions, e.g. growing, shrinking or watersheds
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/40 - Extraction of image or video features
    • G06V10/50 - Extraction of image or video features by performing operations within image blocks; by using histograms, e.g. histogram of oriented gradients [HoG]; by summing image-intensity values; Projection analysis
    • G06V10/507 - Summing image-intensity values; Histogram projection analysis

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computational Linguistics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Evolutionary Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention belongs to the technical field of equipment running state monitoring, and particularly relates to a 5G- and algorithm-based intelligent factory remote monitoring method and system. A plurality of working areas are arranged in a factory, an image acquisition module is arranged in each working area, and the image acquisition module acquires running state data of the equipment in its working area at preset intervals; the received image is segmented with an Otsu algorithm and a particle swarm algorithm to obtain the target image in the equipment image; the segmented target image is compared with characteristic information of abnormal operation of the production equipment in an abnormal event database to judge whether the production equipment has an operation fault, and if so, it is determined that an abnormal event exists; when an abnormal event exists, the fault level corresponding to the abnormal event is obtained and the corresponding exception handling strategy is obtained according to the fault level. The method improves the speed and precision of image segmentation and improves the reliability of monitoring the running state of the equipment.

Description

Intelligent factory remote monitoring method and system based on 5G and algorithm
Technical Field
The invention belongs to the technical field of equipment running state monitoring, and particularly relates to a 5G and algorithm-based intelligent factory remote monitoring method and system.
Background
Among all production safety accidents, factory safety accidents have distinctive characteristics: fire and explosion accidents in factories not only often cause casualties and major property loss, but also easily cause serious environmental problems and affect residents and buildings in nearby areas.
Currently, wireless monitoring technology is often used for factory safety monitoring: video sensors are arranged at specific positions to monitor equipment or scenes in the factory, video or image data are transmitted by radio waves, live images of key positions are transmitted to a terminal in real time, and an alarm can be raised automatically in an emergency.
In such wireless monitoring systems, a video sensor shoots the monitored area, converts the video and images into corresponding electric signals, and transmits them through a sensor node. Image processing places high requirements on the acquired image; however, because the factory environment is complex, the speed and precision of image processing are affected by factors such as uneven illumination and excessive illumination intensity during image acquisition, so the acquired image needs to be processed to facilitate subsequent operations such as positioning, detection and measurement.
Image segmentation is the technology and process of dividing an image into several specific regions with unique properties and extracting a target of interest; it mainly extracts and measures the image target and is the basis and a key step from image processing to image recognition and analysis. Many segmentation methods exist, such as edge detection segmentation, region segmentation and threshold segmentation, among which threshold segmentation is widely applied; however, conventional threshold segmentation is slow when performing multi-threshold segmentation of an image.
The maximum between-class variance method, also called the Otsu method, is a very simple and general method whose greatest characteristic is that it makes image segmentation self-adaptive. Its basic principle is to find the threshold that maximizes the variance between the background and target parts of the image and then use that threshold to separate the target from the background. However, the existing Otsu method cannot keep the boundary shape accurately intact and has low segmentation accuracy under overexposure or when the contrast between background and target is low.
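For illustration, the following is a minimal sketch of the conventional exhaustive Otsu threshold search described above; the function name and the use of NumPy are illustrative assumptions and are not part of the patent.

```python
import numpy as np

def otsu_threshold(gray):
    """Exhaustive Otsu: return the threshold that maximizes the between-class variance."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    p = hist.astype(np.float64) / hist.sum()      # gray-level probabilities
    levels = np.arange(256)
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = p[:t].sum(), p[t:].sum()         # background / target class weights
        if w0 == 0.0 or w1 == 0.0:
            continue
        mu0 = (levels[:t] * p[:t]).sum() / w0     # class mean gray levels
        mu1 = (levels[t:] * p[t:]).sum() / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2  # between-class variance
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t
```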
Disclosure of Invention
The invention overcomes the defects of the prior art and solves the above technical problems: the intelligent factory remote monitoring method and system based on 5G and algorithm, by combining the Otsu algorithm and the particle swarm algorithm, improve the speed and precision of image segmentation and can improve the reliability of monitoring the running state of equipment.
In order to solve the technical problems, the invention adopts the technical scheme that:
the intelligent factory remote monitoring method based on 5G and algorithm comprises the following steps:
a plurality of working areas are arranged in the factory, an image acquisition module is arranged in each working area, and the image acquisition module acquires the running state data of equipment in the working area at preset intervals;
segmenting the received image by adopting an Otsu algorithm and a particle swarm algorithm to obtain a target image in the equipment image;
comparing and analyzing the segmented target image with characteristic information of abnormal operation of the production equipment in an abnormal event database, judging whether the production equipment has an operation fault, and if so, determining that the production equipment has an abnormal event;
when an abnormal event exists, acquiring a fault level corresponding to the abnormal event, and acquiring a corresponding abnormal processing strategy according to the fault level;
wherein: the abnormal event database stores characteristic information of abnormal operation of the production equipment and fault levels corresponding to the characteristic information.
Preferably, the segmenting processing is performed on the received image by using an Otsu algorithm and a particle swarm algorithm to obtain a target image in the device image, and specifically includes:
filtering the acquired image;
determining an optimal segmentation threshold value of the equipment area image by adopting an Otsu algorithm and a particle swarm algorithm;
and performing threshold segmentation on the image according to the optimal segmentation threshold and outputting the segmented image.
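The three steps above can be pictured as a short pipeline. The sketch below assumes median filtering as the "filtering processing" and a pluggable threshold finder, since the patent does not fix either choice at this point; names are illustrative.

```python
import numpy as np
from scipy.ndimage import median_filter

def segment_device_image(gray_image, find_threshold):
    """Filter the acquired image, find an optimal threshold, and output the segmented image."""
    filtered = median_filter(gray_image, size=3)             # assumed noise-suppression filter
    t = find_threshold(filtered)                             # e.g. the Otsu / PSO search described in the text
    return np.where(filtered > t, 255, 0).astype(np.uint8)   # target vs. background
```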
Preferably, the determining an optimal segmentation threshold of the device region image by using an Otsu algorithm and a particle swarm algorithm specifically includes:
carrying out contrast expansion transformation preprocessing on an input image, and then calculating a gray level histogram of the image;
coding the gray value of the image, and randomly generating M initial populations;
calculating a fitness function value of the position of each particle according to an Otsu algorithm and a set fitness function;
updating the global optimal position of each particle according to the calculated fitness function value of the position of the particle, and continuously updating the speed and the position of each individual in the population in the continuous iteration process;
Finishing the optimization when the termination condition is met and outputting the global optimal solution; the global optimal solution finally found by the particle swarm is the optimal segmentation threshold.
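As an illustration of the five steps above, the following is a sketch of a plain particle swarm search for the Otsu-optimal threshold with the between-class variance as fitness. The population size, iteration count, inertia weight and learning factors are arbitrary assumptions, and the standard update rule is used here rather than the improved local-learning update described later.

```python
import numpy as np

def pso_otsu_threshold(gray, n_particles=20, n_iter=50, w=0.7, c1=1.5, c2=1.5):
    """Standard PSO search for the threshold maximizing the Otsu between-class variance."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    p = hist.astype(np.float64) / hist.sum()
    levels = np.arange(256)

    def fitness(x):                                 # between-class variance at threshold x
        t = int(np.clip(round(x), 1, 255))
        w0, w1 = p[:t].sum(), p[t:].sum()
        if w0 == 0.0 or w1 == 0.0:
            return 0.0
        mu0 = (levels[:t] * p[:t]).sum() / w0
        mu1 = (levels[t:] * p[t:]).sum() / w1
        return w0 * w1 * (mu0 - mu1) ** 2

    pos = np.random.uniform(0, 255, n_particles)    # particle positions = candidate thresholds
    vel = np.zeros(n_particles)
    pbest = pos.copy()
    pbest_f = np.array([fitness(x) for x in pos])
    gbest = pbest[pbest_f.argmax()]

    for _ in range(n_iter):
        r1, r2 = np.random.rand(n_particles), np.random.rand(n_particles)
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, 0, 255)
        f = np.array([fitness(x) for x in pos])
        better = f > pbest_f
        pbest[better], pbest_f[better] = pos[better], f[better]
        gbest = pbest[pbest_f.argmax()]
    return int(round(gbest))
```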
Preferably, the fitness function of the particle swarm algorithm is the maximum between-class variance.
Preferably, the encoding of the gray-scale values of the image comprises: image gray levels ranging from 0-255 are encoded with an 8-bit binary code string.
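A minimal example of this encoding (helper names are illustrative):

```python
def encode_gray(value):
    """Encode a gray level in [0, 255] as an 8-bit binary code string."""
    return format(value, '08b')

def decode_gray(bits):
    """Decode an 8-bit binary code string back to a gray level."""
    return int(bits, 2)

# encode_gray(200) -> '11001000', decode_gray('11001000') -> 200
```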
Preferably, the particle swarm algorithm is updated in the following manner:
V_a(t+1) = ω_a(t)V_a(t) + c_1r_1(G_a(t) - X_a(t)) + c_2r_2(G(t) - X_a(t))
X_a(t+1) = X_a(t) + V_a(t+1)
where V_a(t+1) and X_a(t+1) denote the velocity and position of particle a at the (t+1)-th iteration, V_a(t) and X_a(t) denote the velocity and position of particle a at the t-th iteration, c_1 and c_2 are learning factors, r_1 and r_2 are random numbers between 0 and 1, G(t) denotes the global optimal solution of the particle swarm at the t-th iteration, ω_a(t) denotes the inertia weight factor of particle a at the t-th iteration, and G_a(t) denotes the local learning solution of particle a at the t-th iteration; the value of the local learning solution G_a(t) of particle a at the t-th iteration is determined as follows:
Reference particles for determining the local learning solution are selected from the particle swarm, and each selected reference particle marks particles of the swarm that have not been selected as reference particles. The specific steps are as follows:
First, the particle with the largest current fitness function value in the swarm is selected as the first reference particle, denoted b_1(t). Let m_1(t) denote the number of particles to be marked by reference particle b_1(t); reference particle b_1(t) marks the m_1(t) particles nearest to it in the swarm with the label 1, where distance means the Euclidean distance between particles. Then, from the particles in the swarm that are neither marked nor selected as reference particles, the particle with the largest fitness function value is selected as the second reference particle, denoted b_2(t). Let m_2(t) denote the number of particles to be marked by reference particle b_2(t); reference particle b_2(t) marks the m_2(t) nearest particles that have not been selected as reference particles with the label 2. Reference particles continue to be selected from the swarm in this manner, each newly selected reference particle marking particles that have not been selected as reference particles, until every particle in the swarm has been marked by a reference particle, at which point the selection of reference particles stops;
Let B(t) denote the set of reference particles selected at the t-th iteration, with B(t) = {b_v(t), v = 1, 2, ..., m(B(t))}, where b_v(t) denotes the v-th selected reference particle and m(B(t)) denotes the number of reference particles selected at the t-th iteration. Let m_v(t) denote the number of particles to be marked by reference particle b_v(t); the value of m_v(t) is:
(equation image GDA0002955200940000042: expression for m_v(t))
where m denotes a given initial value for the number of particles to be marked by a reference particle, with m < M, M being the number of particles in the swarm, and η_v(t) denotes the optimization priority of reference particle b_v(t), with
(equation image GDA0002955200940000041: expression for η_v(t))
where f_v(t) denotes the fitness function value of reference particle b_v(t), f_o(t) denotes the fitness function value at the t-th iteration of the o-th particle nearest to reference particle b_v(t), f_max(t) denotes the maximum fitness function value of the swarm at the t-th iteration, and f_min(t) denotes the minimum fitness function value of the swarm at the t-th iteration;
Let B_i(t) denote the set of reference particles in B(t) that have marked particle i, with B_i(t) = {b_{i,c}(t), c = 1, ..., m(B_i(t))}, where b_{i,c}(t) denotes the c-th reference particle in B(t) that has marked particle i and m(B_i(t)) denotes the number of reference particles in B(t) that have marked particle i. Define χ(b_{i,c}(t)) as the reference value of reference particle b_{i,c}(t) for particle i; then χ(b_{i,c}(t)) is:
(equation image GDA0002955200940000051: expression for χ(b_{i,c}(t)))
where α(f_i(t), f_{i,c}(t)) is a conditional function, f_{i,c}(t) denotes the fitness function value of reference particle b_{i,c}(t), and f_i(t) denotes the fitness function value of particle i at the t-th iteration; when f_i(t) < f_{i,c}(t), α(f_i(t), f_{i,c}(t)) = 1, and when f_i(t) ≥ f_{i,c}(t), α(f_i(t), f_{i,c}(t)) = 0; η_{i,c}(t) denotes the optimization priority of reference particle b_{i,c}(t), X_{i,c}(t) denotes the position of reference particle b_{i,c}(t) at the t-th iteration, and l(X_i(t), X_{i,c}(t)) denotes the Euclidean distance between position X_i(t) and position X_{i,c}(t);
When particle i is selected as a reference particle at the t-th iteration, the local learning solution of particle i at the t-th iteration is G_i(t) = Best_i(t), where Best_i(t) denotes the historical optimal solution of particle i at the t-th iteration;
When particle i is not selected as a reference particle at the t-th iteration, let b_{i,A}(t) denote the reference particle in set B_i(t) with the largest reference value for particle i, i.e.
(equation image GDA0002955200940000052: definition of b_{i,A}(t))
and let b_{i,B}(t) denote the reference particle in set B_i(t) with the second largest reference value for particle i, i.e.
(equation image GDA0002955200940000053: definition of b_{i,B}(t))
Then the local learning solution G_i(t) of particle i at the t-th iteration is:
G_i(t) = r_3 X_{i,A}(t) + (1 - r_3) X_{i,B}(t)
where r_3 denotes a random number between 0 and 1, X_{i,A}(t) denotes the position of reference particle b_{i,A}(t) at the t-th iteration, and X_{i,B}(t) denotes the position of reference particle b_{i,B}(t) at the t-th iteration.
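A rough sketch of one possible reading of the reference-particle selection and the local learning solution follows. It is not a definitive implementation: the expressions for m_v(t) and the reference value χ are given only as equation images in this text, so they appear below as the stand-in callables m_of and ref_value, and whether a reference particle counts as "marked" is an interpretation noted in the comments.

```python
import numpy as np

def select_reference_particles(positions, fitness, m_of):
    """One reading of the selection step: repeatedly take the highest-fitness particle that is
    neither marked nor already a reference particle, and let it mark its m_v(t) nearest
    non-reference neighbours (Euclidean distance) until all particles are marked.
    m_of(v) stands in for the m_v(t) expression (given only as an equation image)."""
    n = len(positions)
    marked = np.zeros(n, dtype=bool)
    is_ref = np.zeros(n, dtype=bool)
    marks = {}                                     # reference index -> indices it marked
    while not marked.all():
        candidates = np.where(~marked & ~is_ref)[0]
        if candidates.size == 0:
            break
        ref = candidates[np.argmax(fitness[candidates])]
        is_ref[ref] = True
        marked[ref] = True                         # interpretation: the reference particle counts as marked
        dist = np.linalg.norm(positions - positions[ref], axis=1)
        nearest = [j for j in np.argsort(dist) if not is_ref[j]][: m_of(len(marks) + 1)]
        marked[nearest] = True
        marks[ref] = nearest
    return marks

def local_learning_solution(i, marks, ref_value, positions, hist_best, rng=np.random):
    """Local learning solution G_i(t): the historical best for a reference particle, otherwise a
    random blend of the two marking reference particles with the largest reference value chi
    (ref_value stands in for chi, whose exact formula is given only as an equation image)."""
    if i in marks:                                 # particle i was itself a reference particle
        return hist_best[i]
    refs = sorted((r for r, js in marks.items() if i in js),
                  key=lambda r: ref_value(r, i), reverse=True)
    a = refs[0]
    b = refs[1] if len(refs) > 1 else refs[0]
    r3 = rng.rand()
    return r3 * positions[a] + (1.0 - r3) * positions[b]
```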
Correspondingly, the intelligent factory remote monitoring system based on 5G and algorithm is characterized in that a plurality of working areas are arranged in a factory, an image acquisition module is arranged in each working area, and the image acquisition modules are in communication connection with a remote control center through 5G;
the image acquisition module is used for acquiring running state data of equipment in the working area at preset time intervals;
the remote control center includes:
the storage unit is used for storing an abnormal event database, and the abnormal event database is provided with characteristic information of abnormal operation of production equipment and fault grades corresponding to the characteristic information;
the image processing unit is used for segmenting the received image by adopting an Otsu algorithm and a particle swarm algorithm to obtain a target image in the equipment image;
the image analysis unit is used for comparing and analyzing the target image obtained by segmentation with the characteristic information of the production equipment in the abnormal event database during abnormal operation, judging whether the production equipment has operation faults or not, and if so, determining that the production equipment has abnormal events;
and the fault identification unit is used for acquiring a fault grade corresponding to the abnormal event when the abnormal event exists, and acquiring a corresponding abnormal processing strategy according to the fault grade.
Preferably, the image processing unit is configured to perform filtering processing and target segmentation on the received image, and specifically includes:
the filtering unit is used for carrying out filtering processing on the acquired image;
the computing unit is used for determining the optimal segmentation threshold of the image by adopting an Otsu algorithm and a particle swarm algorithm;
and the output unit is used for performing threshold segmentation on the image according to the optimal segmentation threshold and outputting the segmented image.
Preferably, the determining an optimal segmentation threshold of the image by using Otsu algorithm and particle swarm algorithm includes:
carrying out contrast expansion transformation preprocessing on an input image, and then calculating a gray level histogram of the image;
coding the gray value of the image, and randomly generating M initial populations;
calculating a fitness function value of the position of each particle according to an Otsu algorithm and a set fitness function;
updating the global optimal position of each particle according to the calculated fitness function value of the position of the particle, and continuously updating the speed and the position of each individual in the population in the continuous iteration process;
Finishing the optimization when the termination condition is met and outputting the global optimal solution; the global optimal solution finally found by the particle swarm is the optimal segmentation threshold.
Preferably, the fitness function of the particle swarm algorithm is the maximum between-class variance.
Preferably, the encoding of the gray-scale values of the image comprises: image gray levels ranging from 0-255 are encoded with an 8-bit binary code string.
Compared with the prior art, the invention has the following beneficial effects:
1. according to the invention, the Otsu algorithm and the particle swarm algorithm are combined and applied to the intelligent factory, the running state monitoring of the unattended equipment can be realized, and when the intelligent monitoring system is used, the obtained running state image of the equipment is segmented, so that the speed of image threshold segmentation can be increased, the accuracy of image segmentation can be increased, the reliability of equipment monitoring is further improved, and the practicability is strong.
2. When an image is segmented, contrast expansion transformation preprocessing is first carried out on the input image and its gray level histogram is calculated; then the gray values of the image are encoded and M initial populations are randomly generated; the fitness function value of each particle's position is calculated according to the Otsu algorithm and the set fitness function; the global optimal position of each particle is updated according to the calculated fitness function values, and the velocity and position of each individual in the population are continuously updated during the iterations; finally, when the termination condition is met, the optimization ends and the global optimal solution is output, and the global optimal solution finally found by the particle swarm is the optimal segmentation threshold. The particle swarm algorithm adopted by the invention has strong concurrency, so the invention not only obtains a better segmentation effect but also has higher operation efficiency.
3. In the invention, the contrast expansion transformation preprocessing is carried out on the input image, so that the discrimination of the target area and the background area in the image can be increased, and the segmentation error is reduced for the later image segmentation.
Drawings
The present invention will be described in further detail with reference to the accompanying drawings;
FIG. 1 is a schematic flow chart of a method for intelligent factory remote monitoring based on 5G and algorithm according to an embodiment of the present invention;
FIG. 2 is a schematic flow chart of a method for remotely monitoring an intelligent factory based on 5G and algorithm according to a second embodiment of the present invention;
fig. 3 is a schematic flow chart of determining an optimal segmentation threshold of an equipment region image by using an Otsu algorithm and a particle swarm algorithm in the third embodiment of the present invention;
FIG. 4 is a schematic structural diagram of an intelligent factory remote monitoring system based on 5G and algorithm according to an embodiment of the present invention;
FIG. 5 is a schematic structural diagram of a smart factory remote monitoring system based on 5G and algorithm according to a second embodiment of the present invention;
in the figure: 1 is an image acquisition module, and 2 is a remote control center;
201 is a storage unit, 202 is an image processing unit, 203 is an image analysis unit, and 204 is a fault identification unit;
2021 is a filtering unit, 2022 is a calculating unit, and 2023 is an output unit.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some embodiments, but not all embodiments, of the present invention; all other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
For the sake of simplicity, the drawings only schematically show the parts relevant to the present invention, and they do not represent the actual structure as a product. In addition, in order to make the drawings concise and understandable, components having the same structure or function in some of the drawings are only schematically illustrated or only labeled. In this document, "one" means not only "only one" but also a case of "more than one".
To facilitate an understanding of the embodiments of the present disclosure, the following detailed description is to be considered in conjunction with the accompanying drawings, and the drawings are not to be construed as limiting the embodiments of the present disclosure.
In one embodiment of the invention:
FIG. 1 is a schematic flow chart of a method for intelligent factory remote monitoring based on 5G and algorithm according to an embodiment of the present invention; as shown in fig. 1, the intelligent factory remote monitoring method based on 5G and algorithm includes the following steps:
a plurality of working areas are arranged in the factory, an image acquisition module is arranged in each working area, and the image acquisition module acquires the running state data of equipment in the working area at preset intervals;
segmenting the received image by adopting an Otsu algorithm and a particle swarm algorithm to obtain a target image in the equipment image;
comparing and analyzing the segmented target image with characteristic information of abnormal operation of the production equipment in an abnormal event database, judging whether the production equipment has an operation fault, and if so, determining that the production equipment has an abnormal event;
when an abnormal event exists, acquiring a fault level corresponding to the abnormal event, and acquiring a corresponding abnormal processing strategy according to the fault level;
wherein: the abnormal event database stores characteristic information of abnormal operation of the production equipment and fault levels corresponding to the characteristic information.
A plurality of working areas are arranged in the factory, an image acquisition module is arranged in each working area, and the image acquisition module acquires running state data of the equipment in its working area at preset time intervals. In use, each image acquisition module uploads the acquired running state data of the equipment in its area to a remote control center, and the remote control center obtains the target image in the equipment image through calculation; it judges whether the production equipment has an operation fault and, if so, determines that an abnormal event exists; when an abnormal event exists, it obtains the fault level corresponding to the abnormal event and the corresponding exception handling strategy according to the fault level. The invention can effectively realize fault diagnosis of the production equipment and respond to faults in time through the exception handling strategy, thereby supervising the operation of the production equipment and avoiding safety accidents caused by equipment faults.
Specifically, the failure levels may be divided into three levels:
A first-level fault indicates that the equipment has a serious fault; field workers must be evacuated urgently and the equipment must be powered off for maintenance;
A second-level fault indicates that the equipment has a major fault and must be overhauled immediately;
A third-level fault indicates that the equipment has a potential safety hazard but no immediate danger; it should be given key inspection during overhaul.
Further, the exception handling policy includes: cutting off the working circuit of the equipment; and/or issuing an evacuation voice signal; and/or alarm information is sent to the watch keeper in an audible and visual alarm mode; and/or sending alarm information to the watch keeper through dialing; and/or converse with the watch keeper through remote audio-video intercom.
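Purely as an illustration (the names and the exact grouping of actions are hypothetical, since the patent allows the strategies to be combined freely with and/or), the level-to-strategy mapping could be held in a simple lookup table:

```python
# Hypothetical lookup of exception handling strategies per fault level; the grouping is illustrative only.
EXCEPTION_STRATEGIES = {
    1: ["cut off the working circuit of the equipment",
        "issue an evacuation voice signal",
        "send an audible and visual alarm to the watch keeper"],
    2: ["send an audible and visual alarm to the watch keeper",
        "send alarm information to the watch keeper by dialing"],
    3: ["converse with the watch keeper through remote audio-video intercom"],
}

def handle_abnormal_event(fault_level):
    """Carry out the configured exception handling strategy for the given fault level."""
    for action in EXCEPTION_STRATEGIES.get(fault_level, []):
        print(f"fault level {fault_level}: {action}")
```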
In this embodiment, image segmentation divides the gray levels of an image into several non-overlapping regions such that the differences between different regions are maximized while the pixels within the same region share a certain commonality.
The particle swarm algorithm is a parallel algorithm. It starts from a random solution: a group of particles is first randomly initialized, each particle representing a candidate solution, and the quality of the candidate solutions is evaluated by a fitness function. During each search, each particle moves through the candidate solution space by following two "extremes": one is the best value the particle itself has found so far, i.e. the local optimal solution; the other is the best value the whole swarm has found so far, i.e. the global optimal solution. Since each search builds on the best results of the previous search, the quality of the results found by the whole swarm keeps improving as the number of iterations increases.
In this embodiment, the Otsu algorithm and the particle swarm algorithm are combined: each candidate solution is called a particle and is represented by the position of the particle, and the approach of each candidate solution toward the target solution corresponds to the approach of the particle's initial position toward the target position. The quality of each particle's position is judged by the fitness value determined by the fitness function; the between-class variance is selected as the fitness function, the optimal solution of the image segmentation problem is found by the particle swarm algorithm, and the image is segmented according to that solution. Specifically, the population size of the particle swarm, i.e. the number of particles, can be preset empirically and is generally set to more than 10, since too few particles make the swarm prone to falling into a local optimal solution;
the Otsu algorithm and the particle swarm algorithm are combined and applied to the intelligent factory, running state monitoring of unattended equipment can be achieved, when the intelligent monitoring system is used, the obtained running state image of the equipment is segmented, the speed of image threshold segmentation can be increased, the accuracy of image segmentation can be increased, the reliability of equipment monitoring is improved, and the practicability is high.
In another embodiment of the invention:
FIG. 2 is a schematic flow chart of a method for remotely monitoring an intelligent factory based on 5G and algorithm according to the second embodiment of the present invention; as shown in fig. 2, on the basis of the first embodiment, in the intelligent factory remote monitoring method based on 5G and algorithm, the segmentation of the received image by using an Otsu algorithm and a particle swarm algorithm to obtain a target image in the device image specifically includes:
filtering the acquired image;
determining an optimal segmentation threshold value of the equipment area image by adopting an Otsu algorithm and a particle swarm algorithm;
and performing threshold segmentation on the image according to the optimal segmentation threshold and outputting the segmented image.
In another embodiment of the invention:
FIG. 3 is a schematic flow chart of determining an optimal segmentation threshold of the device area image by using an Otsu algorithm and a particle swarm algorithm according to the third embodiment of the present invention; as shown in fig. 3, on the basis of the second embodiment, in the intelligent factory remote monitoring method based on 5G and algorithm, the determining an optimal segmentation threshold of the device area image by using an Otsu algorithm and a particle swarm algorithm specifically includes:
carrying out contrast expansion transformation preprocessing on an input image, and then calculating a gray level histogram of the image;
coding the gray value of the image, and randomly generating M initial populations;
calculating a fitness function value of the position of each particle according to an Otsu algorithm and a set fitness function;
updating the global optimal position of each particle according to the calculated fitness function value of the position of the particle, and continuously updating the speed and the position of each individual in the population in the continuous iteration process;
Finishing the optimization when the termination condition is met and outputting the global optimal solution; the global optimal solution finally found by the particle swarm is the optimal segmentation threshold.
Specifically, the fitness function of the particle swarm algorithm is the maximum between-class variance.
Further, the encoding the gray-scale value of the image comprises: image gray levels ranging from 0-255 are encoded with an 8-bit binary code string.
In this embodiment, when an image is segmented, contrast expansion transformation preprocessing is first carried out on the input image and its gray level histogram is calculated; then the gray values of the image are encoded and M initial populations are randomly generated; the fitness function value of each particle's position is calculated according to the Otsu algorithm and the set fitness function; the global optimal position of each particle is updated according to the calculated fitness function values, and the velocity and position of each individual in the population are continuously updated during the iterations; finally, when the termination condition is met, the optimization ends and the global optimal solution is output, and the global optimal solution finally found by the particle swarm is the optimal segmentation threshold. The particle swarm algorithm adopted by the invention has strong concurrency, so the invention not only obtains a better segmentation effect but also has higher operation efficiency.
In addition, in this embodiment, by performing contrast expansion transformation preprocessing on the input image, the degree of distinction between the target region and the background region in the image can be increased, and the segmentation error is reduced for the later image segmentation.
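The patent does not specify the exact contrast expansion transformation; a common linear percentile stretch such as the following sketch would serve the stated purpose of widening the separation between target and background gray levels (percentile bounds are assumptions).

```python
import numpy as np

def contrast_stretch(gray, low_pct=1.0, high_pct=99.0):
    """Linear contrast expansion: map the [low_pct, high_pct] percentile range to 0-255."""
    lo, hi = np.percentile(gray, [low_pct, high_pct])
    if hi <= lo:
        return gray.copy()
    out = (gray.astype(np.float64) - lo) / (hi - lo) * 255.0
    return np.clip(out, 0, 255).astype(np.uint8)
```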
In another embodiment of the invention:
fig. 4 is a schematic structural diagram of an intelligent factory remote monitoring system based on 5G and an algorithm according to an embodiment of the present invention, as shown in fig. 4, the intelligent factory remote monitoring system based on 5G and an algorithm is provided, a plurality of working areas are provided in the factory, an image acquisition module 10 is provided in each working area, and the image acquisition module 10 is in communication connection with a remote control center 20 through 5G;
the image acquisition module 10 is configured to acquire running state data of the device in the working area at predetermined time intervals;
the remote control center 20 includes:
the storage unit 201 is used for storing an abnormal event database, wherein the abnormal event database is provided with characteristic information of abnormal operation of production equipment and a fault level corresponding to the characteristic information;
the image processing unit 202 is configured to perform segmentation processing on the received image by using an Otsu algorithm and a particle swarm algorithm to obtain a target image in the device image;
the image analysis unit 203 is used for comparing and analyzing the segmented target image with the characteristic information of the production equipment in the abnormal event database when the production equipment operates abnormally, judging whether the production equipment operates in a fault or not, and if so, determining that the production equipment has an abnormal event;
the fault identifying unit 204 is configured to, when an abnormal event exists, obtain a fault level corresponding to the abnormal event, and obtain a corresponding exception handling policy according to the fault level.
In another embodiment of the invention:
FIG. 5 is a schematic structural diagram of a smart factory remote monitoring system based on 5G and algorithm according to a second embodiment of the present invention; as shown in fig. 5, in the intelligent factory remote monitoring system based on 5G and algorithm, the image processing unit 202 specifically includes:
a filtering unit 2021 configured to perform filtering processing on the acquired image;
a calculating unit 2022, configured to determine an optimal segmentation threshold of the image by using Otsu algorithm and particle swarm algorithm;
an output unit 2023, configured to perform threshold segmentation on the image according to the optimal segmentation threshold and output the segmented image.
In another embodiment of the invention:
the intelligent factory remote monitoring system based on 5G and algorithm, the computing unit 2022, includes:
carrying out contrast expansion transformation preprocessing on an input image, and then calculating a gray level histogram of the image;
coding the gray value of the image, and randomly generating M initial populations;
calculating a fitness function value of the position of each particle according to an Otsu algorithm and a set fitness function;
updating the global optimal position of each particle according to the calculated fitness function value of the position of the particle, and continuously updating the speed and the position of each individual in the population in the continuous iteration process;
Finishing the optimization when the termination condition is met and outputting the global optimal solution; the global optimal solution finally found by the particle swarm is the optimal segmentation threshold.
Specifically, the fitness function of the particle swarm algorithm is the maximum between-class variance.
Further, the encoding the gray-scale value of the image comprises: image gray levels ranging from 0-255 are encoded with an 8-bit binary code string.
The processing performed by the image segmentation unit is described below with the applicant's specifically optimized algorithm:
Preferably, the image segmentation unit performs target segmentation on the filtered device image by using the OTSU algorithm to obtain the device area image in the device image, and the optimal threshold of the OTSU algorithm is determined by the particle swarm algorithm, whose fitness function is defined as the maximum between-class variance; a larger fitness function value of a particle indicates a better optimization result for that particle.
Preferably, the particle swarm algorithm is updated in the following manner:
V_a(t+1) = ω_a(t)V_a(t) + c_1r_1(G_a(t) - X_a(t)) + c_2r_2(G(t) - X_a(t))
X_a(t+1) = X_a(t) + V_a(t+1)
where V_a(t+1) and X_a(t+1) denote the velocity and position of particle a at the (t+1)-th iteration, V_a(t) and X_a(t) denote the velocity and position of particle a at the t-th iteration, c_1 and c_2 are learning factors, r_1 and r_2 are random numbers between 0 and 1, G(t) denotes the global optimal solution of the particle swarm at the t-th iteration, ω_a(t) denotes the inertia weight factor of particle a at the t-th iteration, and G_a(t) denotes the local learning solution of particle a at the t-th iteration; the value of the local learning solution G_a(t) of particle a at the t-th iteration is determined as follows:
Reference particles for determining the local learning solution are selected from the particle swarm, and each selected reference particle marks particles of the swarm that have not been selected as reference particles. The specific steps are as follows:
First, the particle with the largest current fitness function value in the swarm is selected as the first reference particle, denoted b_1(t). Let m_1(t) denote the number of particles to be marked by reference particle b_1(t); reference particle b_1(t) marks the m_1(t) particles nearest to it in the swarm with the label 1, where distance means the Euclidean distance between particles. Then, from the particles in the swarm that are neither marked nor selected as reference particles, the particle with the largest fitness function value is selected as the second reference particle, denoted b_2(t). Let m_2(t) denote the number of particles to be marked by reference particle b_2(t); reference particle b_2(t) marks the m_2(t) nearest particles that have not been selected as reference particles with the label 2. Reference particles continue to be selected from the swarm in this manner, each newly selected reference particle marking particles that have not been selected as reference particles, until every particle in the swarm has been marked by a reference particle, at which point the selection of reference particles stops;
Let B(t) denote the set of reference particles selected at the t-th iteration, with B(t) = {b_v(t), v = 1, 2, ..., m(B(t))}, where b_v(t) denotes the v-th selected reference particle and m(B(t)) denotes the number of reference particles selected at the t-th iteration. Let m_v(t) denote the number of particles to be marked by reference particle b_v(t); the value of m_v(t) is:
(equation image GDA0002955200940000142: expression for m_v(t))
where m denotes a given initial value for the number of particles to be marked by a reference particle, with m < M, M being the number of particles in the swarm, and η_v(t) denotes the optimization priority of reference particle b_v(t), with
(equation image GDA0002955200940000141: expression for η_v(t))
where f_v(t) denotes the fitness function value of reference particle b_v(t), f_o(t) denotes the fitness function value at the t-th iteration of the o-th particle nearest to reference particle b_v(t), f_max(t) denotes the maximum fitness function value of the swarm at the t-th iteration, and f_min(t) denotes the minimum fitness function value of the swarm at the t-th iteration;
Let B_i(t) denote the set of reference particles in B(t) that have marked particle i, with B_i(t) = {b_{i,c}(t), c = 1, ..., m(B_i(t))}, where b_{i,c}(t) denotes the c-th reference particle in B(t) that has marked particle i and m(B_i(t)) denotes the number of reference particles in B(t) that have marked particle i. Define χ(b_{i,c}(t)) as the reference value of reference particle b_{i,c}(t) for particle i; then χ(b_{i,c}(t)) is:
(equation image GDA0002955200940000151: expression for χ(b_{i,c}(t)))
where α(f_i(t), f_{i,c}(t)) is a conditional function, f_{i,c}(t) denotes the fitness function value of reference particle b_{i,c}(t), and f_i(t) denotes the fitness function value of particle i at the t-th iteration; when f_i(t) < f_{i,c}(t), α(f_i(t), f_{i,c}(t)) = 1, and when f_i(t) ≥ f_{i,c}(t), α(f_i(t), f_{i,c}(t)) = 0; η_{i,c}(t) denotes the optimization priority of reference particle b_{i,c}(t), X_{i,c}(t) denotes the position of reference particle b_{i,c}(t) at the t-th iteration, and l(X_i(t), X_{i,c}(t)) denotes the Euclidean distance between position X_i(t) and position X_{i,c}(t);
When particle i is selected as a reference particle at the t-th iteration, the local learning solution of particle i at the t-th iteration is G_i(t) = Best_i(t), where Best_i(t) denotes the historical optimal solution of particle i at the t-th iteration;
When particle i is not selected as a reference particle at the t-th iteration, let b_{i,A}(t) denote the reference particle in set B_i(t) with the largest reference value for particle i, i.e.
(equation image GDA0002955200940000152: definition of b_{i,A}(t))
and let b_{i,B}(t) denote the reference particle in set B_i(t) with the second largest reference value for particle i, i.e.
(equation image GDA0002955200940000153: definition of b_{i,B}(t))
Then the local learning solution G_i(t) of particle i at the t-th iteration is:
G_i(t) = r_3 X_{i,A}(t) + (1 - r_3) X_{i,B}(t)
where r_3 denotes a random number between 0 and 1, X_{i,A}(t) denotes the position of reference particle b_{i,A}(t) at the t-th iteration, and X_{i,B}(t) denotes the position of reference particle b_{i,B}(t) at the t-th iteration.
Compared with the traditional update method, in which each particle of the swarm learns locally toward its own historical optimal solution, this preferred embodiment improves the swarm update method: each particle learns locally toward a local learning solution, and a method for determining that local learning solution is given. Reference particles are selected from the swarm, each selected reference particle marks the particles of the swarm that lie closest to it, and finally the two reference particles with the largest reference values, among those that marked a given particle, are used to determine that particle's local learning solution; thus, the more particles a reference particle marks, the more particles of the swarm it can influence. Among the selected reference particles, earlier-ranked ones have larger fitness function values, and a larger optimization priority value indicates a larger optimization space around the reference particle; the selection scheme given in this preferred embodiment therefore ensures that the reference particles have large fitness function values while being distributed in a dispersed manner. The selection and marking method also allows the sets of particles marked by different reference particles to partially overlap and to expand outward in the solution space beyond the particles marked by earlier reference particles, so that, guided by local learning solutions determined from the reference particles that marked them, the particles drive the swarm to evolve toward a dispersed distribution and the defect of easily falling into local extrema during optimization is avoided. A reference value of a reference particle for a particle is defined: the fitness function value and optimization priority terms in the reference value ensure that the reference particle has a large fitness function value and a large optimization space, while the distance term preserves locality between the reference particle and the particle. The reference particles with the largest and second largest reference values, among those that marked the particle, are selected to determine its local learning solution, so that the local learning solution has a large fitness function value and a large optimization space; this guides the particle to evolve in a better direction during local learning and keeps it from being trapped in a local optimal solution, enhances the local optimization capability of the particles, and further improves the optimization precision of the particle swarm, ensuring that the optimal threshold of the OTSU algorithm determined by the particle swarm algorithm is more accurate and thereby improving the accuracy of image segmentation.
In this preferred embodiment, whether the production equipment has an operation fault is judged from the device region image obtained by segmentation; considering that an operation fault of production equipment is often reflected as a temperature rise, it is preferred to compare the maximum gray value of the pixels in the device region image with a given gray-level safety threshold.
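A minimal sketch of this check, assuming the segmented device region is available as a gray-scale array and the safety threshold is given (names are illustrative):

```python
def device_fault_suspected(device_region, gray_safety_threshold):
    """Flag a possible operation fault when the brightest pixel of the segmented device
    region exceeds the given gray-level safety threshold (a proxy for over-temperature)."""
    return device_region.max() > gray_safety_threshold
```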
Finally, it should be noted that: the above embodiments are only used to illustrate the technical solution of the present invention, and not to limit the same; while the invention has been described in detail and with reference to the foregoing embodiments, it will be understood by those skilled in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present invention.

Claims (7)

1. Intelligent factory remote monitoring method based on 5G and algorithm is characterized in that: the method comprises the following steps:
a plurality of working areas are arranged in the factory, an image acquisition module is arranged in each working area, and the image acquisition module acquires the running state data of equipment in the working area at preset intervals;
segmenting the received image by adopting an Otsu algorithm and a particle swarm algorithm to obtain a target image in the equipment image;
comparing and analyzing the segmented target image with characteristic information of abnormal operation of the production equipment in an abnormal event database, judging whether the production equipment has an operation fault, and if so, determining that the production equipment has an abnormal event;
when an abnormal event exists, acquiring a fault level corresponding to the abnormal event, and acquiring a corresponding abnormal processing strategy according to the fault level;
wherein: the abnormal event database stores characteristic information of abnormal operation of the production equipment and fault levels corresponding to the characteristic information;
the segmentation processing is performed on the received image by adopting an Otsu algorithm and a particle swarm algorithm to obtain a target image in the equipment image, and the method specifically comprises the following steps:
filtering the acquired image;
determining an optimal segmentation threshold value of the equipment area image by adopting an Otsu algorithm and a particle swarm algorithm;
performing threshold segmentation on the image according to the optimal segmentation threshold and outputting the segmented image;
the particle swarm algorithm is updated in the following way:
V_a(t+1) = ω_a(t)V_a(t) + c_1r_1(G_a(t) - X_a(t)) + c_2r_2(G(t) - X_a(t))
X_a(t+1) = X_a(t) + V_a(t+1)
where V_a(t+1) and X_a(t+1) denote the velocity and position of particle a at the (t+1)-th iteration, V_a(t) and X_a(t) denote the velocity and position of particle a at the t-th iteration, c_1 and c_2 are learning factors, r_1 and r_2 are random numbers between 0 and 1, G(t) denotes the global optimal solution of the particle swarm at the t-th iteration, ω_a(t) denotes the inertia weight factor of particle a at the t-th iteration, and G_a(t) denotes the local learning solution of particle a at the t-th iteration; the value of the local learning solution G_a(t) of particle a at the t-th iteration is determined as follows:
Reference particles for determining the local learning solution are selected from the particle swarm, and each selected reference particle marks particles of the swarm that have not been selected as reference particles. The specific steps are as follows:
First, the particle with the largest current fitness function value in the swarm is selected as the first reference particle, denoted b_1(t). Let m_1(t) denote the number of particles to be marked by reference particle b_1(t); reference particle b_1(t) marks the m_1(t) particles nearest to it in the swarm with the label 1, where distance means the Euclidean distance between particles. Then, from the particles in the swarm that are neither marked nor selected as reference particles, the particle with the largest fitness function value is selected as the second reference particle, denoted b_2(t). Let m_2(t) denote the number of particles to be marked by reference particle b_2(t); reference particle b_2(t) marks the m_2(t) nearest particles that have not been selected as reference particles with the label 2. Reference particles continue to be selected from the swarm in this manner, each newly selected reference particle marking particles that have not been selected as reference particles, until every particle in the swarm has been marked by a reference particle, at which point the selection of reference particles stops;
Let B(t) denote the set of reference particles selected at the t-th iteration, with B(t) = {b_v(t), v = 1, 2, ..., m(B(t))}, where b_v(t) denotes the v-th selected reference particle and m(B(t)) denotes the number of reference particles selected at the t-th iteration. Let m_v(t) denote the number of particles to be marked by reference particle b_v(t); the value of m_v(t) is:
(equation image FDA0002955200930000022: expression for m_v(t))
where m denotes a given initial value for the number of particles to be marked by a reference particle, with m < M, M being the number of particles in the swarm, and η_v(t) denotes the optimization priority of reference particle b_v(t), with
(equation image FDA0002955200930000021: expression for η_v(t))
where f_v(t) denotes the fitness function value of reference particle b_v(t), f_o(t) denotes the fitness function value at the t-th iteration of the o-th particle nearest to reference particle b_v(t), f_max(t) denotes the maximum fitness function value of the swarm at the t-th iteration, and f_min(t) denotes the minimum fitness function value of the swarm at the t-th iteration;
Let B_i(t) denote the set of reference particles in B(t) that have marked particle i, with B_i(t) = {b_{i,c}(t), c = 1, ..., m(B_i(t))}, where b_{i,c}(t) denotes the c-th reference particle in B(t) that has marked particle i and m(B_i(t)) denotes the number of reference particles in B(t) that have marked particle i. Define χ(b_{i,c}(t)) as the reference value of reference particle b_{i,c}(t) for particle i; then χ(b_{i,c}(t)) is:
(equation image FDA0002955200930000031: expression for χ(b_{i,c}(t)))
where α(f_i(t), f_{i,c}(t)) is a conditional function, f_{i,c}(t) denotes the fitness function value of reference particle b_{i,c}(t), and f_i(t) denotes the fitness function value of particle i at the t-th iteration; when f_i(t) < f_{i,c}(t), α(f_i(t), f_{i,c}(t)) = 1, and when f_i(t) ≥ f_{i,c}(t), α(f_i(t), f_{i,c}(t)) = 0; η_{i,c}(t) denotes the optimization priority of reference particle b_{i,c}(t), X_{i,c}(t) denotes the position of reference particle b_{i,c}(t) at the t-th iteration, and l(X_i(t), X_{i,c}(t)) denotes the Euclidean distance between position X_i(t) and position X_{i,c}(t);
when particle i is selected as the reference particle at the tth iteration, then the local learning solution G for particle i at the tth iterationi(t)=Besti(t), wherein Besti(t) represents the historical optimal solution of particle i at the tth iteration;
Let b be when particle i is not selected as the reference particle at the t-th iterationi,A(t) represents a set BiThe reference particle of (t) having the greatest reference value for particle i, i.e.
Figure FDA0002955200930000032
and let b_{i,B}(t) denote the reference particle in set B_i(t) with the second largest reference value for particle i, i.e.
b_{i,B}(t) = argmax { χ(b_{i,c}(t)) : b_{i,c}(t) ∈ B_i(t) \ {b_{i,A}(t)} }
the local learning solution G_i(t) of particle i at the t-th iteration is then:
G_i(t) = r_3·X_{i,A}(t) + (1 − r_3)·X_{i,B}(t)
where r_3 denotes a random number between 0 and 1, X_{i,A}(t) denotes the position of reference particle b_{i,A}(t) at the t-th iteration, and X_{i,B}(t) denotes the position of reference particle b_{i,B}(t) at the t-th iteration.
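A minimal Python sketch of the reference-particle marking and local learning solution G_i(t) described above. Because the expressions for m_v(t), η_v(t) and χ(b_{i,c}(t)) appear in the original only as equation images, the fixed mark count and the reference-value function below are labeled stand-ins rather than the patented formulas.

```python
import numpy as np

def local_learning_solutions(positions, fitness, best_positions, mark_count=5, rng=None):
    """Compute a local learning solution G_i(t) for every particle.

    positions      : (n, d) array of current positions X_i(t)
    fitness        : (n,)   array of fitness values f_i(t)
    best_positions : (n, d) array of historical optima Best_i(t)
    mark_count     : stand-in for m_v(t), whose exact expression is an image in the original
    """
    rng = np.random.default_rng() if rng is None else rng
    n = len(fitness)
    is_ref = np.zeros(n, dtype=bool)            # particle selected as a reference
    marked = np.zeros(n, dtype=bool)            # particle marked by some reference
    marked_by = [[] for _ in range(n)]          # B_i(t): references that marked particle i

    # Reference-particle selection: the best unmarked, non-reference particle becomes a
    # reference and marks its nearest non-reference neighbours; repeat until every
    # non-reference particle carries a mark.
    while not marked[~is_ref].all():
        candidates = np.where(~marked & ~is_ref)[0]
        v = candidates[np.argmax(fitness[candidates])]
        is_ref[v] = True
        others = np.where(~is_ref)[0]
        order = np.argsort(np.linalg.norm(positions[others] - positions[v], axis=1))
        for j in others[order[:mark_count]]:
            marked[j] = True
            marked_by[j].append(v)

    G = np.empty_like(positions, dtype=float)
    for i in range(n):
        if is_ref[i]:
            G[i] = best_positions[i]            # G_i(t) = Best_i(t) for reference particles
            continue

        def chi(c):
            # Stand-in reference value: keeps the conditional alpha(f_i, f_{i,c}) and the
            # Euclidean distance l(X_i, X_{i,c}) named in the claim; the true chi(.) is an
            # image in the original document.
            alpha = 1.0 if fitness[i] < fitness[c] else 0.0
            dist = np.linalg.norm(positions[i] - positions[c])
            return alpha * fitness[c] / (1.0 + dist)

        refs = sorted(marked_by[i], key=chi, reverse=True)
        a = refs[0]                                # b_{i,A}(t): largest reference value
        b = refs[1] if len(refs) > 1 else refs[0]  # b_{i,B}(t): second largest
        r3 = rng.random()
        G[i] = r3 * positions[a] + (1.0 - r3) * positions[b]
    return G
```

The stand-in reference value preserves the two ingredients the claim names, the conditional α and the distance l; the precise form should be read from the original equation image.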
2. The intelligent factory remote monitoring method based on 5G and algorithm as claimed in claim 1, wherein: the method for determining the optimal segmentation threshold of the equipment region image by adopting the Otsu algorithm and the particle swarm algorithm specifically comprises the following steps:
carrying out contrast expansion transformation preprocessing on an input image, and then calculating a gray level histogram of the image;
encoding the gray values of the image, and randomly generating an initial population of M particles;
calculating a fitness function value of the position of each particle according to an Otsu algorithm and a set fitness function;
updating the global optimal position according to the calculated fitness function values of the particle positions, and continuously updating the velocity and position of each individual in the population over successive iterations;
ending the optimization when the termination condition is met and outputting the global optimal solution; the global optimal solution finally found by the particle swarm is the optimal segmentation threshold.
3. The intelligent factory remote monitoring method based on 5G and algorithm as claimed in claim 2, wherein: the fitness function of the particle swarm algorithm is the maximum between-class variance.
4. The intelligent factory remote monitoring method based on 5G and algorithm as claimed in claim 3, wherein: the encoding of the gray values of the image comprises: image gray levels ranging from 0-255 are encoded with an 8-bit binary code string.
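A minimal sketch, assuming NumPy, of the between-class-variance fitness named in claim 3 and a plain particle swarm search over the 8-bit (0–255) threshold range of claim 4. The inertia weight 0.7 and learning factors c_1 = c_2 = 2.0 are assumptions, and the personal-best term here is only a stand-in for the local learning solution G_i(t) used in claims 1 and 5.

```python
import numpy as np

def otsu_between_class_variance(hist, t):
    """Between-class variance of a 256-bin gray histogram split at threshold t
    (the fitness function of claim 3)."""
    p = hist / hist.sum()
    levels = np.arange(256)
    w0 = p[:t + 1].sum()
    w1 = 1.0 - w0
    if w0 == 0.0 or w1 == 0.0:
        return 0.0
    mu0 = (levels[:t + 1] * p[:t + 1]).sum() / w0
    mu1 = (levels[t + 1:] * p[t + 1:]).sum() / w1
    return w0 * w1 * (mu0 - mu1) ** 2

def pso_threshold_search(hist, n_particles=20, n_iter=50, rng=None):
    """Plain particle swarm search over the 0-255 threshold range (claim 4's
    8-bit code space); a stand-in loop using the conventional personal-best term."""
    rng = np.random.default_rng() if rng is None else rng
    x = rng.integers(0, 256, n_particles).astype(float)     # particle positions (thresholds)
    v = np.zeros(n_particles)
    f = np.array([otsu_between_class_variance(hist, int(xi)) for xi in x])
    pbest, pbest_f = x.copy(), f.copy()
    g = pbest[np.argmax(pbest_f)]                            # global best threshold so far
    c1 = c2 = 2.0                                            # assumed learning factors
    for _ in range(n_iter):
        r1, r2 = rng.random(n_particles), rng.random(n_particles)
        v = 0.7 * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)   # assumed inertia weight
        x = np.clip(x + v, 0, 255)
        f = np.array([otsu_between_class_variance(hist, int(xi)) for xi in x])
        better = f > pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        g = pbest[np.argmax(pbest_f)]
    return int(round(g))
```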
5. An intelligent factory remote monitoring system based on 5G and algorithm, characterized in that: a plurality of working areas are arranged in the factory, an image acquisition module (10) is arranged in each working area, and the image acquisition modules (10) are in 5G communication connection with a remote control center (20);
the image acquisition module (10) is used for acquiring running state data of equipment in the working area at preset time intervals;
the remote control center (20) comprises:
the storage unit (201) is used for storing an abnormal event database, and the abnormal event database is provided with characteristic information of abnormal operation of production equipment and fault grades corresponding to the characteristic information;
the image processing unit (202) is used for segmenting the received image by adopting an Otsu algorithm and a particle swarm algorithm to obtain a target image in the equipment image;
the image analysis unit (203) is used for comparing and analyzing the segmented target image against the characteristic information of abnormal operation of the production equipment stored in the abnormal event database, judging whether the production equipment is operating with a fault, and if so, determining that an abnormal event exists for the production equipment;
the fault identification unit (204) is used for acquiring a fault level corresponding to an abnormal event when the abnormal event exists, and acquiring a corresponding abnormal processing strategy according to the fault level;
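A purely hypothetical sketch of how the storage unit (201), image analysis unit (203) and fault identification unit (204) could interact. The feature representation, the cosine-similarity comparison and the 0.9 threshold are assumptions; the claims only require that the segmented target image be compared against stored characteristics of abnormal operation and mapped to a fault-level handling strategy.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class AbnormalEventRecord:
    name: str                 # hypothetical label, e.g. "belt deviation"
    feature: np.ndarray       # stored characteristic information (storage unit 201)
    fault_level: int          # fault grade attached to this characteristic

def identify_fault(target_feature, database, handlers, threshold=0.9):
    """Compare a segmented target-image feature against the abnormal event
    database (image analysis unit 203); on a match, return the record and the
    handling strategy for its fault level (fault identification unit 204)."""
    for rec in database:
        a, b = target_feature.ravel(), rec.feature.ravel()
        similarity = float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
        if similarity >= threshold:
            return rec, handlers.get(rec.fault_level, "notify operator")
    return None               # no abnormal event detected
```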
the image processing unit (202) specifically comprises:
a filtering unit (2021) for performing filtering processing on the acquired image;
a computing unit (2022) for determining an optimal segmentation threshold of the image using Otsu algorithm and particle swarm algorithm;
an output unit (2023) for performing threshold segmentation on the image according to the optimal segmentation threshold and outputting the segmented image;
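A short sketch of the image processing unit (202) pipeline. The contrast stretch and median filter (via OpenCV) are assumed choices, since the claims do not fix the filter type, and find_threshold stands for any routine that maps a gray histogram to the optimal segmentation threshold, such as the particle swarm sketch shown after claim 4.

```python
import numpy as np
import cv2  # OpenCV, used here only for the assumed preprocessing choices

def segment_equipment_image(image, find_threshold):
    """Filtering unit (2021) -> computing unit (2022) -> output unit (2023).
    `find_threshold` maps a 256-bin gray histogram to a threshold, e.g. the
    pso_threshold_search sketch shown after claim 4."""
    stretched = cv2.normalize(image, None, 0, 255, cv2.NORM_MINMAX)   # contrast expansion
    filtered = cv2.medianBlur(stretched, 5)                           # assumed filter choice
    hist = np.bincount(filtered.ravel(), minlength=256).astype(float)
    t = find_threshold(hist)                                          # optimal segmentation threshold
    return np.where(filtered > t, 255, 0).astype(np.uint8)            # segmented target image
```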
the particle swarm algorithm is updated in the following way:
V_a(t+1) = ω_a(t)·V_a(t) + c_1·r_1·(G_a(t) − X_a(t)) + c_2·r_2·(G(t) − X_a(t))
X_a(t+1) = X_a(t) + V_a(t+1)
where V_a(t+1) and X_a(t+1) denote the velocity and position of particle a at the (t+1)-th iteration, V_a(t) and X_a(t) denote the velocity and position of particle a at the t-th iteration, c_1 and c_2 denote learning factors, r_1 and r_2 denote random numbers between 0 and 1, G(t) denotes the global optimal solution of the particle population at the t-th iteration, ω_a(t) denotes the inertia weight factor of particle a at the t-th iteration, and G_a(t) denotes the local learning solution of particle a at the t-th iteration; the value of the local learning solution G_a(t) of particle a at the t-th iteration is determined as follows:
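The two update equations above translate directly into code. The small sketch below assumes values for the learning factors and takes the inertia weight and the local learning solution G_a(t), computed as in the marking procedure of claim 1, as inputs.

```python
import numpy as np

def update_particle(x, v, g_local, g_global, omega, c1=2.0, c2=2.0, rng=None):
    """One application of the claim-5 update:
    V_a(t+1) = omega_a(t) V_a(t) + c1 r1 (G_a(t) - X_a(t)) + c2 r2 (G(t) - X_a(t))
    X_a(t+1) = X_a(t) + V_a(t+1)
    c1, c2 and the inertia weight schedule are assumed values; g_local is the
    local learning solution G_a(t) from the reference-particle marking step."""
    rng = np.random.default_rng() if rng is None else rng
    r1, r2 = rng.random(), rng.random()
    v_next = omega * v + c1 * r1 * (g_local - x) + c2 * r2 * (g_global - x)
    return x + v_next, v_next
```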
selecting reference particles for determining a local learning solution from the particle swarm, wherein the selected reference particles mark particles which are not selected as the reference particles in the particle swarm, and the specific steps are as follows:
firstly, selecting the particle with the largest current fitness function value in the particle swarm as the first reference particle, denoted b_1(t); letting m_1(t) denote the number of particles to be marked by reference particle b_1(t), reference particle b_1(t) marks the m_1(t) particles nearest to it in the swarm as 1, the distance being the Euclidean distance between particles; from the particles in the swarm that are neither marked nor selected as reference particles, the particle with the largest fitness function value is then selected as the second reference particle, denoted b_2(t); letting m_2(t) denote the number of particles to be marked by reference particle b_2(t), reference particle b_2(t) marks as 2 the m_2(t) nearest particles that have not been selected as reference particles; reference particles continue to be selected from the swarm in this way, each selected reference particle marking particles that have not been selected as reference particles, and the selection of reference particles stops once every particle in the swarm has been marked by a reference particle;
let B(t) denote the set of reference particles selected at the t-th iteration, B(t) = {b_v(t), v = 1, 2, ..., m(B(t))}, where b_v(t) denotes the v-th selected reference particle, m(B(t)) denotes the number of reference particles selected at the t-th iteration, and m_v(t) denotes the number of particles to be marked by reference particle b_v(t); the value of m_v(t) is:
[Formula for m_v(t); given in the original only as equation image FDA0002955200930000062.]
where M denotes the given initial value of the number of particles to be marked by a reference particle and is smaller than the number of particles in the population, and η_v(t) denotes the optimization priority of reference particle b_v(t), given by:
[Formula for η_v(t); given in the original only as equation image FDA0002955200930000061.]
where f_v(t) denotes the fitness function value of reference particle b_v(t), f_o(t) denotes the fitness function value at the t-th iteration of the o-th particle nearest to reference particle b_v(t), f_max(t) denotes the maximum fitness function value of the population at the t-th iteration, and f_min(t) denotes the minimum fitness function value of the population at the t-th iteration;
let B_i(t) denote the set of reference particles in B(t) that have marked particle i, B_i(t) = {b_{i,c}(t), c = 1, ..., m(B_i(t))}, where b_{i,c}(t) denotes the c-th reference particle in B(t) that has marked particle i and m(B_i(t)) denotes the number of reference particles in B(t) that have marked particle i; defining χ(b_{i,c}(t)) as the reference value of reference particle b_{i,c}(t) for particle i, the value of χ(b_{i,c}(t)) is:
[Formula for χ(b_{i,c}(t)); given in the original only as equation image FDA0002955200930000071.]
where α(f_i(t), f_{i,c}(t)) is a conditional function, f_{i,c}(t) denotes the fitness function value of reference particle b_{i,c}(t), and f_i(t) denotes the fitness function value of particle i at the t-th iteration; when f_i(t) < f_{i,c}(t), α(f_i(t), f_{i,c}(t)) = 1, and when f_i(t) ≥ f_{i,c}(t), α(f_i(t), f_{i,c}(t)) = 0; η_{i,c}(t) denotes the optimization priority of reference particle b_{i,c}(t), X_{i,c}(t) denotes the position of reference particle b_{i,c}(t) at the t-th iteration, and l(X_i(t), X_{i,c}(t)) denotes the Euclidean distance between position X_i(t) and position X_{i,c}(t);
when particle i is selected as a reference particle at the t-th iteration, the local learning solution of particle i at the t-th iteration is G_i(t) = Best_i(t), where Best_i(t) denotes the historical optimal solution of particle i at the t-th iteration;
when particle i is not selected as a reference particle at the t-th iteration, let b_{i,A}(t) denote the reference particle in set B_i(t) with the greatest reference value for particle i, i.e.
b_{i,A}(t) = argmax { χ(b_{i,c}(t)) : b_{i,c}(t) ∈ B_i(t) }
and let b_{i,B}(t) denote the reference particle in set B_i(t) with the second largest reference value for particle i, i.e.
b_{i,B}(t) = argmax { χ(b_{i,c}(t)) : b_{i,c}(t) ∈ B_i(t) \ {b_{i,A}(t)} }
the local learning solution G_i(t) of particle i at the t-th iteration is then:
G_i(t) = r_3·X_{i,A}(t) + (1 − r_3)·X_{i,B}(t)
where r_3 denotes a random number between 0 and 1, X_{i,A}(t) denotes the position of reference particle b_{i,A}(t) at the t-th iteration, and X_{i,B}(t) denotes the position of reference particle b_{i,B}(t) at the t-th iteration.
6. The intelligent factory remote monitoring system based on 5G and algorithm as claimed in claim 5, wherein the computing unit (2022) is specifically configured for:
carrying out contrast expansion transformation preprocessing on an input image, and then calculating a gray level histogram of the image;
encoding the gray values of the image, and randomly generating an initial population of M particles;
calculating a fitness function value of the position of each particle according to an Otsu algorithm and a set fitness function;
updating the global optimal position according to the calculated fitness function values of the particle positions, and continuously updating the velocity and position of each individual in the population over successive iterations;
ending the optimization when the termination condition is met and outputting the global optimal solution; the global optimal solution finally found by the particle swarm is the optimal segmentation threshold.
7. The intelligent factory remote monitoring system based on 5G and algorithm as claimed in claim 6, wherein the fitness function of the particle swarm algorithm is the maximum between-class variance, and the encoding of the gray values of the image comprises: image gray levels ranging from 0 to 255 are encoded with an 8-bit binary code string.
