CN112422818A - Intelligent screen dropping remote detection method based on multivariate image fusion - Google Patents


Info

Publication number
CN112422818A
CN112422818A (application CN202011189819.9A; granted as CN112422818B)
Authority
CN
China
Prior art keywords
depth
image
camera
sieve plate
color
Prior art date
Legal status
Granted
Application number
CN202011189819.9A
Other languages
Chinese (zh)
Other versions
CN112422818B (en)
Inventor
彭晨
郑伟
杨明锦
王海宽
王玉龙
Current Assignee
University of Shanghai for Science and Technology
Original Assignee
University of Shanghai for Science and Technology
Priority date
Filing date
Publication date
Application filed by University of Shanghai for Science and Technology
Priority to CN202011189819.9A
Publication of CN112422818A
Application granted
Publication of CN112422818B
Legal status: Active
Anticipated expiration

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/66Remote control of cameras or camera parts, e.g. by remote control devices
    • H04N23/661Transmitting camera control signals through networks, e.g. control via the Internet
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/20Image enhancement or restoration using local operators
    • G06T5/30Erosion or dilatation, e.g. thinning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/70Denoising; Smoothing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0004Industrial image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/50Depth or shape recovery
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/90Determination of colour characteristics
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00Network arrangements or protocols for supporting network services or applications
    • H04L67/01Protocols
    • H04L67/02Protocols based on web technology, e.g. hypertext transfer protocol [HTTP]
    • H04L67/025Protocols based on web technology, e.g. hypertext transfer protocol [HTTP] for remote control or remote monitoring of applications
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00Network arrangements or protocols for supporting network services or applications
    • H04L67/14Session management
    • H04L67/141Setup of application sessions
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/67Focus control based on electronic image sensor signals
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80Camera processing pipelines; Components thereof
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N25/00Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N25/50Control of the SSIS exposure
    • H04N25/53Control of the integration time
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04WWIRELESS COMMUNICATION NETWORKS
    • H04W84/00Network topologies
    • H04W84/02Hierarchically pre-organised networks, e.g. paging networks, cellular networks, WLAN [Wireless Local Area Network] or WLL [Wireless Local Loop]
    • H04W84/10Small scale networks; Flat hierarchical networks
    • H04W84/12WLAN [Wireless Local Area Networks]

Landscapes

  • Engineering & Computer Science (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Quality & Reliability (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides an intelligent screen dropping remote detection method based on multivariate image fusion, comprising the following steps: binding a depth camera and a color camera and adjusting an ideal picture for the working environment; building a local area network to enable wireless communication between the depth camera and the industrial personal computer; building a Raspberry Pi video server in the wireless local area network, finally enabling wireless transmission between the color camera and the industrial personal computer; calibrating the pictures so that the views of the depth camera and the color camera coincide; preprocessing the depth data and converting the depth information into a three-channel BGR pseudo-color image; identifying and locating the sieve plate and transferring its coordinates to the depth map; and acquiring depth information of the coordinate area with the depth camera, judging from a depth-value change threshold whether the sieve plate has fallen off, and issuing an alarm and a record. The invention detects sieve-plate fall-off faults in real time without relying on manual inspection, improving the economic benefit and intelligence level of the coal preparation plant.

Description

Intelligent screen dropping remote detection method based on multivariate image fusion
Technical Field
The invention relates to the field of fault diagnosis of large mechanical equipment on industrial automated production lines, in particular to an intelligent screen dropping remote detection method based on multivariate image fusion.
Background
In industrial fields such as coal mining, metallurgy and energy, a medium-removal sieve is generally used for fine processing of products. During operation, sieve plates are evenly distributed over the surface of the medium-removal sieve, and the material on the sieve is sorted by vibration.
The sieve plates vibrate at high frequency and are fixed to the sieve machine only by snap fasteners. Under the violent vibration of the medium-removal sieve, a plate can shift in position or even fall off. If this is not discovered in time, a large amount of on-sieve material enters the undersize pipeline, causing production accidents such as blockage of the medium-removal system pipelines, which can paralyze the whole medium-removal system and halt production.
At present no online monitoring device diagnoses this fault in production; enterprises rely on traditional manual inspection, which has strong hysteresis and aggravates the severity of the problem.
The Shendong coal group uses a device that monitors the state of the undersize material in real time. It can detect sieve-plate faults indirectly, but its mechanical structure is complex and its detection lags; moreover, the scheme requires a relatively complex circuit, and in the harsh field environment, with the screen surface continuously sprayed with water and covered in dust, there are risks of electric leakage and electric shock.
The Shanxi anthracite mining group retrofitted the plates of the original vibrating screen with high-polymer polyurethane composite sieve plates, greatly improving wear resistance and extending service life to six times that of the original punched plates; however, this only improves the economy of use, and plate failure remains unavoidable and cannot be detected in real time.
Given the hysteresis and lack of real-time capability of existing sieve-plate fall-off detection, and considering the complex industrial field environment, designing a remote screen-fall detection system is particularly important.
Disclosure of Invention
The invention aims to provide an intelligent screen dropping remote detection method based on multivariate image fusion, so as to solve the strong detection hysteresis and poor real-time performance of the prior art.
In order to achieve the above object, the idea of the present invention is:
An intelligent screen dropping remote detection method based on multivariate image fusion: first, a depth camera acquires the sieve-plate depth image and joins the local area network through an Ethernet port, while a color camera acquires a color image and joins the network through the wireless network card of a video server; second, the industrial personal computer obtains the depth and color images wirelessly over the network, identifies and locates the sieve plate in the color image, and transfers the plate's position to the depth image; then, the depth camera acquires the depth information of the located plate area and monitors in real time whether the plate falls off or shifts; finally, when a fall-off is detected, an alarm signal is sent to the staff in time and the system's working log is updated. The key points of the invention are that the color camera identifies the sieve plate and sends its coordinate position, the depth camera obtains the depth information of that area, and fall-off faults are judged from changes in the depth value, while all devices sit on the same local area network, realizing wireless communication between them.
The method uses two cameras. One is a UVC driver-free color industrial camera: during normal operation of the medium-removal sieve the plate surface is covered with material and the plate's position cannot be obtained accurately, so the plate is identified and located before material is fed. The plate's edge is a regular rectangular frame of red polyurethane, so identification and positioning can rely on color features, and the UVC driver-free color industrial camera tolerates complex industrial field conditions, making it suitable for image acquisition. The other is a time-of-flight (TOF) depth camera: taking the depth between object and camera as the main characteristic, models of the plate in normal operation and in failure are established, the depth information under the two working conditions is analyzed, and a change threshold is set according to the height of the chute, from which the plate's working state is judged.
The color camera and the depth camera are bound together and hung vertically above the medium-removal screen system with both lenses facing the screen surface. They collect real-time pictures of the sieve plate, and the size and proportion of the shots are adjusted so that the plate lies within the picture. Meanwhile, the two cameras and the industrial personal computer sit on the same local area network, guaranteeing wireless communication between the devices.
According to the inventive concept, the invention adopts the following technical scheme:
an intelligent screen dropping remote detection method based on multivariate image fusion comprises the following operation steps:
step 1, binding a depth camera and a color camera, and determining the ideal height and the number of the cameras according to the range of a sieve plate shot by the two cameras and the signal intensity;
step 2, building a wireless local area network to realize wireless communication between the depth camera and the industrial personal computer;
step 3, building a Raspberry Pi video server in the wireless local area network, realizing wireless communication between the color camera and the industrial personal computer;
step 4, calibrating the picture: calibrating the pictures of the depth camera and the color camera to make the pictures uniform;
step 5, depth data preprocessing: filtering interference signals, and converting depth information into a pseudo color image;
step 6, recognizing and positioning the sieve plate by using a color camera, and determining the coordinates of the sieve plate;
step 7, acquiring depth information of the located area with the depth camera, setting a change threshold for the depth information, judging whether the sieve plate has failed, and making a record and an alarm prompt.
Preferably, the intelligent screen dropping remote detection method based on multivariate image fusion locates and diagnoses sieve-plate fall-off faults through the following steps:
Step 1: bind the depth camera and the color camera, hang each camera group above the medium-removal screen system, and determine the number of cameras required from the plate range covered by each lens and the signal strength;
Step 2: establish wireless communication between the industrial personal computer and the depth camera: the depth camera connects to a wireless router by network cable and communicates with the industrial personal computer via WiFi + Socket;
Step 3: build a Raspberry Pi video server in the wireless local area network to realize wireless communication between the color camera and the industrial personal computer: the Raspberry Pi video server streams the color camera's pictures to the industrial personal computer, finally realizing wireless transmission between them;
Step 4: picture calibration: calibrate the received pictures by combining physical alignment and software alignment. First, the bound depth camera and color camera face the same scene and shoot the plate's real-time picture for rough physical alignment; second, the shot pictures are cropped proportionally and the corresponding positions fine-tuned, finally achieving picture alignment;
Step 5: depth-data preprocessing: filter out pixels whose signal strength is too low or too high in the acquired depth data, and convert the depth information into a three-channel BGR pseudo-color image;
Step 6: sieve-plate identification and positioning: after the system receives the color image, an image-processing algorithm identifies the plate in the image, finds the pixel coordinates of its four vertices, and transfers those vertex coordinates to the depth map;
Step 7: fall-off detection: after receiving the plate coordinates from the color map, the depth map continuously measures the depth values within the coordinate area, obtains the distance between plate and camera, judges from a threshold set by the actual distance between the plate and the chute whether the plate has fallen off, and records all plate state information in a log.
Preferably, the specific steps of step 1 are:
step 1.1: binding a depth camera and a color camera, and placing the depth camera and the color camera in a working height range by combining the number of screen plates shot by a picture and the signal intensity of the picture; no screen plate is arranged at the edge of the picture, so that the picture calibration in the step 3 is convenient; when the number of over-exposure points in the depth image is large, the height of the camera is raised or the integration time of the depth camera is reduced to make up for the fact that the depth camera emits more light to the surface of the material;
step 1.2: installing a camera, and respectively adjusting the focal lengths of the depth camera and the color camera to clearly display the screen plate as far as possible;
step 1.3: finally determining the number of required equipment according to the steps so as to cover the whole medium removing screen surface;
preferably, the specific steps of step 2 are:
Step 2.1: realize communication between the industrial personal computer and the depth camera on the same local area network, using Socket communication between them. The industrial personal computer sends a connection request instruction to the depth camera; this instruction requests permission for the industrial personal computer to send commands and tells the depth camera to prepare the connection. On receiving it, the depth camera initializes, mainly executing the overexposure-point enabling and integration-time enabling instructions;
Step 2.2: after initialization, the depth camera reports its depth-data acquisition, signal-strength and bottom-chip-temperature instructions to the industrial personal computer, so that the industrial personal computer can acquire the camera's data normally;
Step 2.3: after receiving the depth camera's instructions, the industrial personal computer sends a depth-data acquisition instruction; the depth camera then begins acquiring data and streams the acquired data to the industrial personal computer. The industrial personal computer stores the raw data and parses it into a two-dimensional array; this exchange completes the three-way handshake that establishes communication;
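The Socket exchange of steps 2.1 to 2.3 can be sketched as follows. This is a minimal illustration, not the camera's real protocol: the 4-byte command b"GETD", the little-endian uint16 frame format, and the use of a stand-in thread playing the depth camera are all assumptions made so the sketch is self-contained; the 320 x 240 resolution is taken from step 5.3.

```python
import socket
import struct
import threading

FRAME_W, FRAME_H = 320, 240  # resolution taken from step 5.3

def fake_camera(conn):
    # Stand-in for the depth camera: wait for a request command, then
    # stream one frame of little-endian uint16 depth values and close.
    cmd = conn.recv(4)
    if cmd == b"GETD":  # hypothetical 'acquire depth data' instruction
        values = [i % 65536 for i in range(FRAME_W * FRAME_H)]
        conn.sendall(struct.pack("<%dH" % len(values), *values))
    conn.close()

def request_depth_frame(sock):
    # Industrial-PC side: send the acquisition instruction, read exactly
    # one frame, and parse the raw bytes into a two-dimensional array.
    sock.sendall(b"GETD")
    nbytes = FRAME_W * FRAME_H * 2
    buf = b""
    while len(buf) < nbytes:
        chunk = sock.recv(65536)
        if not chunk:
            raise ConnectionError("camera closed the stream early")
        buf += chunk
    flat = struct.unpack("<%dH" % (FRAME_W * FRAME_H), buf)
    return [list(flat[r * FRAME_W:(r + 1) * FRAME_W]) for r in range(FRAME_H)]

cam_end, pc_end = socket.socketpair()
t = threading.Thread(target=fake_camera, args=(cam_end,))
t.start()
depth = request_depth_frame(pc_end)  # rows x cols nested list of depths
t.join()
pc_end.close()
```

In the real system the industrial personal computer would connect over WiFi to the camera's TCP endpoint instead of a socketpair, but the framing logic (read until one full frame arrives, then unpack) is the same.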
preferably, the specific steps of step 3 are:
Step 3.1: the video server's main thread initializes a static global structure, global: the input and output of each component are defined first, each functional module is described as a unit, and link relations are established between the components;
Step 3.2: the video server's image-acquisition thread, cam_thread, contains a loop that runs as long as the user has not pressed Ctrl+C and the stop bit in the global variable global is still 0. The grab function uvcGrab blocks waiting for image data; after a frame of video is captured and processed, memcpy_picture copies the image to the global buffer and pthread_cond_broadcast notifies all client threads blocked waiting for data;
Step 3.3: the video server's image-output thread, server_thread, contains the same kind of loop as cam_thread, running as long as the user has not pressed Ctrl+C and the stop bit in global is still 0. server_thread runs a TCP server that listens for client requests; whenever a request arrives it creates a new client-handling thread dedicated to the HTTP requests from that client. The TCP server can therefore keep listening for requests, and since each client-handling thread serves only one IP address, the concurrency of the video server is improved;
Step 3.4: a client-handling thread blocks on a mutex and condition variable, waiting for the image in the global buffer global to be updated. Once the video-acquisition thread updates the image, the client thread unblocks, locks the global buffer, copies the image frame from it into the send buffer, and sends it to the client through the sprintf function;
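The buffer-sharing pattern of steps 3.2 to 3.4 (one acquisition thread updating a global buffer, client threads blocking on a condition variable until a new frame arrives) can be sketched in Python. The class and names below are illustrative stand-ins for the C structures of the actual server, not its code; a frame counter replaces pointer comparison so a client never reads the same frame twice.

```python
import threading
import time

class FrameBuffer:
    # Global buffer shared between one capture thread and many client
    # threads, mirroring the mutex + condition-variable pattern of step 3.4.
    def __init__(self):
        self.cond = threading.Condition()
        self.frame = None
        self.seq = 0       # frame counter, so clients detect updates
        self.stop = False  # the 'stop bit' in the global structure

    def publish(self, frame):
        # Capture side: like memcpy_picture + pthread_cond_broadcast.
        with self.cond:
            self.frame = frame
            self.seq += 1
            self.cond.notify_all()

    def wait_next(self, last_seq):
        # Client side: block until a frame newer than last_seq exists.
        with self.cond:
            while self.seq == last_seq and not self.stop:
                self.cond.wait()
            return self.seq, self.frame

buf = FrameBuffer()
received = []

def client():
    last = 0
    while last < 3:
        last, frame = buf.wait_next(last)
        received.append(frame)  # stand-in for copying to the send buffer

t = threading.Thread(target=client)
t.start()
for i in range(1, 4):
    buf.publish("frame-%d" % i)
    # give the client a chance to wake and copy before the next publish
    while len(received) < i:
        time.sleep(0.001)
t.join()
```

Each client thread blocks cheaply while no new frame exists, and a single `notify_all` wakes every waiting client at once, which is what gives the server its concurrency.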
preferably, the specific steps of step 4 are:
Step 4.1: with the depth camera and color camera bound together, the shot pictures are kept consistent in the horizontal and vertical directions;
Step 4.2: after step 4.1 the photographed plate pictures are still difficult to bring into exact coincidence; within a detection frame of fixed size, the pictures shot by the two cameras are proportionally scaled and their positions shifted, finally making the depth-camera and color-camera pictures coincide;
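A minimal sketch of the software half of the calibration in step 4.2: mapping a pixel coordinate in the color picture to the corresponding coordinate in the depth picture by a proportional scale plus a hand-tuned offset. The resolutions, scale factors and offsets below are assumed example values, not calibration data from the patent.

```python
# Assumed resolutions: the depth camera delivers 320x240 (step 5.3) and the
# color camera, say, 640x480. SCALE and OFFSET are illustrative calibration
# constants of the kind tuned by hand during fine adjustment.
COLOR_W, COLOR_H = 640, 480
DEPTH_W, DEPTH_H = 320, 240
SCALE_X = DEPTH_W / COLOR_W   # 0.5
SCALE_Y = DEPTH_H / COLOR_H   # 0.5
OFFSET_X, OFFSET_Y = 4, -2    # fine-adjustment shift found during alignment

def color_to_depth(x, y):
    # Map a color-picture pixel to the depth picture: scale, shift, clamp.
    dx = int(round(x * SCALE_X)) + OFFSET_X
    dy = int(round(y * SCALE_Y)) + OFFSET_Y
    dx = min(max(dx, 0), DEPTH_W - 1)
    dy = min(max(dy, 0), DEPTH_H - 1)
    return dx, dy

# Transfer a sieve-plate quadrangle's four vertices to the depth map,
# as step 6 requires.
corners = [(100, 80), (540, 80), (540, 400), (100, 400)]
depth_corners = [color_to_depth(x, y) for (x, y) in corners]
```

Once the two views coincide, transferring the plate's vertex coordinates (step 6) is just this per-vertex mapping.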
preferably, the specific steps of step 5 are:
Step 5.1: industrial environments are complex and the site produces many interference signals, so the raw depth data may contain invalid samples. Since the distance between the lens and the sieve plate lies within a known range, data above or below that range is filtered accordingly: values below the minimum depth are set to the minimum, and values above the maximum depth are set to the maximum;
Step 5.2: on an industrial site the depth values of adjacent areas generally do not jump but change slowly; besides the invalid samples, random interference may appear around individual pixels. To reduce this interference, the image is mean-filtered, suppressing impulse interference from accidental factors. Combined with step 5.1, the whole procedure amounts to limiting filtering plus mean filtering: each newly sampled value is first clamped and then sent into the queue for averaging, combining the advantages of both filters and eliminating the sampling deviation caused by occasional impulse interference. The mean filter is given by formula (1):
g_depth(x, y) = (1/m) * SUM over (i, j) in S of f_depth(i, j)   (1)

wherein f_depth(i, j) denotes the depth value at pixel (i, j) before filtering, g_depth(x, y) the filtered depth value at the current pixel (x, y), S the set of pixels covered by the filter template centered on (x, y), and m the total number of pixels in the template;
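Steps 5.1 and 5.2 combined (clamp to the working range, then mean-filter per formula (1)) might look like the following sketch on a toy 3x3 depth patch; the working range [100, 200] is an illustrative assumption, and at image borders the template shrinks so m counts only pixels inside the image.

```python
DMIN, DMAX = 100, 200  # assumed valid lens-to-plate depth range

def clamp(v):
    # Step 5.1: limiting filter - pull out-of-range samples to the bounds.
    return DMIN if v < DMIN else DMAX if v > DMAX else v

def mean_filter(img):
    # Step 5.2 / formula (1): 3x3 mean filter; m is the number of
    # template pixels actually inside the image at the border.
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            win = [img[j][i]
                   for j in range(max(0, y - 1), min(h, y + 2))
                   for i in range(max(0, x - 1), min(w, x + 2))]
            out[y][x] = sum(win) / len(win)
    return out

raw = [[150, 150, 150],
       [150, 999, 150],   # a spike from an interference signal
       [150, 150,  30]]   # an undersized invalid sample
clamped = [[clamp(v) for v in row] for row in raw]
smoothed = mean_filter(clamped)
```

Clamping first keeps a single spike from dragging the window average far from the true depth, which is exactly the benefit the limiting-plus-mean combination is claimed to give.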
Step 5.3: the depth information is converted into a 16-bit single-channel image of size 320 x 240, in which each pixel value represents depth; the 16-bit image is then converted into an 8-bit single-channel grayscale image;
Step 5.4: for a better display, the single-channel image is converted into an 8-bit BGR three-channel image. The conversion rule is that the red, green and blue channels each equal the depth value of the corresponding pixel in the grayscale image. The depth information is thus retained in the pseudo-color image to the maximum extent, and an object's depth can be read directly from it; the CV_8UC3 image still carries depth information. The conversion formula is:
R(i, j) = G(i, j) = B(i, j) = f[g_gray(i, j)]   (2)

wherein R(i, j), G(i, j), B(i, j) denote the values of the red, green and blue channels at row i, column j, and f[g_gray(i, j)] is the mapping that converts the CV_8UC1 grayscale image into the CV_8UC3 pseudo-color image;
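Steps 5.3 and 5.4 reduce to a quantization followed by channel replication per formula (2). A sketch, with a simple linear choice of the 16-bit-to-8-bit mapping (the patent does not fix this mapping, so the linear scaling is an assumption):

```python
DEPTH_MAX = 65535  # full range of the 16-bit single-channel image

def depth_to_gray(d16):
    # Step 5.3: one possible linear 16-bit -> 8-bit quantization.
    return (d16 * 255) // DEPTH_MAX

def gray_to_bgr(g):
    # Step 5.4 / formula (2): replicate the gray value into B, G, R,
    # so every channel of the pseudo-color image still carries depth.
    return (g, g, g)

row16 = [0, 32768, 65535]  # one row of a 16-bit depth image
row_bgr = [gray_to_bgr(depth_to_gray(d)) for d in row16]
```

Because all three channels are identical, the depth value can be recovered from any channel of the CV_8UC3 image, which is why the text says the pseudo-color image "still retains depth information".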
preferably, the specific steps of step 6 are:
Step 6.1: the received color picture is first binarized into an image A, and A is then dilated so that the white regions expand, which eases the subsequent contour search. Image dilation is defined as:

A (+) B = { z | (B^)_z intersect A is non-empty }

wherein A and B are sets in the two-dimensional integer space Z^2, B^ is the reflection of the structuring element B, and z its displacement. B is convolved over A: every pixel of A is scanned and the element's entries are combined with the binary image's pixels; if these are all 0 the target pixel is set to 0, otherwise to 1. In effect the maximum pixel value within the region covered by B replaces the value at the reference point, realizing dilation; the dilated result is an image C;
Step 6.2: contours are searched in image C, each found contour is approximated by a polygon, and quadrangles are selected from the approximations. The proportion of red inside each quadrangle is measured; if it exceeds 15%, the quadrangle is taken to be a sieve plate. When the plate's position has not moved for more than 3 seconds, its position is confirmed: its outline is drawn in the color image, the coordinates of its four vertices are recorded, its outline is drawn in the depth-camera image, and the corresponding plate area in the industrial-personal-computer interface turns from gray to green, indicating that the plate is ready;
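The dilation and the red-proportion test of steps 6.1 and 6.2 can be sketched as follows. The dilation uses a 3x3 structuring element (each output pixel is the maximum of its window, so a white pixel grows into its neighborhood); the per-pixel red classifier is an assumed stand-in, since the patent does not specify one, while the 15% threshold is the figure from step 6.2.

```python
def dilate(img):
    # Step 6.1: binary dilation with a 3x3 structuring element; the
    # window maximum replaces the reference point, so white regions grow.
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            out[y][x] = max(img[j][i]
                            for j in range(max(0, y - 1), min(h, y + 2))
                            for i in range(max(0, x - 1), min(w, x + 2)))
    return out

def red_ratio(bgr_region):
    # Step 6.2: fraction of pixels classified as red. The rule
    # (red channel bright and dominant) is an assumed classifier.
    pixels = [p for row in bgr_region for p in row]
    red = sum(1 for (b, g, r) in pixels if r > 150 and r > 2 * g and r > 2 * b)
    return red / len(pixels)

binary = [[0, 0, 0],
          [0, 1, 0],
          [0, 0, 0]]
grown = dilate(binary)  # the single white pixel fills the 3x3 patch

region = [[(0, 0, 200), (0, 0, 200)],   # red polyurethane frame pixels
          [(90, 90, 90), (0, 0, 200)]]  # one gray material pixel
is_sieve_plate = red_ratio(region) > 0.15
```

On a real frame the same red-ratio test would be applied inside each quadrangle returned by the contour-and-polygon approximation stage.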
preferably, the specific steps of step 7 are:
Step 7.1: from step 6.2 the position of the sieve plate in both the depth-camera and color-camera images is known. The depth camera continuously acquires the depth values within that area; when the depth value changes by more than 2, the plate is judged to have fallen off, and in all other cases it is judged to be working normally;
Step 7.2: when the sieve plate falls off, its outline in the depth-camera picture changes from black to red, the corresponding plate area in the industrial-personal-computer interface changes from green to red, an alarm sounds to alert the staff, and the state information is recorded in the log.
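Step 7.1's decision rule (a depth change greater than 2 means fall-off) reduces to a threshold on the region's depth statistics. A sketch, with illustrative baseline and readings; aggregating the region by its mean is an assumption, as the patent does not say how the depth values in the area are combined.

```python
FALL_THRESHOLD = 2  # the change threshold of 2 given in step 7.1

def plate_state(baseline, current_depths):
    # Return 'fault' if the region's mean depth moved more than the
    # threshold away from the baseline, else 'normal'.
    mean = sum(current_depths) / len(current_depths)
    return "fault" if abs(mean - baseline) > FALL_THRESHOLD else "normal"

baseline = 150.0  # mean plate depth learned when the plate was 'ready'
log = []
for frame in ([150, 151, 149, 150],   # plate in place
              [150, 150, 150, 151],
              [158, 157, 159, 158]):  # plate fell: chute floor now visible
    log.append(plate_state(baseline, frame))  # mirrors the system log
```

In the real system a "fault" result is what flips the interface area from green to red, triggers the alarm sound, and writes the log entry of step 7.2.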
Compared with the prior art, the invention has the following prominent substantive features and remarkable advantages:
1. The color camera identifies the sieve plate and sends its coordinate position, the depth camera obtains the depth information of that area, and fall-off faults are judged from changes in the depth value; all devices sit on the same local area network, realizing wireless communication between them;
2. The invention detects sieve-plate fall-off faults in real time without relying on manual inspection, improving the economic benefit and intelligence level of the coal preparation plant.
Drawings
FIG. 1 is a general flow diagram of the present invention.
Fig. 2 shows the sieve-plate dilation diagram and identification diagram of the invention. Fig. 2(a) is the dilation diagram of the sieve plate; fig. 2(b) is the identification diagram of the sieve plate.
Fig. 3 is an interface diagram of the screen plate of the present invention in normal operation.
Fig. 4 is an interface diagram when the screen plate is out of order in the present invention.
Detailed Description
The invention is described in further detail below with reference to figures 1-4.
Example one
Referring to fig. 1, an intelligent screen dropping remote detection method based on multivariate image fusion comprises the following operation steps:
step 1, binding a depth camera and a color camera, and determining the ideal height and the number of the cameras according to the range of a sieve plate shot by the two cameras and the signal intensity;
step 2, building a wireless local area network to realize wireless communication between the depth camera and the industrial personal computer;
step 3, building a Raspberry Pi video server in the wireless local area network, realizing wireless communication between the color camera and the industrial personal computer;
step 4, calibrating the picture: calibrating the pictures of the depth camera and the color camera to make the pictures uniform;
step 5, depth data preprocessing: filtering interference signals, and converting depth information into a pseudo color image;
step 6, recognizing and positioning the sieve plate by using a color camera, and determining the coordinates of the sieve plate;
step 7, acquiring depth information of the located area with the depth camera, setting a change threshold for the depth information, judging whether the sieve plate has failed, and making a record and an alarm prompt.
In this method, the color camera identifies the sieve plate and sends its coordinate position, the depth camera obtains the depth information of that area, and fall-off faults are judged from the change in depth value; with all devices on the same local area network, wireless communication between them is realized.
Example two
This embodiment is substantially the same as the first embodiment, and is characterized in that:
in this embodiment, referring to fig. 1, an intelligent screen dropping remote detection method based on multivariate image fusion, which realizes the positioning and diagnosis of the screen plate dropping fault, includes the following steps:
step 1: binding a depth camera and a color camera, hanging a group of cameras above a medium removing screen system, and determining the number of the required cameras according to the screen plate range shot by a camera lens and the signal intensity;
step 2: realizing wireless communication between the industrial personal computer and the depth camera: the depth camera is connected to a wireless router through a network cable, and wireless communication with the industrial personal computer is realized in a WiFi + Socket mode;
step 3: building a Raspberry Pi video server under the wireless local area network to realize wireless communication between the color camera and the industrial personal computer: a video server is built on a Raspberry Pi so that the images of the color camera are transmitted to the industrial personal computer in a streaming mode, finally realizing wireless transmission between the color camera and the industrial personal computer;
step 4: frame calibration: calibrating the received frames by combining physical alignment and software alignment; first, the bound depth camera and color camera are aimed directly at the same scene and the real-time picture of the sieve plate is shot for rough physical alignment; second, the shot picture is cropped proportionally and the corresponding position is finely adjusted, finally achieving frame alignment;
step 5: depth data preprocessing: filtering out pixels whose signal intensity is too low or too high in the acquired depth data, and converting the depth information into a three-channel BGR pseudo-color image;
step 6: identifying and positioning the sieve plate: after the system receives the color image, the sieve plate in the image is identified through an image processing algorithm, the pixel coordinates of the four vertexes of the sieve plate are found, and the vertex coordinates are then transmitted to the depth map.
step 7: detecting sieve plate drop-off: after receiving the sieve plate coordinates transmitted from the color map, the depth map continuously detects the depth values in the coordinate area to obtain the distance between the sieve plate and the camera, judges whether the sieve plate has a drop-off fault according to a threshold set from the actual distance between the sieve plate and the chute, and records all state information of the sieve plate in a log.
The method realizes the positioning and diagnosis of the sieve plate drop-off fault. A group of cameras is suspended above the medium-removing sieve system; the depth camera is connected to a wireless router with a network cable, and wireless communication with the industrial personal computer is realized in a WiFi + Socket mode; a video server is built on a Raspberry Pi so that the images of the color camera are transmitted to the industrial personal computer in a streaming mode, finally realizing wireless transmission between the color camera and the industrial personal computer; the received frames are calibrated by combining physical alignment and software alignment, and sieve plate drop-off is detected. The method identifies the sieve plate with the color camera, sends its coordinate position, obtains depth information of that area with the depth camera, and judges whether the sieve plate has a drop-off fault according to the change of the depth value; wireless communication among the devices is realized by placing all of them under the same local area network. The problems of strong detection hysteresis and low real-time performance in the prior art are effectively solved.
Example three
This embodiment is substantially the same as the above embodiment, and is characterized in that:
In this embodiment, an intelligent screen dropping remote detection method based on multivariate image fusion is disclosed. As shown in fig. 1, a depth image of the sieve plate is obtained by a depth camera and accessed to the local area network via an ethernet port, while a color camera obtains a color image and accesses the local area network via the wireless network card of the video server; next, the industrial personal computer wirelessly obtains the depth image and the color image through the local area network, identifies and positions the sieve plate using the color image, and transmits the position of the sieve plate to the depth image; then, the depth information of the sieve plate in the positioned area is acquired with the depth camera to monitor in real time whether the sieve plate has fallen or shifted; finally, when a fault is detected, an alarm signal is sent to the staff in time and the working log of the system is updated. The intelligent screen dropping remote detection method based on multivariate image fusion comprises the following steps:
step 1: binding a depth camera and a color camera, and adjusting an ideal working picture;
step 1.1: binding a depth camera and a color camera, and placing them in an ideal working height range by combining the number of sieve plates in the picture with the signal intensity of the picture; no sieve plate should lie at the edge of the picture, which facilitates the frame calibration in step 4; when there are many over-exposure points in the depth image, because the depth camera emits more light onto the material surface, the camera height is raised appropriately or the integration time of the depth camera is reduced appropriately to compensate;
step 1.2: installing a camera, and respectively adjusting the focal lengths of the depth camera and the color camera to clearly display the screen plate as far as possible;
step 1.3: finally determining the number of required equipment according to the steps so as to cover the whole medium removing screen surface;
step 2: building a local area network to realize wireless communication between the depth camera and the industrial personal computer;
step 2.1: realizing communication between the industrial personal computer and the depth camera under the same local area network, with Socket communication adopted between them; the industrial personal computer first sends a connection request to the depth camera; this instruction completes the industrial personal computer's request to issue commands to the depth camera and at the same time notifies the depth camera to prepare for connection; after receiving the instruction, the depth camera initializes, which mainly includes an overexposure-point enabling instruction and an integration-time enabling instruction;
step 2.2: after the depth camera is initialized, it sends depth data acquisition, signal intensity and bottom-chip temperature instructions to the industrial personal computer, enabling the industrial personal computer to acquire the depth camera's data;
step 2.3: after receiving the instruction from the depth camera, the industrial personal computer sends a depth data acquisition instruction; the depth camera then starts to acquire data and sends the acquired data to the industrial personal computer in a streaming mode; the industrial personal computer stores the original data, parses it into a two-dimensional array, and finally completes the three-way handshake to realize communication;
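The request/acknowledge/stream exchange of steps 2.1 to 2.3 can be sketched as below. This is a minimal illustration only: the command bytes, the "ACK" reply, the port, and the raw uint16 frame layout are all assumptions, since the actual protocol of the depth camera is not disclosed; a loopback thread stands in for the camera.

```python
import socket
import threading

import numpy as np

CMD_CONNECT = b"CONN"      # assumed "connection request" command
CMD_GET_DEPTH = b"DPTH"    # assumed "depth data acquisition" command
FRAME_SHAPE = (240, 320)   # 320 x 240 depth map, as stated in step 5.3

def fake_depth_camera(server_sock):
    """Stand-in for the depth camera: answers commands on a loopback socket."""
    conn, _ = server_sock.accept()
    with conn:
        if conn.recv(4) == CMD_CONNECT:
            conn.sendall(b"ACK!")  # camera reports it has initialized
        if conn.recv(4) == CMD_GET_DEPTH:
            frame = np.full(FRAME_SHAPE, 1500, dtype=np.uint16)  # fake depth (mm)
            conn.sendall(frame.tobytes())  # stream raw data to the IPC

def acquire_depth_frame(host, port):
    """Industrial-PC side: request one frame, parse it into a 2-D array (step 2.3)."""
    with socket.create_connection((host, port)) as s:
        s.sendall(CMD_CONNECT)
        assert s.recv(4) == b"ACK!"
        s.sendall(CMD_GET_DEPTH)
        expected = FRAME_SHAPE[0] * FRAME_SHAPE[1] * 2  # uint16 pixels
        raw = b""
        while len(raw) < expected:
            chunk = s.recv(65536)
            if not chunk:
                break
            raw += chunk
        return np.frombuffer(raw, dtype=np.uint16).reshape(FRAME_SHAPE)

# Wire the two ends together over loopback.
server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen(1)
port = server.getsockname()[1]
threading.Thread(target=fake_depth_camera, args=(server,), daemon=True).start()

depth = acquire_depth_frame("127.0.0.1", port)
print(depth.shape)
```

In a deployment the industrial personal computer would keep this request loop running per camera, which matches the streaming acquisition described above.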
step 3: building a Raspberry Pi video server under the wireless local area network, finally realizing wireless transmission between the color camera and the industrial personal computer;
step 3.1: a main thread of a video server initializes a static global structure global, specifically, input and output of each component are defined at first, description is carried out by taking a functional module as a unit, and a link relation is established among the components;
step 3.2: the video server image acquisition thread (cam_thread) contains a loop that continues as long as the user does not press Ctrl+C and the stop bit in the global variable global is still 0; the grab function uvcGrab waits for image data in a blocking mode; after a frame of video is captured and processed, the function memcpy_picture copies the image to the global buffer, and the function pthread_cond_broadcast notifies all client threads blocked waiting for the data;
step 3.3: the video server image output thread (server_thread), like cam_thread, contains a loop that continues as long as the user does not press Ctrl+C and the stop bit in the global variable global is still 0; server_thread runs a TCP server whose thread is responsible for monitoring client requests; once a request arrives, a new client processing thread is created that is dedicated to the HTTP requests from that client; in this way the TCP server can monitor client requests at all times, and since one client processing thread responds only to requests from one IP address, the concurrency of the video server is improved;
step 3.4: the client processing thread blocks waiting for an image update in the global buffer (global) by means of a mutex lock and a condition variable; once the video acquisition thread updates the image in the global buffer (global), the client processing thread releases the block, locks the global buffer, copies the image frame from the global buffer to the sending buffer, and sends it to the client through the sprintf function;
and 4, step 4: picture calibration, so that the pictures of the depth camera and the color camera are consistent;
step 4.1: after the depth camera and the color camera are bound together, the shot pictures are kept consistent in the horizontal direction and the vertical direction;
step 4.2: after step 4.1, the shot sieve plate pictures are still difficult to keep coincident; by fixing the detected frame size, the pictures shot by the two cameras are proportionally scaled within the frame and their positions shifted, finally achieving coincidence of the depth camera and color camera frames;
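The software alignment of step 4.2 amounts to resampling one camera's frame with a fixed scale and offset so that it coincides with the other. A minimal nearest-neighbor sketch follows; the scale and offset values are assumptions that would be tuned once by hand for the fixed camera rig:

```python
import numpy as np

def align(frame, scale, dx, dy, out_shape):
    """Map each output pixel back to the source frame: proportional scaling
    by `scale` plus a shift of (dx, dy), with nearest-neighbor sampling."""
    h, w = out_shape
    ys = ((np.arange(h) - dy) / scale).round().astype(int)
    xs = ((np.arange(w) - dx) / scale).round().astype(int)
    ys = np.clip(ys, 0, frame.shape[0] - 1)  # clamp to the source frame
    xs = np.clip(xs, 0, frame.shape[1] - 1)
    return frame[ys[:, None], xs[None, :]]

color = np.arange(100, dtype=np.uint8).reshape(10, 10)
# Illustrative calibration: magnify 2x and shift by one pixel in each axis.
aligned = align(color, scale=2.0, dx=1, dy=1, out_shape=(10, 10))
print(aligned.shape)
```

In practice the same fixed (scale, dx, dy) would be applied to every frame once the two cameras are rigidly bound, which is why a one-time manual fine adjustment suffices.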
and 5: the method comprises the steps of preprocessing depth data, and converting depth information into a three-channel BGR pseudo color image;
step 5.1: in a complex industrial environment there are many interference signals on site, and the original depth data may contain invalid data; since the distance between the lens and the sieve plate lies within a certain range, data above or below this range is filtered accordingly: data smaller than the minimum depth is set to the minimum value, and data larger than the maximum depth is set to the maximum value;
step 5.2: in an industrial field, the depth data of adjacent areas generally does not jump but changes slowly; besides invalid interference signals, random interference signals may exist around pixel points; to reduce this interference, the image is filtered with mean filtering, overcoming the pulse interference caused by accidental factors; combined with step 5.1, the whole processing method is equivalent to a 'limiting filter' plus 'mean filter': each newly sampled datum is first limited in amplitude and then sent to a queue for mean filtering; this combines the advantages of the two filtering methods and eliminates the sampling deviation caused by accidental pulse interference; the formula of the mean filtering is (1):
g_depth(x, y) = (1/m) · Σ_{(i, j) ∈ S} f_depth(i, j)   (1)
where f_depth(i, j) is the original depth value at pixel (i, j), g_depth(x, y) is the filtered depth value at pixel (x, y), S is the filter template centered on (x, y), and m is the total number of pixels in the template;
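A minimal numpy sketch of this limiting-plus-mean filtering follows. The valid depth range (800–2000 mm) and the 3 × 3 template are illustrative assumptions; the actual limits come from the lens-to-sieve-plate distance on site:

```python
import numpy as np

DEPTH_MIN, DEPTH_MAX = 800, 2000   # assumed valid lens-to-plate range (mm)

def limit_and_mean_filter(depth, k=3):
    """Clamp out-of-range depths (step 5.1), then apply a k x k mean filter
    per equation (1); m = k*k is the number of pixels in the template."""
    d = np.clip(depth.astype(np.float64), DEPTH_MIN, DEPTH_MAX)
    pad = k // 2
    padded = np.pad(d, pad, mode="edge")
    out = np.zeros_like(d)
    # Accumulate the k x k shifted copies, then divide by m.
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + d.shape[0], dx:dx + d.shape[1]]
    return out / (k * k)

noisy = np.full((6, 6), 1500.0)
noisy[3, 3] = 9999.0               # impulse interference, far beyond DEPTH_MAX
smooth = limit_and_mean_filter(noisy)
print(round(float(smooth[3, 3]), 2))
```

The impulse is first clamped to DEPTH_MAX and then averaged away, which is exactly the "limiting filter + mean filter" combination described above.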
step 5.3: converting the depth information into a 16-bit single-channel image, wherein the size of the image is 320 x 240, and the value of a pixel point represents the depth information of the image; then converting the 16-bit image into a gray scale image of 8-bit single channel;
step 5.4: for a better image display effect, the single-channel image is converted into an 8-bit BGR three-channel image; the conversion rule is that the red, green and blue channels all equal the depth value of the corresponding pixel in the gray-scale image; in this way the depth information is retained on the pseudo-color image to the maximum extent, and the depth data of an object can be read directly from the pseudo-color image; the CV_8UC3 image thus still retains the depth information, and the conversion formula is:
R(i, j) = G(i, j) = B(i, j) = f[g_gray(i, j)]   (2)
where R(i, j), G(i, j) and B(i, j) represent the values of the red, green and blue channels at row i, column j, respectively, and f[g_gray(i, j)] converts the CV_8UC1 gray-scale image into the CV_8UC3 pseudo-color image;
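Steps 5.3 and 5.4 can be sketched with numpy as below: a 16-bit single-channel 320 × 240 depth image is reduced to 8 bits and replicated into three channels per equation (2). Normalizing by the maximum depth is an assumed choice of f[·]; the patent does not fix the mapping.

```python
import numpy as np

def depth_to_pseudo_color(depth16):
    """16-bit depth -> 8-bit gray (step 5.3) -> 3-channel BGR with
    B = G = R = gray (step 5.4), so depth is preserved in every channel."""
    gray8 = (depth16.astype(np.float64) / depth16.max() * 255).astype(np.uint8)
    return np.dstack([gray8, gray8, gray8])  # B, G, R all equal gray8

depth16 = np.arange(240 * 320, dtype=np.uint16).reshape(240, 320)
bgr = depth_to_pseudo_color(depth16)
print(bgr.shape, bgr.dtype)
```

Because all three channels are identical, the depth value can be read back from any single channel of the pseudo-color image, as the text notes.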
step 6: identifying and positioning the sieve plate, and transmitting the coordinates to the depth map;
step 6.1: the received color picture is first binarized into a set A, and A is then dilated so that the white part expands, facilitating the subsequent contour search; the formula of image dilation is:
A ⊕ B = { z | (B̂)_z ∩ A ≠ ∅ }   (3)
This formula represents the dilation of the set A by a set B, where A and B are sets in the two-dimensional integer space Z², B̂ is the reflection of the set B, and z is the displacement of B; B is convolved with A by scanning the pixels in A and performing an 'AND' operation between the structuring-element values and the binary image values: if they are all 0 the target pixel is 0, otherwise it is 1; in this way the maximum pixel value within the region covered by B is computed, and the pixel value of the reference point is replaced by this maximum to realize dilation; the dilated image is C, as shown in fig. 2(a);
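The dilation of step 6.1 (replace each reference pixel with the maximum over the region covered by the structuring element B) can be sketched without OpenCV as follows; the 3 × 3 square structuring element is an illustrative assumption:

```python
import numpy as np

def dilate(binary, k=3):
    """Binary dilation with a k x k square structuring element: each output
    pixel is the maximum of its k x k neighborhood, so white regions grow."""
    pad = k // 2
    padded = np.pad(binary, pad, mode="constant")  # zero border
    h, w = binary.shape
    out = np.zeros_like(binary)
    for y in range(h):
        for x in range(w):
            # replace the reference pixel with the neighborhood maximum
            out[y, x] = padded[y:y + k, x:x + k].max()
    return out

A = np.zeros((5, 5), dtype=np.uint8)
A[2, 2] = 1                        # a single white pixel
C = dilate(A)                      # grows into a 3 x 3 white block
print(int(C.sum()))
```

In a real pipeline one would use `cv2.dilate` for speed; this loop version just makes the neighborhood-maximum rule of equation (3) explicit.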
step 6.2: the contours in the image C are searched, the found contours are approximated by polygons, and quadrangles are found among the approximated polygons; the proportion of red within each quadrangle is detected, and a quadrangle whose red proportion exceeds 15% is determined to be a sieve plate; when the position of the sieve plate does not move for more than 3 seconds, as shown in fig. 2(b), the position of the sieve plate is determined, its outline is drawn in the color map, the coordinates of its four vertexes are recorded, its outline is drawn in the depth camera picture, and the color of the corresponding sieve plate area in the industrial personal computer interface changes from gray to green, indicating that the sieve plate is ready;
step 7: acquiring depth information of the positioned area with the depth camera, judging whether the sieve plate has fallen off according to the change threshold of the depth value, and giving an alarm and making a record;
step 7.1: from step 6.2, the position information of the sieve plate in the depth camera image and the color camera image is now known; the depth camera continuously acquires the depth values within the position area; when the range of change of the depth value is larger than 2, the sieve plate is determined to have a drop-off fault, and all other conditions are determined to be the normal working state, as shown in fig. 3;
step 7.2: when the sieve plate falls off, its outline in the depth camera picture changes from black to red, the color of the corresponding sieve plate area in the industrial personal computer interface changes from green to red, an alarm sound is emitted to remind the staff to handle it, and the state information is recorded in a log, as shown in fig. 4.
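The decision rule of step 7.1 can be sketched as a comparison of the mean depth inside the located region against a baseline. The region box, the baseline value, and taking the mean (rather than, say, per-pixel checks) are illustrative assumptions; the threshold of 2 follows the text (its depth unit is not specified in the source):

```python
import numpy as np

FAULT_THRESHOLD = 2  # change threshold from step 7.1

def check_plate(depth, region, baseline):
    """region = (y0, y1, x0, x1) box from the step 6.2 vertexes.
    Returns True when the mean depth drifts beyond the threshold (fault)."""
    y0, y1, x0, x1 = region
    current = depth[y0:y1, x0:x1].mean()
    return bool(abs(current - baseline) > FAULT_THRESHOLD)

depth_ok = np.full((240, 320), 1500.0)
depth_bad = depth_ok.copy()
depth_bad[100:120, 100:140] += 50.0   # plate fell: region is now farther away

region = (100, 120, 100, 140)
print(check_plate(depth_ok, region, 1500.0),
      check_plate(depth_bad, region, 1500.0))
```

A True result would trigger the color change, alarm sound, and log entry described in step 7.2.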
The steps 1 to 7 finish the diagnosis of the screen plate falling fault, and simultaneously determine the position of the fault point.
The intelligent screen dropping remote detection method based on multivariate image fusion in this embodiment binds a depth camera and a color camera and adjusts an ideal picture according to the working environment; builds a local area network to realize wireless communication between the depth camera and the industrial personal computer; builds a Raspberry Pi video server under the wireless local area network, finally realizing wireless transmission between the color camera and the industrial personal computer; calibrates the frames so that the pictures of the depth camera and the color camera are unified; preprocesses the depth data, converting the depth information into a three-channel BGR pseudo-color image; identifies and positions the sieve plate and transmits its coordinates to the depth map; and acquires depth information of the coordinate area with the depth camera, judges whether the sieve plate has fallen off according to the change threshold of the depth value, and gives an alarm and makes a record. The invention can detect sieve plate drop-off faults in real time without depending on manpower, improving the economic benefit and intelligence level of the coal preparation plant.
Finally, it should be noted that: the above embodiments are only used to illustrate the technical solution of the present invention, and not to limit the same; while the invention has been described in detail and with reference to the foregoing embodiments, it will be understood by those skilled in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present invention.

Claims (8)

1. An intelligent screen dropping remote detection method based on multivariate image fusion is characterized by comprising the following operation steps:
step 1, binding a depth camera and a color camera, and determining the ideal height and the number of the cameras according to the range of a sieve plate shot by the two cameras and the signal intensity;
step 2, building a wireless local area network to realize wireless communication between the depth camera and the industrial personal computer;
step 3, building a Raspberry Pi video server under the wireless local area network to realize wireless communication between the color camera and the industrial personal computer;
step 4, calibrating the picture: calibrating the pictures of the depth camera and the color camera to make the pictures uniform;
step 5, depth data preprocessing: filtering interference signals, and converting depth information into a pseudo color image;
step 6, recognizing and positioning the sieve plate by using a color camera, and determining the coordinates of the sieve plate;
step 7, acquiring depth information of the positioned area with the depth camera, setting a change threshold for the depth information, judging whether the sieve plate has failed, and making a record and an alarm prompt.
2. The intelligent screen dropping remote detection method based on multivariate image fusion as claimed in claim 1, wherein the specific operation steps of step 1 are as follows:
step 1.1: binding a depth camera and a color camera, and placing them within the working height range by combining the number of sieve plates in the picture with the signal intensity of the picture; no sieve plate should lie at the edge of the picture, which facilitates the frame calibration in step 4; when there are many over-exposure points in the depth image, because the depth camera emits more light onto the material surface, the camera height is raised or the integration time of the depth camera is reduced to compensate;
step 1.2: installing a camera, and respectively adjusting the focal lengths of the depth camera and the color camera to clearly display the screen plate;
step 1.3: and finally determining the number of required devices according to the steps so as to cover the whole medium removing screen surface.
3. The intelligent screen dropping remote detection method based on multivariate image fusion as claimed in claim 1, wherein the specific operation steps of step 2 are as follows:
step 2.1: realizing communication between the industrial personal computer and the depth camera under the same local area network, with Socket communication adopted between them; the industrial personal computer first sends a connection request to the depth camera; this instruction completes the industrial personal computer's request to issue commands to the depth camera and at the same time notifies the depth camera to prepare for connection; after receiving the instruction, the depth camera initializes, which mainly includes an overexposure-point enabling instruction and an integration-time enabling instruction;
step 2.2: after the depth camera is initialized, it sends depth data acquisition, signal intensity and bottom-chip temperature instructions to the industrial personal computer, enabling the industrial personal computer to acquire the depth camera's data;
step 2.3: after receiving the instruction from the depth camera, the industrial personal computer sends a depth data acquisition instruction; the depth camera then starts to acquire data and sends the acquired data to the industrial personal computer in a streaming mode; the industrial personal computer stores the original data, parses it into a two-dimensional array, and finally completes the three-way handshake to realize communication.
4. The intelligent screen dropping remote detection method based on multivariate image fusion as claimed in claim 1, wherein the specific operation steps of step 3 are as follows:
step 3.1: a main thread of a video server initializes a static global structure global, specifically, input and output of each component are defined at first, description is carried out by taking a functional module as a unit, and a link relation is established among the components;
step 3.2: the video server image acquisition thread, namely cam_thread, contains a loop that continues as long as the user does not press Ctrl+C and the stop bit in the global variable global is still 0; the grab function uvcGrab waits for image data in a blocking mode; after a frame of video is captured and processed, the function memcpy_picture copies the image to the global buffer, and the function pthread_cond_broadcast notifies all client threads blocked waiting for the data;
step 3.3: the video server image output thread, namely server_thread, like cam_thread contains a loop that continues as long as the user does not press Ctrl+C and the stop bit in the global variable global is still 0; server_thread runs a TCP server whose thread is responsible for monitoring client requests; once a request arrives, a new client processing thread is created that is dedicated to the HTTP requests from that client; in this way the TCP server can monitor client requests at all times, and since one client processing thread responds only to requests from one IP address, the concurrency of the video server is improved;
step 3.4: the client processing thread blocks waiting for an image update in the global buffer global by means of a mutex lock and a condition variable; once the video acquisition thread updates the image in the global buffer global, the client processing thread releases the block, locks the global buffer, copies the image frame from the global buffer to the sending buffer, and sends it to the client through the sprintf function.
5. The intelligent screen dropping remote detection method based on multivariate image fusion as claimed in claim 1, wherein the specific operation steps of step 4 are as follows:
step 4.1: after the depth camera and the color camera are bound together, the shot pictures are kept consistent in the horizontal direction and the vertical direction;
step 4.2: after step 4.1, the shot sieve plate pictures are still difficult to keep coincident; by fixing the detected frame size, the pictures shot by the two cameras are proportionally scaled within the frame and their positions shifted, finally achieving coincidence of the depth camera and color camera frames.
6. The intelligent screen dropping remote detection method based on multivariate image fusion as claimed in claim 1, wherein the specific operation steps of step 5 are as follows:
step 5.1: in a complex industrial environment there are many interference signals on site, and the original depth data may contain invalid data; since the distance between the lens and the sieve plate lies within a certain range, data above or below this range is filtered accordingly: data smaller than the minimum depth is set to the minimum value, and data larger than the maximum depth is set to the maximum value;
step 5.2: in an industrial field, the depth data of adjacent areas generally does not jump but changes slowly; besides invalid interference signals, random interference signals may exist around pixel points; to reduce this interference, the image is filtered with mean filtering, overcoming the pulse interference caused by accidental factors; combined with step 5.1, the whole processing method is equivalent to a limiting filter plus a mean filter: each newly sampled datum is first limited in amplitude and then sent to a queue for mean filtering; the two filtering methods are fused, eliminating the sampling deviation caused by accidental pulse interference; the formula of the mean filtering is (1):
g_depth(x, y) = (1/m) · Σ_{(i, j) ∈ S} f_depth(i, j)   (1)
where f_depth(i, j) is the original depth value at pixel (i, j), g_depth(x, y) is the filtered depth value at pixel (x, y), S is the filter template centered on (x, y), and m is the total number of pixels in the template;
step 5.3: converting the depth information into a 16-bit single-channel image, wherein the size of the image is 320 x 240, and the value of a pixel point represents the depth information of the image; then converting the 16-bit image into a gray scale image of 8-bit single channel;
step 5.4: for a better image display effect, the single-channel image is converted into an 8-bit BGR three-channel image; the conversion rule is that the red, green and blue channels all equal the depth value of the corresponding pixel in the gray-scale image; in this way the depth information is retained on the pseudo-color image to the maximum extent, and the depth data of an object can be read directly from the pseudo-color image; the CV_8UC3 image thus still retains the depth information, and the conversion formula is:
R(i, j) = G(i, j) = B(i, j) = f[g_gray(i, j)]   (2)
where R(i, j), G(i, j) and B(i, j) represent the values of the red, green and blue channels at row i, column j, respectively, and f[g_gray(i, j)] converts the CV_8UC1 gray-scale image into the CV_8UC3 pseudo-color image.
7. The intelligent screen dropping remote detection method based on multivariate image fusion as claimed in claim 1, wherein the specific operation steps of step 6 are as follows:
step 6.1: the received color picture is first binarized into a set A, and A is then dilated so that the white part expands, facilitating the subsequent contour search; the formula of image dilation is:
A ⊕ B = { z | (B̂)_z ∩ A ≠ ∅ }   (3)
This formula represents the dilation of the set A by a set B, where A and B are sets in the two-dimensional integer space Z², B̂ is the reflection of the set B, and z is the displacement of B; B is convolved with A by scanning the pixels in A and performing an 'AND' operation between the structuring-element values and the binary image values: if they are all 0 the target pixel is 0, otherwise it is 1; in this way the maximum pixel value within the region covered by B is computed, and the pixel value of the reference point is replaced by this maximum to realize dilation; the dilated image is C;
step 6.2: the contours in the image C are searched, the found contours are approximated by polygons, and quadrangles are found among the approximated polygons; the proportion of red within each quadrangle is detected, and a quadrangle whose red proportion exceeds 15% is determined to be a sieve plate; when the position of the sieve plate does not move for more than 3 seconds, the position of the sieve plate is determined, its outline is drawn in the color map, the coordinates of its four vertexes are recorded, its outline is drawn in the depth camera picture, and the color of the corresponding sieve plate area in the industrial personal computer interface changes from gray to green, indicating that the sieve plate is ready.
8. The intelligent screen dropping remote detection method based on multivariate image fusion as claimed in claim 1, wherein the specific operation steps of step 7 are as follows:
step 7.1: from step 6.2, the position information of the sieve plate in the depth camera image and the color camera image is now known; the depth camera continuously acquires the depth values within the position area; when the range of change of the depth value is larger than 2, the sieve plate is determined to have a drop-off fault, and all other conditions are determined to be the normal working state;
step 7.2: when the sieve plate falls off, its outline in the depth camera picture changes from black to red, the color of the corresponding sieve plate area in the industrial personal computer interface changes from green to red, an alarm sound is emitted to remind the staff to handle it, and the state information is recorded in a log.
CN202011189819.9A 2020-10-30 2020-10-30 Intelligent screen dropping remote detection method based on multivariate image fusion Active CN112422818B (en)

Publications (2)

Publication Number Publication Date
CN112422818A true CN112422818A (en) 2021-02-26
CN112422818B CN112422818B (en) 2022-01-07


Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2010071898A2 (en) * 2008-12-19 2010-06-24 The Johns Hopkins Univeristy A system and method for automated detection of age related macular degeneration and other retinal abnormalities
CN102609941A (en) * 2012-01-31 2012-07-25 北京航空航天大学 Three-dimensional registering method based on ToF (Time-of-Flight) depth camera
US20150302310A1 (en) * 2013-03-15 2015-10-22 Nordic Technology Group Methods for data collection and analysis for event detection
CN108519768A (en) * 2018-03-26 2018-09-11 华中科技大学 A kind of method for diagnosing faults analyzed based on deep learning and signal
CN110648367A (en) * 2019-08-15 2020-01-03 大连理工江苏研究院有限公司 Geometric object positioning method based on multilayer depth and color visual information
CN111178257A (en) * 2019-12-28 2020-05-19 深圳奥比中光科技有限公司 Regional safety protection system and method based on depth camera
CN111242080A (en) * 2020-01-21 2020-06-05 南京航空航天大学 Power transmission line identification and positioning method based on binocular camera and depth camera
CN111476194A (en) * 2020-04-20 2020-07-31 海信集团有限公司 Detection method for working state of sensing module and refrigerator
CN111739080A (en) * 2020-07-23 2020-10-02 成都艾尔帕思科技有限公司 Method for constructing 3D space and 3D object by multiple depth cameras

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115487951A (en) * 2022-11-01 2022-12-20 天津德通电气有限公司 Medium removal sieve material flow break identification method and system
CN117459836A (en) * 2023-12-05 2024-01-26 荣耀终端有限公司 Image processing method, device and storage medium
CN117459836B (en) * 2023-12-05 2024-05-10 荣耀终端有限公司 Image processing method, device and storage medium

Also Published As

Publication number Publication date
CN112422818B (en) 2022-01-07

Similar Documents

Publication Publication Date Title
CN108827970B (en) AOI system-based method and system for automatically judging defects of different panels
CN112422818B (en) Intelligent screen dropping remote detection method based on multivariate image fusion
CN103646250B (en) Pedestrian monitoring method and device based on distance image head and shoulder features
US8712149B2 (en) Apparatus and method for foreground detection
CN113887412B (en) Detection method, detection terminal, monitoring system and storage medium for pollution emission
CN110769246A (en) Method and device for detecting faults of monitoring equipment
CN109376660B (en) Target monitoring method, device and system
CN104601956A (en) Power transmission line online monitoring system and method based on fixed-wing unmanned aerial vehicle
CN107944342A (en) A kind of scrapper conveyor abnormal state detection system based on machine vision
CN112784821A (en) Building site behavior safety detection and identification method and system based on YOLOv5
CN110807765A (en) Suspension insulator string inclination detection method and system based on image processing
CN106101622A (en) A kind of big data-storage system
CN102879404B (en) System for automatically detecting medical capsule defects in industrial structure scene
CN110560376A (en) Product surface defect detection method and device
JP3486229B2 (en) Image change detection device
CN107547839A (en) Remote control table based on graphical analysis
CN115953719A (en) Multi-target recognition computer image processing system
CN105933676A (en) Remote control platform based on image analysis
CN107895365B (en) Image matching method and monitoring system for power transmission channel external damage protection
KR102210571B1 (en) Bridge and tunnel safety diagnosis remote monitoring alarm method using GPS coordinates and mobile communication system
CN114299451A (en) System and method for identifying shielding of deep learning monitoring video
CN116188348A (en) Crack detection method, device and equipment
CN112396024A (en) Forest fire alarm method based on convolutional neural network
CN107918941B (en) Visual monitoring system and method for power transmission channel external damage protection
CN114973398B (en) Hierarchical alarm method for view library camera

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant