CN114359378A - Method for positioning inspection robot of belt conveyor - Google Patents

Method for positioning inspection robot of belt conveyor

Info

Publication number
CN114359378A
Authority
CN
China
Prior art keywords
image
target
pixel points
curve
positioning
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111674489.7A
Other languages
Chinese (zh)
Inventor
牟宗魁
胡勇
宋惜飞
陈智奎
张海军
钟超
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
SICHUAN ZIGONG CONVEYING MACHINE GROUP CO Ltd
Original Assignee
SICHUAN ZIGONG CONVEYING MACHINE GROUP CO Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by SICHUAN ZIGONG CONVEYING MACHINE GROUP CO Ltd filed Critical SICHUAN ZIGONG CONVEYING MACHINE GROUP CO Ltd
Priority to CN202111674489.7A
Publication of CN114359378A
Legal status: Pending

Landscapes

  • Image Analysis (AREA)

Abstract

The invention provides a positioning method for a belt conveyor inspection robot, which comprises the following steps: reading an image captured by the inspection robot, scanning the image row by row and column by column, and removing the rows and columns that contain no target pixels to obtain a primary processed image; screening out the invalid pixels in the primary processed image, obtaining the number and positions of the valid pixels, and placing the valid pixels at the corresponding positions in a target image template using their position information to form a secondary processed image; comparing the secondary processed image with the row-count and column-count reference values of the target image; fitting the pixels in the secondary processed image to obtain an initial contour curve of the secondary processed image; and extracting the initial contour curve as edge digital information, comparing the obtained edge digital information with the edge digital information of the object to be detected, and determining the positioning position. The method effectively reduces computational difficulty, improves positioning accuracy, has good robustness, and is suitable for field environments.

Description

Method for positioning inspection robot of belt conveyor
Technical Field
The invention relates to the technical field of inspection robot positioning systems, and in particular to a positioning method for a belt conveyor inspection robot.
Background
At present, belt conveyors are widely used in fields such as electric power, steel, coal, water conservancy, the chemical industry, metallurgy, and building materials, and belt conveyor systems are meanwhile developing toward large-scale operation with high belt speed, high power, large transport capacity, and long distance.
In recent years, as the equipment in belt conveyors has grown more complex, the demand for real-time monitoring and inspection has steadily increased. Inspection is currently performed mainly by hand. Manual inspection carries a certain danger, however, because the equipment may operate in a harsh environment; manual monitoring is prone to human error; and labor costs keep rising, so the need to replace manual inspection with machines grows ever stronger. The positioning system is an essential subsystem of an inspection robot and is crucial to inspection accuracy. Commonly used positioning methods include digital matching, two-dimensional code recognition, and step accumulation. Digital matching requires a huge volume of data, and noise interference easily causes misrecognition; two-dimensional codes are easily damaged or detached during long-term use in a field environment and have a short service life; and the step-accumulation approach tends to build up large cumulative errors after long operation, which degrades recognition accuracy.
Disclosure of Invention
The invention aims to solve at least the following technical problems of existing positioning methods in the prior art: the huge data volume impairs computational efficiency; noise interference easily causes misrecognition, making the methods unsuitable for outdoor environments; and positioning is inaccurate, with large errors.
The invention provides a positioning method of a belt conveyor inspection robot, which comprises the following steps:
S1, reading the image captured by the inspection robot, scanning the image row by row and column by column, and removing the rows and columns that contain no target pixels to obtain a primary processed image;
S2, removing the invalid pixels from the primary processed image, obtaining the number and positions of the valid pixels, and placing the valid pixels at the corresponding positions in the target image template using their position information to form a secondary processed image;
S3, obtaining the numbers of rows and columns of the secondary processed image and comparing them with the row-count and column-count reference values of the target image; if the deviation from the reference values is within the allowable error range, proceeding to the next step, otherwise returning to S1;
S4, fitting the pixels in the secondary processed image with a B-spline curve method to obtain an initial contour curve of the secondary processed image;
S5, extracting the initial contour curve as edge digital information using the Snake algorithm, comparing the obtained edge digital information with the edge digital information of the object to be detected, and determining the positioning position. A minimal pipeline sketch of these steps follows.
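By way of illustration only, the five steps might be chained as follows in a minimal Python sketch. Every helper name here is hypothetical (the patent specifies no implementation); concrete sketches of the individual helpers are given in the detailed description below.

```python
import numpy as np

def locate(image: np.ndarray, ref: dict) -> bool:
    """Hypothetical S1-S5 driver; helper names are illustrative, not the patent's."""
    primary = primary_process(image)                               # S1: crop away empty rows/columns
    secondary = secondary_process(primary, ref["template_shape"])  # S2: keep only valid pixels
    if not row_col_counts_ok(secondary, ref["row_ref"], ref["col_ref"]):
        return False                                               # S3 failed: caller rescans (back to S1)
    contour = fit_bspline_contour(secondary)                       # S4: cubic B-spline initial contour
    edge = snake_edge_info(secondary, contour)                     # S5: Snake refines contour to the edge
    return edges_match(edge, ref["edge_info"])                     # positioning judgment
```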
Further, the method for acquiring the primary processed image in S1 comprises:
starting from one edge of the image and scanning toward the opposite edge; once a target pixel is scanned, stopping the scan in that direction, segmenting the image at the row or column containing the target pixel, and retaining that row or column together with the unscanned part of the image; and repeating the scanning and segmentation in the other directions on the unscanned part until the image has been processed in all directions, yielding the primary processed image.
further, in S2, the once processed image obtained in S1 is scanned row by row, the position and number of target pixels in each row and each column are recorded, two or two adjacent rows and two or more columns are extracted, the position and number of the target pixels are compared, if there is a target pixel adjacent to any one target pixel in at least one direction around the target pixel, the pixel is a valid pixel, otherwise, the pixel is an invalid pixel.
Further, the arithmetic mean of the row counts or column counts of the target-image pixels in at least two images is used as the row-count or column-count reference value, and the allowable error of the secondary processed image relative to the reference value is below 3%-5%.
Further, the target image template used in S2 is equal in size to the target image region for which the reference value is acquired.
Further, the B-spline curve function is as follows:

$$P(t) = \sum_{i=0}^{n} P_i \, B_{i,d}(t), \qquad t_{\min} \le t \le t_{\max},\ 2 \le d \le n$$

where $P(t)$ represents the coordinate vector of a point on the curve; $n$ is the number of control points $P_i$; $P_i$ are the control-point coordinates ($i$ starts from 0); $B_{i,d}(t)$ is the basis polynomial that weights the influence of each control-point coordinate, $i$ is the index of the coordinate, $n$ is the highest power of the polynomial, and $d$ is the degree of the B-spline curve; $t$ is the parameter value at which the curve is drawn.
Further, the cubic spline among the B-spline curves is selected as the initial contour, i.e., d = 3.
Further, the step of extracting the initial contour curve as digital information by using a Snake algorithm comprises the following steps:
S51, reading in the initial contour curve fitted by the B-spline;
S52, setting the input parameters, which comprise the internal energy coefficients, the number of iterations, and the filtering parameters;
S53, performing Gaussian filtering on the initial contour curve;
S54, solving the Snake algorithm function by iterative calculation to obtain the digital information;
S55, outputting the digital information.
Further, the Snake algorithm function is defined as follows:

$$E_{\text{snake}} = \int_{0}^{1} \left[ \frac{1}{2}\left( \alpha \left| c_s(s) \right|^{2} + \beta \left| c_{ss}(s) \right|^{2} \right) - \left| \nabla I(x(s), y(s)) \right|^{2} \right] \mathrm{d}s$$

where $c(s)$ is the final result curve obtained from S4, i.e., it is obtained by reparameterizing the initial contour curve $P(t)$ with $s \in [0,1]$; $\alpha$ and $\beta$ are positive real parameters, namely the internal energy coefficients, used to adjust the continuity and smoothness of the curve; $c_s(s)$ and $c_{ss}(s)$ are the first and second derivatives of the initial contour curve, respectively; and $\nabla I(x(s), y(s))$ is the gradient of the image $I(x(s), y(s))$.
In summary, by adopting the above technical scheme, the invention achieves the following beneficial effects:
Through two rounds of image processing, the target image is obtained, unnecessary information in the image is removed, interference is reduced, and misrecognition caused by noise is avoided.
The pixels are fitted into an initial contour curve by the B-spline method, and the Snake algorithm solves for the minimum of an energy function; driven by this minimum, the initial contour curve gradually approaches the edge of the object to be detected, and the digital information of the target edge is finally extracted. Positioning is judged by comparing this edge digital information with that of the object to be detected. The central data of the target image need not be processed; only the edge data are compared, which avoids the low computational efficiency and high computational difficulty caused by massive data.
The positioning accuracy of the robot is improved, robustness is good, and the method is suitable for long-term use in harsh environments such as the field sites where belt conveyors operate.
Additional aspects and advantages of the invention will be set forth in part in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention.
Drawings
The above and/or additional aspects and advantages of the present invention will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
Fig. 1 is a flowchart of the method for positioning a belt conveyor inspection robot according to an embodiment of the present invention;
Fig. 2 is a schematic flow chart of the positioning method of a belt conveyor inspection robot according to an embodiment of the invention.
Detailed Description
In order that the above objects, features and advantages of the present invention can be more clearly understood, a more particular description of the invention will be rendered by reference to the appended drawings. It should be noted that the embodiments and features of the embodiments of the present application may be combined with each other without conflict.
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present invention; however, the invention may be practiced otherwise than as specifically described herein, and its scope is therefore not limited by the specific embodiments disclosed below.
A method of positioning a belt conveyor inspection robot according to some embodiments of the present invention is described below with reference to fig. 1 to 2.
A method for positioning a belt conveyor inspection robot comprises the following steps:
S1, reading the image captured by the inspection robot, scanning the image row by row and column by column, and removing the rows and columns that contain no target pixels to obtain a primary processed image;
the method for acquiring the primary processing image comprises the following steps: starting from the edge of one side of the image and scanning towards the other opposite side, stopping scanning in the direction after scanning a target pixel point, performing image segmentation on a row or a column where the target pixel point is located, reserving the row or the column and an unscanned partial image, and repeating the scanning and segmenting processes on the unscanned partial image in other directions until image processing in all directions is completed to obtain a primary processed image;
further, the method comprises the following steps:
S11, scanning the image row by row from top to bottom; when a target pixel is scanned, stopping the scan, recording the row number and the positions and number of all the target pixels in that row, segmenting the image at that row, retaining that row and the unscanned part, and removing the rows of the scanned part that contain no target pixels;
S12, scanning the image retained in S11 row by row from bottom to top; when a target pixel is scanned, stopping the scan, recording the row number and the positions and number of all the target pixels in that row, segmenting the image at that row, retaining that row and the unscanned part, and removing the rows of the scanned part that contain no target pixels;
S13, scanning the image retained in S12 column by column from left to right; when a target pixel is scanned, stopping the scan, recording the column number and the positions and number of all the target pixels in that column, segmenting the image at that column, retaining that column and the unscanned part, and removing the columns of the scanned part that contain no target pixels;
S14, scanning the image retained in S13 column by column from right to left; when a target pixel is scanned, stopping the scan, recording the column number and the positions and number of all the target pixels in that column, segmenting the image at that column, retaining that column and the unscanned part, and removing the columns of the scanned part that contain no target pixels.
The order of steps S11 to S14 may be changed arbitrarily, and the steps may even be performed simultaneously, yielding the primary processed image. In this specification, a target pixel is understood to be a pixel that matches the target image, where the target image may be set to an easily recognized pattern or color, thereby improving positioning accuracy. A minimal sketch of this cropping scan appears below.
S2, removing the invalid pixels from the primary processed image, obtaining the number and positions of the valid pixels, and placing the valid pixels at the corresponding positions in the target image template using their position information to form a secondary processed image;
in S2, scanning the once processed image obtained in S1 row by row, and recording the position and the number of target pixel points in each row and each column, extracting any two adjacent rows or two columns, and comparing the position and the number of the target pixel points; if adjacent target pixels exist in at least one direction around any target pixel, namely, pixels exist in at least one direction in a pixel range around the position of the target pixel and are connected with the target pixel, the pixel is an effective pixel, otherwise, the pixel is an invalid pixel, and the effective pixel is copied to a corresponding position in a target image template to form a secondary processing image, wherein the target image template is a template frame preset according to the graphic data of the target image.
S3, obtaining the numbers of rows and columns of the secondary processed image and comparing them with the row-count and column-count reference values of the target image; if the deviation from the reference values is within the allowable error range, proceeding to the next step, otherwise returning to S1;
The secondary processed image is scanned, and its numbers of pixel rows and columns are recorded. The row counts (or column counts) of the target-image pixels in 5-7 images of the same position are taken, and their arithmetic mean is used as the row-count (or column-count) reference value. If the error between the row and column counts of the secondary processed image and the reference values is within the allowable range, the extracted pixels are accurate; otherwise the extraction is wrong, and the method returns to S1 to rescan the image. The allowable error of the comparison between the secondary processed image and the reference values is below 3%-5%.
The target image template used in S2 is equal in size to the target image region from which the reference values are obtained. A check of this kind might look as sketched below.
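A minimal sketch of the S3 check, assuming a 5% tolerance (the patent allows 3%-5%) and reference values precomputed as arithmetic means over 5-7 images of the same position:

```python
import numpy as np

def row_col_counts_ok(secondary: np.ndarray,
                      row_ref: float, col_ref: float,
                      tol: float = 0.05) -> bool:
    """S3: compare the secondary image's row/column counts with reference values.

    `row_ref` / `col_ref` are arithmetic means of the target image's row and
    column counts over several (5-7 per the description) shots of the same
    position; `tol` is the allowable relative error (3%-5%; 5% assumed here).
    """
    rows = int(np.count_nonzero(secondary.any(axis=1)))  # rows containing target pixels
    cols = int(np.count_nonzero(secondary.any(axis=0)))  # columns containing target pixels
    return (abs(rows - row_ref) <= tol * row_ref and
            abs(cols - col_ref) <= tol * col_ref)

# Reference values would be precomputed once, e.g. (hypothetical counts):
# row_ref = np.mean([118, 120, 119, 121, 120, 122])
# col_ref = np.mean([80, 79, 81, 80, 82, 80])
```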
S4, fitting the pixels in the secondary processed image with the B-spline curve method to obtain an initial contour curve of the secondary processed image;
the B-spline curve function is as follows:
Figure BDA0003451137050000061
wherein t ismin≤t≤tmax,2≤d≤n
Wherein the content of the first and second substances,
Figure BDA0003451137050000062
representing coordinate vectors of points on the curve, n being a control point
Figure BDA0003451137050000063
The number of the components is equal to or less than the total number of the components,
Figure BDA0003451137050000064
for control point coordinates (i starts from 0), Bi,d(t) is the polynomial coefficient of the control point coordinate influence weight, i represents the index of the coordinate, n represents the highest power number of the polynomial, and d is the number of times of influencing the B spline curve; t represents the value of the time function when the curve is drawn.
$B_{i,d}(t)$ is the spline basis function, given by the standard Cox-de Boor recurrence:

$$B_{i,0}(t) = \begin{cases} 1, & t_i \le t < t_{i+1} \\ 0, & \text{otherwise} \end{cases}$$

$$B_{i,d}(t) = \frac{t - t_i}{t_{i+d} - t_i}\, B_{i,d-1}(t) + \frac{t_{i+d+1} - t}{t_{i+d+1} - t_{i+1}}\, B_{i+1,d-1}(t)$$

where $t_i$ are the knots.
the influence of the B-spline curve order on the performance of the track is large, the order is poor in low smoothness, and the order is high to easily cause oscillation, so that the smoothness and the oscillation are considered, the 3-order spline curve is selected as the optimal selection, namely the 3-order spline curve in the B-spline curve is selected as the initial contour, and the value of d is 3.
S5, extracting the initial contour curve as edge digital information using the Snake algorithm, comparing the obtained edge digital information with that of the object to be detected, and determining the positioning position.
The method for extracting the initial contour curve into digital information by using the Snake algorithm comprises the following steps:
S51, reading in the initial contour curve fitted by the B-spline;
S52, setting the input parameters, which comprise the internal energy coefficients, the number of iterations, and the filtering parameters;
S53, performing Gaussian filtering on the initial contour curve;
S54, solving the Snake algorithm function by iterative calculation to obtain the digital information;
S55, outputting the digital information.
The Snake algorithm function in S54 is defined as follows:

$$E_{\text{snake}} = \int_{0}^{1} \left[ \frac{1}{2}\left( \alpha \left| c_s(s) \right|^{2} + \beta \left| c_{ss}(s) \right|^{2} \right) - \left| \nabla I(x(s), y(s)) \right|^{2} \right] \mathrm{d}s$$

where $c(s)$ is the final result curve obtained from S4, i.e., it is obtained by reparameterizing the initial contour curve $P(t)$ with $s \in [0,1]$; $\alpha$ and $\beta$ are positive real parameters, namely the internal energy coefficients, used to adjust the continuity and smoothness of the curve; $c_s(s)$ and $c_{ss}(s)$ are the first and second derivatives of the initial contour curve, respectively; and $\nabla I(x(s), y(s))$ is the gradient of the image $I(x(s), y(s))$.
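A minimal sketch of S51-S55. The patent does not specify an implementation; here scikit-image's `active_contour` stands in for the iterative Snake solver, with Gaussian filtering as in S53. `alpha` and `beta` map onto the internal energy coefficients; the parameter values are illustrative assumptions.

```python
import numpy as np
from skimage.filters import gaussian
from skimage.segmentation import active_contour

def snake_edge_info(image: np.ndarray, init_contour: np.ndarray) -> np.ndarray:
    """S5: drive the B-spline initial contour to the target edge.

    S51: `init_contour` is the (N, 2) curve fitted in S4.
    S52: alpha/beta are the internal energy coefficients (illustrative values).
    S53: Gaussian filtering smooths the image so the gradient field is stable.
    S54: `active_contour` iteratively minimizes the Snake energy functional.
    S55: the converged contour points are returned as the edge digital info.
    """
    smoothed = gaussian(image.astype(float), sigma=2.0)
    edge = active_contour(smoothed, init_contour,
                          alpha=0.015, beta=10.0, gamma=0.001)
    return edge   # (N, 2) array of (row, col) edge coordinates
```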
The method fits the pixels into an initial contour curve by the B-spline method, and the Snake algorithm solves for the minimum of the energy function; driven by this minimum, the initial contour curve gradually approaches the edge of the object to be detected, and the digital information of the target edge is finally extracted. Comparing this edge digital information with that of the object to be detected in software yields the positioning judgment, which can be linked to a control system to realize positioning control. A minimal comparison sketch follows.
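The patent leaves the comparison itself to "software code". One hedged way to compare two edge point sets is a Hausdorff-distance threshold, sketched here with SciPy; the threshold value is an illustrative assumption, not a value from the patent.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def edges_match(edge: np.ndarray, reference_edge: np.ndarray,
                max_dist: float = 3.0) -> bool:
    """Judge positioning by comparing the extracted edge points with the
    stored edge digital information of the object to be detected.

    Uses the symmetric Hausdorff distance between the two (N, 2) point sets;
    the 3-pixel threshold is an assumption.
    """
    d_forward, _, _ = directed_hausdorff(edge, reference_edge)
    d_backward, _, _ = directed_hausdorff(reference_edge, edge)
    return max(d_forward, d_backward) <= max_dist
```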
As shown in Fig. 2, the specific operating flow in one embodiment of the method is as follows. After an image is obtained, its pixels are scanned row by row; if a row contains target pixels, the scan stops and the row number and the positions and number of the target pixels in that row are recorded, otherwise the scan continues until all rows have been scanned. The image is pre-segmented at the rows where scanning stopped, and the rows of the scanned part that contain no target pixels are removed. On this basis, the pixels are scanned column by column; if a column contains target pixels, the scan stops and the column number and the positions and number of the target pixels in that column are recorded, otherwise the scan continues until all columns have been scanned. The image is pre-segmented at the columns where scanning stopped, and the columns of the scanned part that contain no target pixels are removed, yielding the primary processed image.
The pre-segmented primary processed image is then scanned row by row and column by column, the positions and numbers of the target pixels in each row and column are recorded, and the target pixels of adjacent rows are compared. If at least one connected target pixel exists in the pixel neighborhood around a target pixel, that pixel is copied as a valid pixel to the corresponding position in the target image template; otherwise it is an invalid pixel and is not copied. This is repeated until all target pixels in the primary processed image have been scanned and compared, and the set of target pixels copied into the target image template is used as the secondary processed image.
The numbers of rows and columns of the secondary processed image are scanned and compared with the reference values. If the errors are within the allowable range, the set of target pixels in the secondary processed image is fitted into an initial contour curve with a B-spline; the Snake algorithm then solves for the minimum of the energy function, and driven by this minimum the initial contour curve gradually approaches the edge of the object to be detected until the digital information of the target edge is extracted. If the errors exceed the allowable range, the image is rescanned. Finally, software checks whether the extracted target-edge digital information matches the edge digital information of the target image in the database, completing the positioning judgment.
The above is only a preferred embodiment of the present invention, and is not intended to limit the present invention, and various modifications and changes will occur to those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (9)

1. A method for positioning a belt conveyor inspection robot is characterized by comprising the following steps:
S1, reading the image captured by the inspection robot, scanning the image row by row and column by column, and removing the rows and columns that contain no target pixels to obtain a primary processed image;
S2, removing the invalid pixels from the primary processed image, obtaining the number and positions of the valid pixels, and placing the valid pixels at the corresponding positions in the target image template using their position information to form a secondary processed image;
S3, obtaining the numbers of rows and columns of the secondary processed image and comparing them with the row-count and column-count reference values of the target image; if the deviation from the reference values is within the allowable error range, proceeding to the next step, otherwise returning to S1;
S4, fitting the pixels in the secondary processed image with a B-spline curve method to obtain an initial contour curve of the secondary processed image;
S5, extracting the initial contour curve as edge digital information using the Snake algorithm, comparing the obtained edge digital information with the edge digital information of the object to be detected, and determining the positioning position.
2. The method for positioning the inspection robot of the belt conveyor according to claim 1, wherein the step of acquiring the primary processed image in S1 comprises:
starting from one edge of the image and scanning toward the opposite edge; once a target pixel is scanned, stopping the scan in that direction, segmenting the image at the row or column containing the target pixel, and retaining that row or column together with the unscanned part of the image; and repeating the scanning and segmentation in the other directions on the unscanned part until the image has been processed in all directions, yielding the primary processed image.
3. The method for positioning the inspection robot of the belt conveyor according to claim 1, wherein in S2, the primary processed image obtained in S1 is scanned row by row and column by column, the position and number of the target pixels in each row and each column are recorded, any two adjacent rows or columns are extracted, and the positions and numbers of their target pixels are compared; if a target pixel has an adjacent target pixel in at least one direction around it, it is a valid pixel, otherwise it is an invalid pixel.
4. The method for positioning the inspection robot of the belt conveyor according to any one of claims 1 to 3, wherein an arithmetic mean of the row counts or column counts of the target-image pixels in at least two images is used as the row-count or column-count reference value, and the allowable error of the comparison between the secondary processed image and the reference value is below 3%-5%.
5. The method for positioning a belt conveyor inspection robot according to claim 4, wherein the target image template used in S2 is equal in size to the target image used to obtain the reference value.
6. The method for positioning the inspection robot of the belt conveyor according to any one of claims 1 to 3, wherein the B-spline curve function is as follows:

$$P(t) = \sum_{i=0}^{n} P_i \, B_{i,d}(t), \qquad t_{\min} \le t \le t_{\max},\ 2 \le d \le n$$

where $P(t)$ represents the coordinate vector of a point on the curve; $n$ is the number of control points $P_i$; $P_i$ are the control-point coordinates ($i$ starts from 0); $B_{i,d}(t)$ is the basis polynomial that weights the influence of each control-point coordinate, $i$ is the index of the coordinate, $n$ is the highest power of the polynomial, and $d$ is the degree of the B-spline curve; $t$ is the parameter value at which the curve is drawn.
7. The method for positioning the inspection robot of the belt conveyor according to claim 6, wherein the cubic spline among the B-spline curves is selected as the initial contour, i.e., d = 3.
8. The method of any one of claims 1 to 3, wherein extracting the initial profile curves as digital information using a Snake algorithm comprises the steps of:
S51, reading in the initial contour curve fitted by the B-spline;
S52, setting the input parameters, which comprise the internal energy coefficients, the number of iterations, and the filtering parameters;
S53, performing Gaussian filtering on the initial contour curve;
S54, solving the Snake algorithm function by iterative calculation to obtain the digital information;
S55, outputting the digital information.
9. The method of claim 8, wherein the Snake algorithm function in S54 is defined as follows:

$$E_{\text{snake}} = \int_{0}^{1} \left[ \frac{1}{2}\left( \alpha \left| c_s(s) \right|^{2} + \beta \left| c_{ss}(s) \right|^{2} \right) - \left| \nabla I(x(s), y(s)) \right|^{2} \right] \mathrm{d}s$$

where $c(s)$ is the final result curve obtained from S4, i.e., it is obtained by reparameterizing the initial contour curve $P(t)$ with $s \in [0,1]$; $\alpha$ and $\beta$ are positive real parameters, namely the internal energy coefficients, used to adjust the continuity and smoothness of the curve; $c_s(s)$ and $c_{ss}(s)$ are the first and second derivatives of the initial contour curve, respectively; and $\nabla I(x(s), y(s))$ is the gradient of the image $I(x(s), y(s))$.
CN202111674489.7A 2021-12-31 2021-12-31 Method for positioning inspection robot of belt conveyor Pending CN114359378A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111674489.7A CN114359378A (en) 2021-12-31 2021-12-31 Method for positioning inspection robot of belt conveyor

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111674489.7A CN114359378A (en) 2021-12-31 2021-12-31 Method for positioning inspection robot of belt conveyor

Publications (1)

Publication Number Publication Date
CN114359378A (en) 2022-04-15

Family

ID=81105638

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111674489.7A Pending CN114359378A (en) 2021-12-31 2021-12-31 Method for positioning inspection robot of belt conveyor

Country Status (1)

Country Link
CN (1) CN114359378A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114906534A (en) * 2022-04-25 2022-08-16 Sichuan Zigong Conveying Machine Group Co., Ltd. Method and system for positioning mobile robot of belt conveyor

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH06138137A (en) * 1992-10-26 1994-05-20 Toshiba Corp Moving-object extraction apparatus
JP2001195583A (en) * 2000-01-14 2001-07-19 Canon Inc Position detector and exposure device using the same
CN103455999A (en) * 2012-06-05 2013-12-18 北京工业大学 Automatic vessel wall edge detection method based on intravascular ultrasound image sequence
CN108921818A (en) * 2018-05-30 2018-11-30 华南理工大学 A kind of weld joint tracking laser center line drawing method
CN109064475A (en) * 2018-09-11 2018-12-21 深圳辉煌耀强科技有限公司 For the image partition method and device of cervical exfoliated cell image
CN110276771A (en) * 2018-03-16 2019-09-24 义乌工商职业技术学院 A kind of contour extraction method based on snake model
CN113450402A (en) * 2021-07-16 2021-09-28 天津理工大学 Navigation center line extraction method for vegetable greenhouse inspection robot

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH06138137A (en) * 1992-10-26 1994-05-20 Toshiba Corp Moving-object extraction apparatus
JP2001195583A (en) * 2000-01-14 2001-07-19 Canon Inc Position detector and exposure device using the same
CN103455999A (en) * 2012-06-05 2013-12-18 北京工业大学 Automatic vessel wall edge detection method based on intravascular ultrasound image sequence
CN110276771A (en) * 2018-03-16 2019-09-24 义乌工商职业技术学院 A kind of contour extraction method based on snake model
CN108921818A (en) * 2018-05-30 2018-11-30 华南理工大学 A kind of weld joint tracking laser center line drawing method
CN109064475A (en) * 2018-09-11 2018-12-21 深圳辉煌耀强科技有限公司 For the image partition method and device of cervical exfoliated cell image
CN113450402A (en) * 2021-07-16 2021-09-28 天津理工大学 Navigation center line extraction method for vegetable greenhouse inspection robot

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
LI Liming, TANG Kelun, WEN Huabin: "Interpretation of the active contour model based on mechanical principles", Journal of Graphics, vol. 38, no. 5, 31 October 2017 (2017-10-31), page 738 *
GAO Xinbo et al.: "Modern Image Analysis", 31 May 2011, Xi'an University of Science and Technology Press, page 82 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114906534A (en) * 2022-04-25 2022-08-16 Sichuan Zigong Conveying Machine Group Co., Ltd. Method and system for positioning mobile robot of belt conveyor
CN114906534B (en) * 2022-04-25 2023-12-22 Sichuan Zigong Conveying Machine Group Co., Ltd. Positioning method and system for mobile robot of belt conveyor

Similar Documents

Publication Publication Date Title
CN109118500B (en) Image-based three-dimensional laser scanning point cloud data segmentation method
CN108388896B (en) License plate identification method based on dynamic time sequence convolution neural network
CN107633192B (en) Bar code segmentation and reading method based on machine vision under complex background
CN106960208B (en) Method and system for automatically segmenting and identifying instrument liquid crystal number
CN108764229B (en) Water gauge image automatic identification method based on computer vision technology
CN110197153B (en) Automatic wall identification method in house type graph
CN110647795A (en) Form recognition method
CN112233116B (en) Concave-convex mark visual detection method based on neighborhood decision and gray level co-occurrence matrix description
CN112966542A (en) SLAM system and method based on laser radar
CN109724988B (en) PCB defect positioning method based on multi-template matching
CN114743259A (en) Pose estimation method, pose estimation system, terminal, storage medium and application
CN116309577B (en) Intelligent detection method and system for high-strength conveyor belt materials
CN110472640B (en) Target detection model prediction frame processing method and device
CN114359378A (en) Method for positioning inspection robot of belt conveyor
CN111310754A (en) Method for segmenting license plate characters
CN113610052A (en) Tunnel water leakage automatic identification method based on deep learning
Ham et al. Recognition of raised characters for automatic classification of rubber tires
CN116823827B (en) Ore crushing effect evaluation method based on image processing
CN112102189B (en) Line structure light bar center line extraction method
CN113705564A (en) Pointer type instrument identification reading method
CN108537798B (en) Rapid super-pixel segmentation method
CN115457044A (en) Pavement crack segmentation method based on class activation mapping
CN107742036A (en) A kind of shoe pattern automatic discharge system of processing
CN114387592A (en) Character positioning and identifying method under complex background
CN113989793A (en) Graphite electrode embossed seal character recognition method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination