CN109760006B - Nuclear robot rapid positioning method based on visual reference piece - Google Patents


Info

Publication number: CN109760006B (application CN201910043308.7A)
Authority: CN (China)
Prior art keywords: nuclear, center, robot, visual, visual reference
Legal status: Active (the status listed is an assumption, not a legal conclusion)
Other versions: CN109760006A (in Chinese)
Inventors: 张静, 刘满禄, 张华, 周建, 肖宇峰, 王基生, 李树春, 王亚翔, 张敦凤, 熊开封, 王姮, 刘冉, 刘桂华, 任万春, 徐锋
Current and original assignee: Southwest University of Science and Technology (the listed assignee may be inaccurate)
Events: application filed by Southwest University of Science and Technology with priority to CN201910043308.7A; publication of CN109760006A; application granted; publication of CN109760006B; legal status active; anticipated expiration

Landscapes

  • Manipulator (AREA)

Abstract

The invention discloses a nuclear robot based on a visual reference piece, which comprises an integral motion mechanism, an X-Y fine adjustment mechanism, a lifting mechanism, a control cabinet, a nuclear detector, a visual servo part and four Mecanum wheels, together with a rapid positioning method comprising the following steps. Step S1: guided by video from the radiation-resistant camera carried by the nuclear robot and from the out-of-reactor monitoring camera, the nuclear robot travels to the detection well. Step S2: the pose of the nuclear robot is adjusted through visual guidance. Step S3: visual servo coarse positioning is performed using the onboard radiation-resistant camera and the four Mecanum wheels. Step S4: visual servo precise positioning is performed using the X-Y fine adjustment mechanism on the platform and the onboard radiation-resistant camera. The method solves the problem that the detection well cannot be accurately located during assisted installation of the out-of-reactor nuclear detector in the strongly irradiated, narrow reactor cavity.

Description

Nuclear robot rapid positioning method based on visual reference piece
Technical Field
The invention relates to the field of nuclear robots, in particular to a nuclear robot based on a visual reference piece and a rapid positioning method.
Background
The out-of-reactor nuclear instrumentation system is a safety-grade system that continuously monitors reactor power, power-level changes and power distribution by measuring the neutron fluence rate leaking from the reactor core, and it provides important input parameters for the reactor protection system and the five major control systems of the power plant. The out-of-core nuclear measurement detector is the "eye" of this system: it is arranged at the periphery of the reactor pressure vessel and directly measures the neutron fluence rate leaking from the core.
At present, the out-of-core detector is installed upward from the bottom of the reactor cavity. Because this installation mode is limited by the narrow space at the cavity bottom, the out-of-core nuclear detector must be installed in sections with special tooling, so the operation steps are complex and a rather long installation time is needed.
Disclosure of Invention
In order to solve the problems in the prior art, the invention provides a nuclear robot based on a visual reference piece and a rapid positioning method, which solve the problem that the detection well cannot be accurately located during assisted installation of the out-of-reactor nuclear detector in the strongly irradiated, narrow reactor cavity.
A nuclear robot based on a visual reference piece comprises an integral motion mechanism, an X-Y fine adjustment mechanism, a lifting mechanism, a control cabinet, a nuclear detector, a visual servo part and four Mecanum wheels; the X-Y fine adjustment mechanism is arranged on the top surface of the integral motion mechanism, the nuclear detector is arranged through the X-Y fine adjustment mechanism, the lifting mechanism is arranged on the outer surface of the nuclear detector, the visual servo part is movably arranged on the side surface of the lifting mechanism, the four Mecanum wheels are arranged at the four corners of the bottom surface of the integral motion mechanism, and the control cabinet is arranged on the outer surface of the lifting mechanism.
Preferably, the visual servo part 6 includes a radiation-resistant camera 6-1 and a visual reference piece 6-2, the radiation-resistant camera 6-1 being disposed on the top surface of the visual reference piece 6-2.
Preferably, the method for quickly positioning the nuclear robot based on the visual reference piece comprises the following steps:
step S1: guiding the nuclear robot to the detection well using the video information of the radiation-resistant camera carried by the nuclear robot and of the out-of-reactor monitoring camera;
step S2: adjusting the pose of the nuclear robot through visual guidance;
step S3: performing visual servo coarse positioning on the nuclear robot through the radiation-resistant camera carried by the nuclear robot and its four Mecanum wheels;
step S4: and carrying out visual servo accurate positioning on the nuclear robot by using an X-Y fine adjustment mechanism on the platform and a radiation-resistant camera carried by the nuclear robot.
Preferably, step S3 includes the following substeps:
step S3A: detecting an inner ring of the detection well by using Hough circle detection;
step S3B: mapping pixel difference of the center of the ring in the detection well and the center of the visual reference member in the X direction to actual distance difference;
step S3C: controlling the four Mecanum wheels to move in the X direction;
step S3D: during the movement of the four Mecanum wheels, judging whether the actual distance difference between the center of the inner ring of the detection well and the center of the visual reference piece in the X direction is less than 3 cm; if yes, go to step S3E; if not, return to step S3A;
step S3E: detecting an inner ring of the detection well by using Hough circle detection;
step S3F: mapping pixel difference of the center of the ring in the detection well and the center of the visual reference member in the Y direction to actual distance difference;
step S3G: controlling the four Mecanum wheels to move in the Y direction;
step S3H: during the movement of the four Mecanum wheels, judging whether the actual distance difference between the center of the inner ring of the detection well and the center of the visual reference piece in the Y direction is less than 3 cm; if yes, go to step S4; if not, return to step S3E.
Preferably, step S4 includes the following substeps:
step S4A: detecting an inner ring of the detection well by using Hough circle detection;
step S4B: mapping pixel difference of the center of the inner ring of the detection well and the center of the visual reference piece in the X direction to actual distance difference;
step S4C: controlling the X-Y fine adjustment mechanism to move in the X direction;
step S4D: in the moving process of the X-Y fine adjustment mechanism, judging whether the actual distance difference between the center of the ring in the detection well and the center of the visual reference member in the X direction is less than 0.5 cm; if yes, go to step S4E; if not, returning to the step S4A;
step S4E: detecting an inner ring of the detection well by using Hough circle detection;
step S4F: mapping pixel difference of the center of the ring in the detection well and the center of the visual reference member in the Y direction to actual distance difference;
step S4G: controlling the X-Y fine adjustment mechanism to move in the Y direction;
step S4H: in the moving process of the X-Y fine adjustment mechanism, judging whether the actual distance difference between the center of the ring in the detection well and the center of the visual reference piece in the Y direction is less than 0.5 cm; if yes, ending the program; if not, the step S4E is returned to.
Preferably, the Hough circle detection comprises the steps of:
step SA: collecting an image;
step SB: carrying out graying processing on the image to obtain a gray value of the image;
step SC: carrying out edge extraction on the gray value of the image by using a Sobel operator to obtain an image value after edge extraction;
step SD: carrying out binarization processing and noise filtering processing on the image value after edge extraction to obtain an edge point value;
step SF: quantizing the parameter space by establishing three axes a, b and r, calculating all points at distance r from each edge point value, and building the column matrices [a, b]^T;
step SH: carrying out row-column vector accumulation on the matrices [a, b]^T for the different distances r;
step SI: and taking the accumulated maximum value as the center of a circle in the corresponding image space.
Preferably, the edge extraction formulas of the Sobel operator are:
f_x(i, j) = G_x * g(i, j)
f_y(i, j) = G_y * g(i, j)
f(i, j) = √(f_x(i, j)² + f_y(i, j)²)
where G_x denotes horizontal edge processing, G_y denotes vertical edge processing, f_x(i, j) is the value after horizontal edge processing, f_y(i, j) is the value after vertical edge processing, and g(i, j) is the gray value of the image.
The nuclear robot based on the visual reference piece and the rapid positioning method have the following beneficial effects:
1. The invention introduces the visual reference piece, so that the position information of the target object and the nuclear robot can be acquired quickly without calibration.
2. The invention adopts visual servo coarse positioning and visual servo fine positioning to complete rapid positioning on the detection well; this two-stage scheme offers high positioning accuracy, strong noise resistance and high positioning speed.
Drawings
Fig. 1 is a structural diagram of a nuclear robot based on a visual reference piece and a rapid positioning method according to the invention.
Fig. 2 is a flowchart of a nuclear robot and a fast positioning method based on a visual reference piece according to the present invention.
Fig. 3 is a system feedback diagram of a nuclear robot based on a visual reference piece and a rapid positioning method according to the invention.
Fig. 4 is a diagram of fine adjustment in the X direction of a nuclear robot based on a visual reference piece and a fast positioning method according to the present invention.
Fig. 5 is a diagram of fine adjustment in the Y direction of a nuclear robot based on a visual reference piece and a fast positioning method according to the present invention.
Reference numerals: 1 - integral motion mechanism; 2 - X-Y fine adjustment mechanism; 3 - lifting mechanism; 4 - control cabinet; 5 - nuclear detector; 6 - visual servo part; 7 - four Mecanum wheels; 6-1 - radiation-resistant camera; 6-2 - visual reference piece.
Detailed Description
The following description of the embodiments is provided to help those skilled in the art understand the invention, but it should be understood that the invention is not limited to the scope of these embodiments. For those skilled in the art, various changes that do not depart from the spirit and scope of the invention as defined in the appended claims remain within the protection of the invention.
As shown in fig. 1, a nuclear robot based on a visual reference piece comprises an integral motion mechanism 1, an X-Y fine adjustment mechanism 2, a lifting mechanism 3, a control cabinet 4, a nuclear detector 5, a visual servo part 6 and four Mecanum wheels 7; the X-Y fine adjustment mechanism 2 is arranged on the top surface of the integral motion mechanism 1, the nuclear detector 5 is arranged through the X-Y fine adjustment mechanism 2, the lifting mechanism 3 is arranged on the outer surface of the nuclear detector 5, the visual servo part 6 is movably arranged on the side surface of the lifting mechanism 3, the four Mecanum wheels 7 are arranged at the four corners of the bottom surface of the integral motion mechanism 1, and the control cabinet 4 is arranged on the outer surface of the lifting mechanism 3.
The visual servo part 6 of the present embodiment includes a radiation-resistant camera 6-1 and a visual reference piece 6-2, and the radiation-resistant camera 6-1 is disposed on the top surface of the visual reference piece 6-2.
As shown in fig. 2, a method for quickly positioning a nuclear robot based on a visual reference member includes the following steps:
step S1: guiding the nuclear robot to the detection well using the video information of the radiation-resistant camera carried by the nuclear robot and of the out-of-reactor monitoring camera;
step S2: adjusting the pose of the nuclear robot through visual guidance;
step S3: performing visual servo coarse positioning on the nuclear robot through the radiation-resistant camera carried by the nuclear robot and its four Mecanum wheels;
step S4: and carrying out visual servo accurate positioning on the nuclear robot by using an X-Y fine adjustment mechanism on the platform and a radiation-resistant camera carried by the nuclear robot.
In the implementation of this embodiment, as shown in fig. 3, an operator remotely controls the robot, using the video from the radiation-resistant camera carried by the robot and from the out-of-reactor monitoring camera as guidance, until the robot reaches the vicinity of the detection well; automatic positioning then starts. A visual servo coarse positioning system, composed of the robot's radiation-resistant camera and its omnidirectional-wheel motion mechanism, completes coarse positioning on the detection well. Visual servo fine positioning is then started, and a visual servo fine positioning system, composed of the robot's radiation-resistant camera and the X-Y fine adjustment mechanism, completes fine positioning on the detection well.
Step S3 of the present embodiment includes the following substeps:
step S3A: detecting an inner ring of the detection well by using Hough circle detection;
step S3B: mapping pixel difference of the center of the ring in the detection well and the center of the visual reference member in the X direction to actual distance difference;
step S3C: controlling the four Mecanum wheels to move in the X direction;
step S3D: during the movement of the four Mecanum wheels, judging whether the actual distance difference between the center of the inner ring of the detection well and the center of the visual reference piece in the X direction is less than 3 cm; if yes, go to step S3E; if not, return to step S3A;
step S3E: detecting an inner ring of the detection well by using Hough circle detection;
step S3F: mapping pixel difference of the center of the ring in the detection well and the center of the visual reference member in the Y direction to actual distance difference;
step S3G: controlling the four Mecanum wheels to move in the Y direction;
step S3H: during the movement of the four Mecanum wheels, judging whether the actual distance difference between the center of the inner ring of the detection well and the center of the visual reference piece in the Y direction is less than 3 cm; if yes, go to step S4; if not, return to step S3E.
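The loop of steps S3A-S3H amounts to per-axis closed-loop control with a 3 cm tolerance. A minimal sketch, with hypothetical stand-in functions: `measure_offset_cm` stands for Hough-circle detection plus the pixel-to-distance mapping, and `move_wheels_cm` for the Mecanum-wheel motion command; neither name comes from the patent.

```python
COARSE_THRESHOLD_CM = 3.0  # tolerance used in steps S3D and S3H

def coarse_position(measure_offset_cm, move_wheels_cm, max_iters=100):
    """Drive the X offset below the threshold (S3A-S3D), then the Y offset
    (S3E-S3H), re-detecting the well's inner ring on every iteration."""
    for axis in ("x", "y"):
        for _ in range(max_iters):
            err = measure_offset_cm(axis)       # detection + pixel mapping
            if abs(err) < COARSE_THRESHOLD_CM:  # S3D / S3H decision
                break
            move_wheels_cm(axis, err)           # S3C / S3G motion command
    return measure_offset_cm("x"), measure_offset_cm("y")

# Toy closed loop: each command removes only 80% of the error, so several
# detect-move iterations may be needed per axis.
state = {"x": 10.0, "y": -7.5}
fx, fy = coarse_position(lambda ax: state[ax],
                         lambda ax, err: state.__setitem__(ax, state[ax] - 0.8 * err))
print(abs(fx) < 3.0 and abs(fy) < 3.0)  # -> True
```

Re-measuring after every motion command, rather than commanding one open-loop move, is what makes the scheme tolerant of wheel slip and detection noise.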
When this embodiment is implemented, the radiation-resistant camera captures an image of the detection well; since the detection well is cylindrical and the visual reference piece is also circular, Hough circle detection is adopted to detect the well.
Hough circle detection procedure: the image is first converted to grayscale. The Sobel operator is then used for edge extraction. Let the gray value of point (i, j) in image f be g(i, j); the Sobel operator is expressed as follows:
G_x = [g(i+1, j-1) + 2g(i+1, j) + g(i+1, j+1)] - [g(i-1, j-1) + 2g(i-1, j) + g(i-1, j+1)]
G_y = [g(i-1, j+1) + 2g(i, j+1) + g(i+1, j+1)] - [g(i-1, j-1) + 2g(i, j-1) + g(i+1, j-1)]
Written as operator matrices, these take the form:
G_x = [-1 -2 -1; 0 0 0; 1 2 1],  G_y = [-1 0 1; -2 0 2; -1 0 1]
Each point in the image is convolved with these two operators; G_x responds maximally to horizontal edges and G_y to vertical edges. The larger of the two convolution values is taken as the gray value of that pixel in the edge image, and the tangential direction at point (i, j) can also be obtained. To remove noise interference, binarization is performed after edge detection. The parameter space is suitably quantized to obtain a three-dimensional accumulator array recording (a, b, r), according to the parameter-space representation (x_i - a)² + (y_i - b)² = r², where r is the radius of the circle and (a, b) its center. When detecting circles in image space, all centers (a, b) at distance r from each edge pixel are computed and accumulated in the corresponding array cells. After all edge points have been transformed, the accumulated values in the three-dimensional array are examined; the peak corresponds to a circle center in image space. After the detection well and the visual reference piece are detected in the image, the positioning problem is converted into a position-approach problem between the inner-ring center and the reference feature.
A rectangular coordinate system is established at the reference feature center, and the pixel difference and direction angle between the detection-well center and the reference feature center are calculated. The pixel difference is mapped to an actual distance difference; from the received distance the computer generates a motion control command after comprehensively processing the position data and transmits it to the robot control box. According to the command, the control box drives the servo motors, which rotate the robot's omnidirectional wheels so that the robot unit moves and the visual reference piece approaches the detection well. Whether the distance difference between the current detection well and the visual reference piece is within the threshold range is then judged: if yes, fine positioning begins; otherwise coarse positioning restarts. When the distance between the detection well and the detector is smaller than the set threshold, the fine positioning system starts, and the visual servo fine positioning system, composed of the radiation-resistant camera carried by the robot and the X-Y fine adjustment mechanism, begins to work.
Step S4 of the present embodiment includes the following substeps:
step S4A: detecting an inner ring of the detection well by using Hough circle detection;
step S4B: mapping pixel difference of the center of the inner ring of the detection well and the center of the visual reference piece in the X direction to actual distance difference;
step S4C: controlling the X-Y fine adjustment mechanism to move in the X direction;
step S4D: in the moving process of the X-Y fine adjustment mechanism, judging whether the actual distance difference between the center of the ring in the detection well and the center of the visual reference member in the X direction is less than 0.5 cm; if yes, go to step S4E; if not, returning to the step S4A;
step S4E: detecting an inner ring of the detection well by using Hough circle detection;
step S4F: mapping pixel difference of the center of the ring in the detection well and the center of the visual reference member in the Y direction to actual distance difference;
step S4G: controlling the X-Y fine adjustment mechanism to move in the Y direction;
step S4H: in the moving process of the X-Y fine adjustment mechanism, judging whether the actual distance difference between the center of the ring in the detection well and the center of the visual reference piece in the Y direction is less than 0.5 cm; if yes, ending the program; if not, the step S4E is returned to.
The Hough circle detection of the embodiment comprises the following steps:
step SA: collecting an image;
step SB: carrying out graying processing on the image to obtain a gray value of the image;
step SC: carrying out edge extraction on the gray value of the image by using a Sobel operator to obtain an image value after edge extraction;
step SD: carrying out binarization processing and noise filtering processing on the image value after edge extraction to obtain an edge point value;
step SF: quantizing the parameter space by establishing three axes a, b and r, calculating all points at distance r from each edge point value, and building the column matrices [a, b]^T;
step SH: carrying out row-column vector accumulation on the matrices [a, b]^T for the different distances r;
step SI: and taking the accumulated maximum value as the center of a circle in the corresponding image space.
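Steps SA-SI above can be condensed into a small pure-Python voting sketch (illustrative only: the edge points are idealized and noise-free, the search ranges are assumed, and a real implementation would quantize and filter far more carefully):

```python
import math
from collections import Counter

def hough_circle_peak(edge_points, a_range, b_range):
    """Quantized (a, b, r) voting: every edge point (x, y) votes for each
    candidate center (a, b) with r taken as the rounded distance; the
    accumulator peak is returned as the detected circle (steps SF-SI)."""
    acc = Counter()
    for (x, y) in edge_points:
        for a in a_range:
            for b in b_range:
                r = round(math.hypot(x - a, y - b))
                acc[(a, b, r)] += 1
    return acc.most_common(1)[0][0]

# Idealized edge points of a circle centered at (30, 40) with radius 10.
edge = [(30 + 10 * math.cos(math.radians(d)), 40 + 10 * math.sin(math.radians(d)))
        for d in range(0, 360, 10)]
print(hough_circle_peak(edge, range(25, 36), range(35, 46)))  # -> (30, 40, 10)
```

Every point of the true circle votes into the same (a, b, r) cell, so the correct center accumulates far more votes than any competing cell.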
The edge extraction formulas of the Sobel operator in this embodiment are:
f_x(i, j) = G_x * g(i, j)
f_y(i, j) = G_y * g(i, j)
f(i, j) = √(f_x(i, j)² + f_y(i, j)²)
where G_x denotes horizontal edge processing, G_y denotes vertical edge processing, f_x(i, j) is the value after horizontal edge processing, f_y(i, j) is the value after vertical edge processing, and g(i, j) is the gray value of the image.
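To make the Sobel operators concrete, here is a minimal pure-Python sketch of the two 3×3 kernels written out from the G_x and G_y expressions in the text, combined into the magnitude f(i, j) = √(f_x² + f_y²). The image is a plain list of lists; a real system would use an optimized image-processing library.

```python
import math

# 3x3 Sobel kernels, written out from the text's expressions: Gx subtracts
# row i-1 from row i+1 (weights 1, 2, 1), Gy subtracts column j-1 from j+1.
GX = [[-1, -2, -1],
      [ 0,  0,  0],
      [ 1,  2,  1]]
GY = [[-1, 0, 1],
      [-2, 0, 2],
      [-1, 0, 1]]

def sobel_magnitude(g, i, j):
    """Edge magnitude f(i, j) = sqrt(fx^2 + fy^2) at an interior pixel of
    the grayscale image g (a list of rows)."""
    fx = sum(GX[u][v] * g[i - 1 + u][j - 1 + v] for u in range(3) for v in range(3))
    fy = sum(GY[u][v] * g[i - 1 + u][j - 1 + v] for u in range(3) for v in range(3))
    return math.hypot(fx, fy)

# A horizontal step edge: two dark rows above two bright rows.
img = [[0, 0, 0, 0],
       [0, 0, 0, 0],
       [255, 255, 255, 255],
       [255, 255, 255, 255]]
print(sobel_magnitude(img, 1, 1))            # -> 1020.0 (strong edge response)
print(sobel_magnitude([[7] * 3] * 3, 1, 1))  # -> 0.0 (flat region, no edge)
```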
In the implementation of this embodiment, the distance relationship between the detection well and the visual reference piece is obtained through Hough circle detection, and the X-Y fine adjustment mechanism is servo-controlled until the positioning accuracy on the detection well reaches the set threshold. Unlike coarse positioning, visual fine positioning divides the servo process into two parts: the first part brings the distance between the detection well and the visual reference piece below the set threshold in the X direction, as shown in fig. 4, and the second part does the same in the Y direction, as shown in fig. 5. The specific steps are as follows:
First, the image is converted to grayscale; the Sobel operator is then used for edge extraction, and binarization is performed after edge detection.
The parameter space is suitably quantized to obtain a three-dimensional accumulator array recording (a, b, r):
(x_i - a)² + (y_i - b)² = r²
where r is the radius of the circle and (a, b) is its center. When detecting a circle in image space, all centers (a, b) at distance r from each edge pixel are computed and accumulated in the corresponding array; after all edge points have been transformed, all accumulated values in the three-dimensional array are examined, and the peak corresponds to the circle center in image space.
After the detection well and the visual reference piece have been detected in the image, the positioning problem is converted into a position-approach problem between the inner-ring center and the reference feature. A rectangular coordinate system is established at the reference feature center, as shown in fig. 4. First, in the X direction, the pixel difference between the detection-well center and the reference feature center is mapped to the actual distance difference by the following formula:
ΔX = (Δpixel × d) / P
where ΔX is the actual distance difference between the center of the detection well and the center of the visual reference piece in the X direction, Δpixel is the pixel difference between the two centers in the X direction, d is the actual diameter of the detection well, and P is the diameter of the detection well in pixels.
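Because the actual diameter d of the detection well is known, this formula scales pixel offsets to real distances without any camera calibration. A one-line sketch with illustrative numbers (the 20 cm / 200 px values are assumptions, not from the patent):

```python
def pixel_to_distance(delta_pixel, d_cm, p_pixels):
    """ΔX = Δpixel · d / P: map a pixel offset to an actual distance in cm,
    given the well's real diameter d (cm) and apparent diameter P (pixels)."""
    return delta_pixel * d_cm / p_pixels

# A 20 cm well imaged 200 px wide: a 40 px offset is 4 cm of displacement.
print(pixel_to_distance(40, 20.0, 200))  # -> 4.0
```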
The computer subtracts the given X-direction distance threshold from the received actual X-direction distance difference, generates a motion control command after comprehensively processing the position data, and transmits it to the robot control box. According to the command, the control box drives the servo motor, which moves the X-Y fine adjustment mechanism to control the robot's motion in the X direction so that the visual reference piece approaches the detection well. Whether the X-direction distance difference between the current detection well and the visual reference center is within the threshold range is then judged: if yes, the Y-direction visual servo starts; if not, position adjustment continues.
As shown in fig. 5, the visual servoing in the Y direction is similar to that in the X direction. The Y-direction pixel difference between the detection-well center and the reference feature center is first mapped to the actual distance difference by the following formula:
ΔY = (Δpixel × d) / P
where ΔY is the actual distance difference between the center of the detection well and the center of the visual reference piece in the Y direction, Δpixel is the pixel difference between the two centers in the Y direction, d is the actual diameter of the detection well, and P is the diameter of the detection well in pixels.
The computer subtracts the given Y-direction distance threshold from the received actual Y-direction distance difference, generates a motion control command after comprehensively processing the position data, and transmits it to the robot control box. According to the command, the control box drives the servo motor, which moves the X-Y fine adjustment mechanism to control the robot's motion in the Y direction so that the visual reference piece approaches the detection well. Whether the Y-direction distance difference between the current detection well and the visual reference center is within the threshold range is then judged: if yes, the visual servo positioning process ends; if not, position adjustment continues.
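Putting the two mapping formulas together with the 0.5 cm fine-positioning tolerance of steps S4D and S4H gives the following sketch (all names and numbers are illustrative, not from the patent):

```python
FINE_THRESHOLD_CM = 0.5  # tolerance used in steps S4D and S4H

def fine_offsets_cm(center_px, ref_px, d_cm, p_pixels):
    """Map the pixel offsets between the detected well center and the
    visual-reference center to real distances:
    ΔX = Δpixel_x · d / P and ΔY = Δpixel_y · d / P."""
    scale = d_cm / p_pixels
    dx = (center_px[0] - ref_px[0]) * scale
    dy = (center_px[1] - ref_px[1]) * scale
    return dx, dy

# Illustrative frame: well center at (322, 245) px, reference at (320, 240) px,
# 20 cm well imaged 200 px wide.
dx, dy = fine_offsets_cm((322, 245), (320, 240), d_cm=20.0, p_pixels=200)
print(abs(dx) < FINE_THRESHOLD_CM)  # -> True  (X already within tolerance)
print(abs(dy) < FINE_THRESHOLD_CM)  # -> False (Y still needs adjustment)
```

Servoing X first and then Y, as the patent does, lets each axis be checked against the threshold independently with a fresh detection per step.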

Claims (4)

1. A rapid positioning method of a nuclear robot based on a visual reference piece is characterized in that,
the nuclear robot comprises an integral motion mechanism (1), an X-Y fine adjustment mechanism (2), a lifting mechanism (3), a control cabinet (4), a nuclear detector (5), a visual servo part (6) and four Mikan mother wheels (7); the X-Y fine adjustment mechanism (2) is arranged on the top surface of the integral movement mechanism (1), the nuclear detector (5) is arranged on the X-Y fine adjustment mechanism (2) in a penetrating mode, the lifting mechanism (3) is arranged on the outer surface of the nuclear detector (5), the visual servo part (6) is movably arranged on the side surface of the lifting mechanism (3), the four Michelson wheels (7) are arranged on four corners of the bottom surface of the integral movement mechanism (1), and the control cabinet (4) is arranged on the outer surface of the lifting mechanism (3);
the visual servo part (6) comprises a radiation-resistant camera (6-1) and a visual reference piece (6-2), and the radiation-resistant camera (6-1) is arranged on the top surface of the visual reference piece (6-2);
the method comprises the following steps:
step S1: guiding the nuclear robot to the detection well using the video information of the radiation-resistant camera carried by the nuclear robot and of the out-of-reactor monitoring camera;
step S2: adjusting the pose of the nuclear robot through visual guidance;
step S3: performing visual servo coarse positioning on the nuclear robot through the radiation-resistant camera carried by the nuclear robot and its four Mecanum wheels;
step S4: carrying out visual servo accurate positioning on the nuclear robot by utilizing an X-Y fine adjustment mechanism on the platform and a radiation-resistant camera carried by the nuclear robot;
the step S3 includes the following sub-steps:
step S3A: detecting an inner ring of the detection well by using Hough circle detection;
step S3B: mapping pixel difference of the center of the ring in the detection well and the center of the visual reference member in the X direction to actual distance difference;
step S3C: controlling the four Mecanum wheels to move in the X direction;
step S3D: during the movement of the four Mecanum wheels, judging whether the actual distance difference between the center of the inner ring of the detection well and the center of the visual reference piece in the X direction is less than 3 cm; if yes, go to step S3E; if not, return to step S3A;
step S3E: detecting an inner ring of the detection well by using Hough circle detection;
step S3F: mapping pixel difference of the center of the ring in the detection well and the center of the visual reference member in the Y direction to actual distance difference;
step S3G: controlling the four Mecanum wheels to move in the Y direction;
step S3H: during the movement of the four Mecanum wheels, judging whether the actual distance difference between the center of the inner ring of the detection well and the center of the visual reference piece in the Y direction is less than 3 cm; if yes, go to step S4; if not, return to step S3E.
2. The visual reference-based rapid positioning method for a nuclear robot according to claim 1, wherein said step S4 comprises the following sub-steps:
step S4A: detecting the inner ring of the detection well by Hough circle detection;
step S4B: mapping the X-direction pixel difference between the center of the detection-well inner ring and the center of the visual reference piece to an actual distance difference;
step S4C: controlling the X-Y fine adjustment mechanism to move in the X direction;
step S4D: while the X-Y fine adjustment mechanism is moving, judging whether the actual X-direction distance difference between the center of the detection-well inner ring and the center of the visual reference piece is less than 0.5 cm; if yes, proceeding to step S4E; if not, returning to step S4A;
step S4E: detecting the inner ring of the detection well by Hough circle detection;
step S4F: mapping the Y-direction pixel difference between the center of the detection-well inner ring and the center of the visual reference piece to an actual distance difference;
step S4G: controlling the X-Y fine adjustment mechanism to move in the Y direction;
step S4H: while the X-Y fine adjustment mechanism is moving, judging whether the actual Y-direction distance difference between the center of the detection-well inner ring and the center of the visual reference piece is less than 0.5 cm; if yes, ending the procedure; if not, returning to step S4E.
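Steps S3B/S4B and S3F/S4F both depend on mapping a pixel difference to a metric distance. The claims do not spell out the mapping; one plausible approach (an assumption here, not stated in the patent) is to use the detected inner ring itself as the scale reference, since its physical radius is known:

```python
def px_to_cm(pixel_offset, ring_radius_px, ring_radius_cm):
    """Convert a pixel offset to centimetres, using the detected
    detection-well inner ring as the scale reference (assumed approach)."""
    scale = ring_radius_cm / ring_radius_px  # cm per pixel at the well plane
    return pixel_offset * scale

# e.g. a 40 px center offset, with a 10 cm ring imaged at a 200 px radius
offset_cm = px_to_cm(40, 200, 10.0)
```

Deriving the scale from the ring in every frame keeps the mapping valid as the camera height changes between the coarse and fine stages.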
3. The visual reference-based rapid positioning method for nuclear robots according to claim 1 or 2, characterized in that the Hough circle detection comprises the following steps:
step SA: acquiring an image;
step SB: converting the image to grayscale to obtain the gray values of the image;
step SC: performing edge extraction on the gray values of the image with the Sobel operator to obtain edge-extracted image values;
step SD: performing binarization and noise filtering on the edge-extracted image values to obtain edge point values;
step SF: quantizing the parameter space into a space with three axes a, b and r, calculating all points at distance r from each edge point value, and forming column matrices [a, b]^T, where a in the matrix [a, b]^T represents the a-axis coordinate and b represents the b-axis coordinate;
step SH: performing row-column vector accumulation over the matrices [a, b]^T for the different distances r;
step SI: taking the accumulated maximum as the center of the circle in the corresponding image space.
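Steps SF-SI describe classical Hough voting: every edge point votes for all candidate centers (a, b) at distance r, and the accumulator maximum is taken as the circle center. A minimal sketch for a single, fixed radius (a hypothetical helper, not the patented code):

```python
import numpy as np

def hough_circle_center(edge_points, r, shape, n_theta=360):
    """Accumulate votes in (a, b) space for a fixed radius r and
    return the accumulator maximum as the circle center (steps SF-SI)."""
    acc = np.zeros(shape, dtype=np.int32)
    thetas = np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False)
    for x, y in edge_points:
        # every (a, b) at distance r from the edge point gets one vote
        a = np.round(x - r * np.cos(thetas)).astype(int)
        b = np.round(y - r * np.sin(thetas)).astype(int)
        ok = (a >= 0) & (a < shape[0]) & (b >= 0) & (b < shape[1])
        np.add.at(acc, (a[ok], b[ok]), 1)
    return np.unravel_index(acc.argmax(), acc.shape)

# synthetic check: edge points on a circle of radius 20 centered at (50, 50)
ts = np.linspace(0.0, 2.0 * np.pi, 120, endpoint=False)
pts = [(round(50 + 20 * np.cos(t)), round(50 + 20 * np.sin(t))) for t in ts]
a, b = hough_circle_center(pts, 20, (100, 100))
```

In practice a library routine such as OpenCV's `cv2.HoughCircles` searches over a radius range with a gradient-based accumulator rather than looping per radius as above.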
4. The visual reference-based rapid positioning method for a nuclear robot as claimed in claim 3, wherein the edge extraction formulas of the Sobel operator are:
fx(i,j) = Gx * g(i,j)
fy(i,j) = Gy * g(i,j)
f(i,j) = sqrt(fx(i,j)^2 + fy(i,j)^2)
where Gx denotes horizontal edge processing, Gy denotes vertical edge processing, fx(i,j) is the value after horizontal edge processing, fy(i,j) is the value after vertical edge processing, f(i,j) is the resulting edge magnitude, and g(i,j) is the gray value of the image.
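The formulas above can be sketched directly. The 3×3 kernels below are the standard Sobel Gx and Gy (an assumption, since the patent's kernel values are not reproduced in this text), with f(i, j) taken as the gradient magnitude:

```python
import numpy as np

# standard 3x3 Sobel kernels (assumed; not printed in the patent text)
GX = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]])   # horizontal edge processing
GY = np.array([[-1, -2, -1], [0, 0, 0], [1, 2, 1]])   # vertical edge processing

def sobel_edge(g):
    """Return f(i, j) = sqrt(fx^2 + fy^2) for a grayscale image g."""
    h, w = g.shape
    f = np.zeros((h, w))
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            patch = g[i - 1:i + 2, j - 1:j + 2]
            fx = np.sum(GX * patch)   # fx(i, j) = Gx * g(i, j)
            fy = np.sum(GY * patch)   # fy(i, j) = Gy * g(i, j)
            f[i, j] = np.hypot(fx, fy)
    return f

# a vertical step edge: left half dark, right half bright
img = np.zeros((5, 6))
img[:, 3:] = 255.0
edges = sobel_edge(img)
```

The response is large only along the brightness step and zero in the flat regions, which is what makes the subsequent binarization in step SD straightforward.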
CN201910043308.7A 2019-01-17 2019-01-17 Nuclear robot rapid positioning method based on visual reference piece Active CN109760006B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910043308.7A CN109760006B (en) 2019-01-17 2019-01-17 Nuclear robot rapid positioning method based on visual reference piece


Publications (2)

Publication Number Publication Date
CN109760006A CN109760006A (en) 2019-05-17
CN109760006B true CN109760006B (en) 2020-09-29

Family

ID=66452845


Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112157644B (en) * 2020-10-10 2022-03-01 西南科技大学 Auxiliary installation robot system for out-of-pile nuclear detector
CN112192198B (en) * 2020-10-10 2022-12-20 西南科技大学 Auxiliary mounting method for out-of-pile detector
CN113770704A (en) * 2021-09-26 2021-12-10 中国船舶重工集团公司第七一九研究所 Quick installation robot of detector
CN116372941B (en) * 2023-06-05 2023-08-15 北京航空航天大学杭州创新研究院 Robot parameter calibration method and device and wheeled robot

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI408397B (en) * 2008-08-15 2013-09-11 Univ Nat Chiao Tung Automatic navigation device with ultrasonic and computer vision detection and its navigation method
CN201998168U (en) * 2010-12-29 2011-10-05 浙江省电力公司 Visual servo-based accurate tripod head positioning system for movable robot
CN102621525B (en) * 2012-03-01 2014-05-21 西南科技大学 System and method for locating radioactive pollution source based on remote operating device
CN103606386B (en) * 2013-11-19 2016-04-27 中国科学院光电技术研究所 Robot for checking claw of control rod driving mechanism of nuclear power station
CN107731329B (en) * 2017-10-31 2019-09-10 中广核检测技术有限公司 Control rod guide tubes and bundles split pin detects robot and localization method
CN108458707B (en) * 2018-01-22 2020-03-10 西南科技大学 Autonomous positioning method and positioning system of operating robot in multi-pendulous pipeline scene



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant