CN114612775A - Bar full-line tracking method and device based on machine vision - Google Patents

Bar full-line tracking method and device based on machine vision

Info

Publication number: CN114612775A (granted as CN114612775B)
Authority: CN (China)
Application number: CN202210211558.9A
Other languages: Chinese (zh)
Inventors: 石杰, 吴昆鹏, 杨朝霖, 邓能辉
Current assignee: USTB Design and Research Institute Co Ltd
Legal status: Granted; Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F17/00 Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F17/10 Complex mathematical operations
    • G06F17/16 Matrix or vector computation, e.g. matrix-matrix or matrix-vector multiplication, matrix factorization

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Pure & Applied Mathematics (AREA)
  • Mathematical Analysis (AREA)
  • Mathematical Optimization (AREA)
  • Computational Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Computing Systems (AREA)
  • Algebra (AREA)
  • Databases & Information Systems (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Length Measuring Devices By Optical Means (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a machine-vision-based method and device for full-line tracking of bars, in the technical field of machine-vision inspection. The method comprises the following steps: first, a plurality of area-array cameras are arranged in a distributed manner above the running roller way, and image acquisition is synchronized across cameras by a common pulse signal; the position and size of each bar on the production line are then obtained by applying, in sequence, processing algorithms such as perspective transformation, instance segmentation, and image stitching; meanwhile, each bar entering the tracked roller-way area is assigned an identifying number at the initial stage, so that full-line tracking of the bars is achieved. With this visual recognition and tracking technique, the distribution of bars on the production-line roller way can be determined comprehensively and intuitively, resolving the difficulty that bars could not previously be tracked piece by piece during production; losses, jams, mixed batches, and similar situations can be discovered in time, greatly raising the intelligence level of the bar processing line.

Description

Bar full-line tracking method and device based on machine vision
Technical Field
The invention relates to the technical field of machine vision detection, in particular to a bar full-line tracking method and device based on machine vision.
Background
After production, bars pass through cooling, segmented shearing, and other processes before finally being collected on the traversing cooling bed. During this process the bars change from advancing singly and in order to advancing in multiple disordered groups; the operation is very complex, and losses, jams, mixed batches, and similar phenomena can occur at any point of the movement. When the state of each bar and the parent batch number of each cut section cannot be determined, information tracing is difficult once a quality problem arises. At present, most bar lines aim to build an intelligent factory, automating production operations and visualizing the production process to form an intelligent, minimally staffed bar workshop, and to improve production efficiency and stability through optimization and upgrading of the automation system. In this context, developing an effective piece-by-piece bar tracking technique and achieving real-time positioning and tracking of bars becomes all the more important.
In the prior art, tracking, judgment, and troubleshooting are carried out manually, which entails a heavy workload, low efficiency, and poor accuracy, and greatly limits the intelligent development of the production line.
Disclosure of Invention
The invention provides a bar full-line tracking method and device based on machine vision, aiming at the problems of large workload, low efficiency and poor accuracy caused by manual tracking, judgment and troubleshooting in the prior art.
In order to solve the technical problems, the invention provides the following technical scheme:
in one aspect, a method for tracking a bar material all-line based on machine vision is provided, and the method is applied to an electronic device, and comprises the following steps:
s1: arranging a plurality of industrial area-array cameras above a running roller way of a bar post-processing production line, and synchronously acquiring images of the state of bars on the production line through the industrial area-array cameras to generate a distribution stop-motion picture of the bars at T moment; wherein, a shooting overlapping area exists among the plurality of industrial area-array cameras; each bar is given an initial number such as 1, 2 and 3.. n when entering a visual tracking roller way for the first time;
s2: carrying out perspective transformation on the distribution stop-motion picture of the bar at the T moment to obtain a rectangular area image of the roller way;
s3: inputting the roller bed rectangular area image into a preset example segmentation algorithm, and outputting to obtain foreground coordinate position information of the bar in the image;
s4: sequentially splicing the same bar images shot by the adjacent cameras in the industrial area-array cameras by utilizing the coordinate information of the foreground bars extracted from the overlapped areas shot by the adjacent cameras in the industrial area-array cameras to obtain a position distribution map of each bar at the T moment of the whole production line;
s5: if the sawing process operation instruction is received, the serial numbers of all the bars in the position distribution diagram are redistributed and the bars are tracked; and if no sawing process operation instruction exists, the whole bar tracking of the production line is completed.
Optionally, in step S1, arranging the plurality of industrial area-array cameras above the running roller way of the bar post-processing production line comprises:
the number of the industrial area-array cameras is calculated by the following formula (1):
Figure BDA0003532653250000021
wherein N is the number of industrial area-array cameras; l is the length of the whole bar post-processing production line, L is the shooting range of a single industrial area-array camera, and gamma is the coincidence proportion of the shooting ranges between adjacent cameras;
the shooting range of a single industrial area-array camera is calculated according to the following formula (2):
Figure BDA0003532653250000022
wherein h is the height from the industrial area array camera to the roller way, and theta is the field angle of the industrial area array camera;
calculating the installation spacing distance of the adjacent industrial area-array cameras according to the following formula (3):
d=l(1-γ) (3)。
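As a hedged numerical check of formulas (1)-(3), the script below combines them. The values used (234 m line length from the embodiment, 6 m mounting height, 80° field angle, 5% overlap) are assumptions for illustration, not figures fixed by the claims:

```python
import math

def camera_layout(L_mm, h_mm, theta_deg, gamma):
    """Coverage l per formula (2), spacing d per (3), camera count N per (1)."""
    l = 2 * h_mm * math.tan(math.radians(theta_deg) / 2)  # formula (2)
    d = l * (1 - gamma)                                   # formula (3)
    N = math.ceil(L_mm / d)                               # formula (1), rounded up
    return l, d, N

# Assumed values: 234 m line, 6 m height, 80 degree field angle, 5% overlap
l, d, N = camera_layout(L_mm=234_000, h_mm=6_000, theta_deg=80, gamma=0.05)
```

With these assumptions the script reproduces the embodiment's figure of roughly 25 cameras at about 10 m spacing.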
optionally, in step S1, the method for generating a distribution stop-motion picture of the bars on the production line at time T by performing synchronous image acquisition on the states of the bars on the production line through the industrial area-array camera includes:
triggering a plurality of industrial area-array cameras to synchronously acquire by adopting PWM (pulse-width modulation) wave control signals, and generating a distribution stop-motion picture of the bars on the production line at the time T;
the trigger signal is a PWM rising edge signal, and the frequency of the trigger signal is designed according to the following formula (4):
Figure BDA0003532653250000031
wherein f is the frequency of the trigger signal; v is the running speed of the roller way and the unit is mm/s.
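Formula (4) is published only as an image; the sketch below assumes the reconstruction f = v / 100, i.e. one trigger per 100 mm of roller-way travel, which reproduces the embodiment's 15 frames/s at v = 1500 mm/s. The travel-per-frame constant is an assumption, not confirmed by the source text:

```python
def trigger_frequency(v_mm_s, travel_per_frame_mm=100.0):
    """Assumed reconstruction of formula (4): one PWM rising edge for every
    `travel_per_frame_mm` of roller-way travel at speed v (mm/s)."""
    return v_mm_s / travel_per_frame_mm

f = trigger_frequency(1500)  # embodiment roller-way speed: 1500 mm/s
```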
Optionally, in step S2, performing perspective transformation on the freeze-frame picture of the distribution of the bars on the production line at time T to obtain the rectangular roller-way area image comprises:
for the freeze-frame picture of the bar distribution at time T, performing perspective transformation according to the position coordinates of the 4 vertices of the quadrilateral roller-way area in the picture,
(0, y_l^s), (W, y_r^s), (W, y_r^e), (0, y_l^e),
to obtain a rectangular area image containing only the roller way;
the perspective coordinate transformation is the 3×3 homography matrix determined by mapping these four vertices to the corners of the target rectangle,
(0, 0), (W, 0), (W, H), (0, H);
where y_l^s and y_l^e are the start and stop pixels of the left-side vertical coordinate of the roller-way area in the image acquired by the industrial area-array camera, y_r^s and y_r^e are the start and stop pixels of the right-side vertical coordinate, W is the width of the image acquired by the industrial area-array camera, and H is the image height after perspective transformation.
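The rectification step maps the four vertices of the roller-way quadrilateral to a W × H rectangle. A minimal pure-NumPy sketch of the homography solve (equivalent in spirit to OpenCV's getPerspectiveTransform); the pixel values for the quadrilateral are invented for illustration:

```python
import numpy as np

def homography(src, dst):
    """Solve the 3x3 perspective matrix H with H @ [x, y, 1] ~ [u, v, 1]
    for four point correspondences (the h33 entry is fixed to 1)."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = np.linalg.solve(np.asarray(A, dtype=float), np.asarray(b, dtype=float))
    return np.append(h, 1.0).reshape(3, 3)

# Invented example: the left edge of the roller way spans image rows 100..400,
# the right edge rows 150..400; rectify to a W x H rectangle.
W, H = 1920, 300
src = [(0, 100), (W, 150), (W, 400), (0, 400)]  # roller-way quadrilateral
dst = [(0, 0), (W, 0), (W, H), (0, H)]          # target rectangle
M = homography(src, dst)
```

In a real deployment the source quadrilateral would come from the calibrated start/stop pixels y_l^s, y_l^e, y_r^s, y_r^e of each camera.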
Optionally, in step S3, inputting the rectangular roller-way area image into a preset instance segmentation algorithm and outputting the foreground coordinate position information of the bars in the image comprises:
inputting the rectangular roller-way area image into a preset instance segmentation algorithm, obtaining an instance segmentation image from the segmentation model, distinguishing each bar object, and calculating the pixel position and size of each individual bar; instance segmentation algorithms include, but are not limited to, the SOLO and YOLACT algorithms.
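Downstream of the segmentation model, each bar's pixel position and size can be read off a labelled instance mask. A minimal sketch, assuming the mask is an integer label image (0 = background, k = bar k) — the representation and function name are illustrative assumptions:

```python
import numpy as np

def bar_positions(instance_mask):
    """Return {bar_id: (centroid_row, centroid_col, height_px, width_px)}
    for every labelled bar in an integer instance mask."""
    info = {}
    for k in np.unique(instance_mask):
        if k == 0:
            continue  # background
        rows, cols = np.nonzero(instance_mask == k)
        info[int(k)] = (float(rows.mean()), float(cols.mean()),
                        int(np.ptp(rows)) + 1, int(np.ptp(cols)) + 1)
    return info

# Toy mask: two horizontal "bars" on a 6 x 10 roller-way image
mask = np.zeros((6, 10), dtype=int)
mask[1, 1:9] = 1
mask[4, 2:8] = 2
info = bar_positions(mask)
```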
Optionally, in step S4, sequentially splicing the images of the same bar captured by adjacent cameras, using the foreground bar coordinate information extracted from the overlap regions shot by adjacent cameras, to obtain the position distribution map of each bar over the whole production line at time T and complete full-line tracking, comprises:
S41: acquiring the instance segmentation images captured by two adjacent cameras among the plurality of industrial area-array cameras;
S42: taking the 1/2 position of the overlap region of the two adjacent cameras' instance segmentation images as the splicing point, and merging the two instance segmentation images;
S43: defining bar objects that lie at the splicing point in both camera images as the same bar object in the spliced image; splicing the instance segmentation maps of all the industrial area-array cameras in sequence to obtain the position distribution map of each bar over the whole production line at time T, completing full-line tracking of the bars.
Optionally, in step S5, if a sawing process operation instruction is received, reassigning the number of each bar in the position distribution map comprises:
upon receiving the sawing process operation instruction, receiving the sawing signal sent by the field primary (Level-1) automation system and reassigning the numbers of the bars in the bar distribution map.
Optionally, reassigning the numbers of the bars in the bar distribution map comprises:
each bar is assigned an initial number 1, 2, 3, ..., n when it first enters the visually tracked roller way; when a sawing process operation instruction is received, the bar enters the sawing position, and on receipt of the sawing signal issued by the primary automation system, new numbers are assigned to the bar sections before and after the sawing position;
the sections before the sawing position are numbered 1-1, 2-1, 3-1, ..., n-1, and the sections after it are numbered 1-2, 2-2, 3-2, ..., n-2; the bar number is the unique identifier distinguishing the tracking of an individual bar.
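The renumbering rule can be sketched as a small helper. The string form "n-1"/"n-2" follows the numbering given above; the function name is hypothetical:

```python
def renumber_on_saw(bar_ids):
    """Apply the sawing rule: tracked id n yields 'n-1' for the section
    before the saw position and 'n-2' for the section after it."""
    before = [f"{n}-1" for n in bar_ids]
    after = [f"{n}-2" for n in bar_ids]
    return before, after

before, after = renumber_on_saw([1, 2, 3])
```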
In one aspect, a machine-vision-based bar full-line tracking device is provided, applied to an electronic device, the device comprising:
an image acquisition module for arranging a plurality of industrial area-array cameras above the running roller way of the bar post-processing production line, synchronously acquiring images of the state of the bars on the line through the cameras, and generating the freeze-frame picture of the bar distribution at time T, a shooting overlap region existing between adjacent cameras;
a perspective transformation module for performing perspective transformation on the freeze-frame picture of the bar distribution at time T to obtain the rectangular roller-way area image;
an instance segmentation module for inputting the rectangular roller-way area image into a preset instance segmentation algorithm and outputting the foreground coordinate positions of the bars in the image;
an image splicing module for sequentially splicing the images of the same bar captured by adjacent cameras, using the foreground bar coordinate information extracted from the overlap regions, to obtain the position distribution map of each bar over the whole production line at time T;
a process sawing module for reassigning the number of each bar in the position distribution map and tracking the bars if a sawing process operation instruction is received; if there is no sawing process operation instruction, full-line tracking of the bars on the production line is complete.
Optionally, the image acquisition module is further configured to:
the number of the industrial area-array cameras is calculated by the following formula (1):
Figure BDA0003532653250000051
wherein N is the number of industrial area-array cameras; l is the length of the whole bar post-processing production line, L is the shooting range of a single industrial area-array camera, and gamma is the coincidence proportion of the shooting ranges between adjacent cameras;
the shooting range of a single industrial area-array camera is calculated according to the following formula (2):
Figure BDA0003532653250000052
h is the height from the industrial area-array camera to the roller way, and theta is the field angle of the industrial area-array camera;
calculating the installation spacing distance of the adjacent industrial area-array cameras according to the following formula (3):
d=l(1-γ) (3)。
in one aspect, an electronic device is provided, which includes a processor and a memory, where at least one instruction is stored in the memory, and the at least one instruction is loaded and executed by the processor to implement the above-mentioned method for machine vision-based all-line bar tracking.
In one aspect, a computer-readable storage medium is provided, in which at least one instruction is stored, and the at least one instruction is loaded and executed by a processor to implement the above-mentioned one machine vision-based bar full-line tracking method.
The technical scheme of the embodiment of the invention at least has the following beneficial effects:
in the scheme, the machine vision system replaces manual judgment, the area array cameras are arranged at the cooling rack, the sawing point, the segmented shear transportation roller way and the collection cooling bed, the field of vision of the cameras covers the whole post-processing area, and after the images collected by the cameras are spliced and processed, the position of each bar is tracked in real time and the source of the parent metal of the bar is positioned, so that the automatic tracking of the bars one by one is finally realized. The mode is convenient to install and deploy, the motion state of the bar on the production line obtained by extraction is displayed more visually, the bar has higher accuracy and interference resistance, the application range is wide, and the technical support is provided for realizing the one-by-one tracking of the bar production line.
Drawings
To illustrate the technical solutions in the embodiments of the present invention more clearly, the drawings required for describing the embodiments are briefly introduced below. The drawings described below are only some embodiments of the present invention; those skilled in the art can derive other drawings from them without creative effort.
FIG. 1 is a flowchart of a bar full-line tracking method based on machine vision according to an embodiment of the present invention;
FIG. 2 is a flowchart of a method for tracking a bar material along a whole line based on machine vision according to an embodiment of the present invention;
FIG. 3 is a diagram of a field installation position of a single camera of a bar full-line tracking method based on machine vision according to an embodiment of the present invention;
fig. 4 is a schematic view illustrating field distribution and control of a bar full-line tracking method based on machine vision according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of the instance segmentation result splicing process of a bar full-line tracking method based on machine vision according to an embodiment of the present invention;
fig. 6 is a schematic illustration of the number assignment of the bars in the sawing process of the bar full-line tracking method based on machine vision according to the embodiment of the present invention;
fig. 7 is a block diagram of an apparatus of a bar full-line tracking apparatus based on machine vision according to an embodiment of the present invention;
fig. 8 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
In order to make the technical problems, technical solutions and advantages of the present invention more apparent, the following detailed description is given with reference to the accompanying drawings and specific embodiments.
The embodiment of the invention provides a bar full-line tracking method based on machine vision, which can be realized by electronic equipment, wherein the electronic equipment can be a terminal or a server. The process flow of the method for tracking the whole line of the bar based on the machine vision as shown in fig. 1 may include the following steps:
s101: arranging a plurality of industrial area-array cameras above a running roller way of a bar post-processing production line, and synchronously acquiring images of the state of bars on the production line through the industrial area-array cameras to generate a distribution stop-motion picture of the bars at T moment; wherein, a shooting overlapping area exists among the plurality of industrial area-array cameras;
s102: carrying out perspective transformation on the distribution stop-motion picture of the bar at the T moment to obtain a rectangular area image of the roller way;
s103: inputting the roller bed rectangular area image into a preset example segmentation algorithm, and outputting to obtain foreground coordinate position information of the bar in the image;
s104: sequentially splicing the same bar images shot by the adjacent cameras in the industrial area-array cameras by utilizing the coordinate information of the foreground bars extracted from the overlapped areas shot by the adjacent cameras in the industrial area-array cameras to obtain a position distribution map of each bar at the T moment of the whole production line;
s105: if the sawing process operation instruction is received, redistributing the numbers of each bar in the position distribution diagram and tracking the bars; and if no sawing process operation instruction exists, the whole bar tracking of the production line is completed.
Optionally, in step S101, arranging the plurality of industrial area-array cameras above the running roller way of the bar post-processing production line comprises:
the number of the industrial area-array cameras is calculated by the following formula (1):
Figure BDA0003532653250000071
wherein N is the number of industrial area-array cameras; l is the length of the whole bar post-processing production line, L is the shooting range of a single industrial area-array camera, and gamma is the coincidence proportion of the shooting ranges between adjacent cameras;
the shooting range of a single industrial area-array camera is calculated according to the following formula (2):
Figure BDA0003532653250000072
wherein h is the height from the industrial area array camera to the roller way, and theta is the field angle of the industrial area array camera;
calculating the installation spacing distance of the adjacent industrial area-array cameras according to the following formula (3):
d=l(1-γ) (3)。
optionally, in step S101, performing synchronous image acquisition on the state of the rod on the production line by using the industrial area-array camera, and generating a distribution freeze-frame picture of the rod on the production line at time T, including:
triggering a plurality of industrial area-array cameras to synchronously acquire by adopting PWM (pulse-width modulation) wave control signals, and generating a distribution stop-motion picture of the bar at the T moment;
the trigger signal is a PWM rising edge signal, and the frequency of the trigger signal is designed according to the following formula (4):
Figure BDA0003532653250000081
wherein f is the frequency of the trigger signal; v is the running speed of the roller way and the unit is mm/s.
Optionally, in step S102, performing perspective transformation on the freeze-frame picture of the bar distribution at time T to obtain the rectangular roller-way area image comprises:
for the freeze-frame picture of the bar distribution at time T, performing perspective transformation according to the position coordinates of the 4 vertices of the quadrilateral roller-way area in the picture,
(0, y_l^s), (W, y_r^s), (W, y_r^e), (0, y_l^e),
to obtain a rectangular area image containing only the roller way;
the perspective coordinate transformation is the 3×3 homography matrix determined by mapping these four vertices to the corners of the target rectangle,
(0, 0), (W, 0), (W, H), (0, H);
where y_l^s and y_l^e are the start and stop pixels of the left-side vertical coordinate of the roller-way area in the image acquired by the industrial area-array camera, y_r^s and y_r^e are the start and stop pixels of the right-side vertical coordinate, W is the width of the image acquired by the industrial area-array camera, and H is the image height after perspective transformation.
Optionally, in step S103, inputting the rectangular roller-way area image into a preset instance segmentation algorithm and outputting the foreground coordinate position information of the bars in the image comprises:
inputting the rectangular roller-way area image into a preset instance segmentation algorithm, obtaining an instance segmentation image from the segmentation model, distinguishing each bar object, and calculating the pixel position and size of each individual bar; instance segmentation algorithms include, but are not limited to, the SOLO and YOLACT algorithms.
Optionally, in step S104, sequentially splicing the images of the same bar captured by adjacent cameras, using the foreground bar coordinate information extracted from the overlap regions shot by adjacent cameras, to obtain the position distribution map of each bar over the whole production line at time T and complete full-line tracking, comprises:
S141: acquiring the instance segmentation images captured by two adjacent cameras among the plurality of industrial area-array cameras;
S142: taking the 1/2 position of the overlap region of the two adjacent cameras' instance segmentation images as the splicing point, and merging the two instance segmentation images;
S143: defining bar objects that lie at the splicing point in both camera images as the same bar object in the spliced image; splicing the instance segmentation maps of all the industrial area-array cameras in sequence to obtain the position distribution map of each bar over the whole production line at time T, completing full-line tracking of the bars.
Optionally, in step S105, if a sawing process operation instruction is received, reassigning the number of each bar in the position distribution map comprises:
upon receiving the sawing process operation instruction, receiving the sawing signal sent by the field primary (Level-1) automation system and reassigning the numbers of the bars in the bar distribution map.
Optionally, reassigning the numbers of the bars in the bar distribution map comprises:
each bar is assigned an initial number 1, 2, 3, ..., n when it first enters the visually tracked roller way; when a sawing process operation instruction is received, the bar enters the sawing position, and on receipt of the sawing signal issued by the primary automation system, new numbers are assigned to the bar sections before and after the sawing position;
the sections before the sawing position are numbered 1-1, 2-1, 3-1, ..., n-1, and the sections after it are numbered 1-2, 2-2, 3-2, ..., n-2; the bar number is the unique identifier distinguishing the tracking of an individual bar.
In the embodiment of the invention, a machine vision system replaces manual judgment: area-array cameras are arranged at the cooling rack, the sawing point, the segmented-shear transport roller way, and the collecting cooling bed, so that the cameras' fields of view cover the whole post-processing area. After the images collected by the cameras are spliced and processed, the position of each bar is tracked in real time and the source of its parent material is located, finally realizing automatic piece-by-piece tracking of the bars. This approach is easy to install and deploy, presents the extracted motion state of the bars on the production line intuitively, offers high accuracy and interference resistance, has a wide application range, and provides technical support for piece-by-piece tracking of bar production lines.
The embodiment of the invention provides a bar full-line tracking method based on machine vision, which can be realized by electronic equipment, wherein the electronic equipment can be a terminal or a server. The process flow of the method for tracking the whole line of the bar based on the machine vision as shown in fig. 2 may include the following steps:
s201: arranging a plurality of industrial area-array cameras above a running roller way of a bar post-processing production line, and synchronously acquiring images of the states of bars on the production line through the industrial area-array cameras to generate a distribution stop-check picture of the bars on the production line at the moment T; wherein, there is the shooting overlapping region between a plurality of industry area array camera.
In one possible embodiment, the invention uses visual recognition technology to track the bar post-processing line piece by piece over its full length, from cooling and sawing through to the traversing cooling bed. The tracked roller way is 234 m long, the roller-way speed is 1500 mm/s, the bar diameter ranges from 45 to 130 mm, and the bar length ranges from 4 to 10 m.
An industrial area-array camera with 8 megapixels and a 4 mm lens is selected, with a designed coverage within 10 m. The cameras are arranged in sequence along the running direction of the bars at the roller-way positions to be monitored, with the camera supports mounted beside the roller way so that each camera sits directly above the center of the roller way, as shown in fig. 3; the moving bars are thus captured clearly in the images, and their positions are easy to identify.
Industrial area-array cameras are arranged at a fixed spacing above the running roller way of the bar post-processing production line to acquire images of the state of the roller way and the bars, with a certain overlap between the shooting areas of adjacent cameras.
In a possible embodiment, the number of industrial area-array cameras is calculated by the following formula (1):
N = ⌈L / (l(1 − γ))⌉ (1)
where N is the number of industrial area-array cameras, L is the length of the whole bar post-processing production line, l is the shooting range of a single industrial area-array camera, and γ is the overlap proportion of the shooting ranges of adjacent cameras, generally set to 4%-6% of the image width.
The shooting range of a single industrial area-array camera is calculated according to the following formula (2):
l = 2h·tan(θ/2) (2)
where h is the height of the industrial area-array camera above the roller way and θ is the field angle of the industrial area-array camera;
the installation spacing of adjacent industrial area-array cameras is calculated according to the following formula (3):
d = l(1 − γ) (3).
In a feasible implementation, a PWM control signal triggers the plurality of industrial area-array cameras to acquire synchronously, generating the freeze-frame picture of the bar distribution on the production line at time T.
In this embodiment, calculated according to the above formulas, about 25 cameras are required, with a spacing of 10 m between adjacent cameras. Fig. 4 is a schematic view of the camera field-of-view distribution and control according to an embodiment of the present invention.
In one possible embodiment, the trigger signal is a PWM rising edge signal, and the frequency of the trigger signal is designed according to the following equation (4):
f=v/(γ·l) (4)
wherein f is the frequency of the trigger signal; v is the running speed of the roller way and the unit is mm/s.
In this embodiment, the trigger frequency of the camera is 15 frames/s.
S202: and carrying out perspective transformation on the distribution stop-motion picture of the bars on the production line at the T moment to obtain a rectangular area image of the roller way.
In a possible embodiment, for the distribution freeze-frame picture of the bars at time T, perspective transformation is carried out according to the 4 vertex position coordinates of the roller-way quadrilateral area in the picture,
(0, yl1), (0, yl2), (W, yr1), (W, yr2),
to obtain a rectangular area image containing only the roller way;
wherein the perspective coordinate transformation matrix M is the 3×3 homography that maps these 4 vertices to the 4 corners (0, 0), (W, 0), (W, H′), (0, H′) of the output rectangle:
[u′, v′, w′]ᵀ = M·[x, y, 1]ᵀ, (u, v) = (u′/w′, v′/w′);
wherein (0, yl1) and (0, yl2) are the start and stop pixels of the ordinate on the left side of the roller-way area in the image acquired by the industrial area-array camera; (W, yr1) and (W, yr2) are the start and stop pixels of the ordinate on the right side of the roller-way area; W is the width of the image acquired by the industrial area-array camera; and H′ is the image height after perspective transformation.
In a feasible implementation, perspective transformation is performed on the image acquired by each camera at time T according to the vertex position coordinates of the roller-way quadrilateral area in that image, obtaining rectangular area images containing only the roller way; the transformed images of all cameras remain consistent in size.
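The rectification step can be sketched with a direct linear transform that solves the 3×3 homography from the four vertex correspondences; in practice a library routine such as OpenCV's `getPerspectiveTransform` would typically be used instead. The image width, output height and roller-way ordinates below are hypothetical values, not taken from the patent.

```python
import numpy as np

def perspective_matrix(src, dst):
    """Solve the 3x3 homography M mapping 4 src points to 4 dst points
    (direct linear transform; the element M[2,2] is fixed to 1)."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    m = np.linalg.solve(np.array(A, float), np.array(b, float))
    return np.append(m, 1.0).reshape(3, 3)

# Hypothetical roller-way quadrilateral: left-edge ordinates y1_l..y2_l,
# right-edge ordinates y1_r..y2_r, image width W, output height H_out.
W, H_out = 1920, 600
y1_l, y2_l, y1_r, y2_r = 300, 820, 260, 790
src = [(0, y1_l), (W, y1_r), (W, y2_r), (0, y2_l)]       # quadrilateral
dst = [(0, 0), (W, 0), (W, H_out), (0, H_out)]           # W x H' rectangle
M = perspective_matrix(src, dst)
```

Applying M to a homogeneous pixel coordinate and dividing by the third component gives its position in the rectified roller-way image.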
S203: inputting the roller-way rectangular area image into a preset instance segmentation algorithm, and outputting the foreground coordinate position information of the bars in the image.
In a feasible implementation, the roller-way rectangular area image is input into a preset instance segmentation algorithm, an instance segmentation map is output by the segmentation model, each bar object is distinguished, and the pixel position and size information of each individual bar are calculated.
In a feasible implementation, foreground segmentation is performed on the rectangular area image; the bar foreground instance segmentation algorithm includes, but is not limited to, algorithms such as SOLO and YOLACT. The camera-acquired image is input into the segmentation model, the output instance segmentation map distinguishes each bar object, and the center-point coordinates, length and diameter of each individual bar can be calculated.
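A minimal sketch of the per-bar measurement that follows segmentation, assuming the model returns one binary mask per bar and that bars lie roughly along the x-axis of the rectified image (both assumptions, not details stated in the patent):

```python
import numpy as np

def bar_geometry(mask):
    """Center point, length and mean diameter of one bar from its
    binary instance mask (bar assumed to run along the image x-axis)."""
    ys, xs = np.nonzero(mask)                 # pixel coordinates of the bar
    center = (xs.mean(), ys.mean())           # bar center point
    length = int(xs.max() - xs.min() + 1)     # extent along the roller axis
    diameter = mask.sum() / length            # mean vertical thickness
    return center, length, float(diameter)
```

For real SOLO/YOLACT outputs this would be applied to each predicted instance mask in turn.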
S204: acquiring the instance segmentation images of two adjacent cameras among the plurality of industrial area-array cameras;
S205: merging the instance segmentation images of the two cameras, taking the 1/2 position of the overlapping region of the instance segmentation images of the two adjacent cameras as the splicing point;
S206: using the foreground bar coordinate information extracted from the overlap areas shot by adjacent cameras among the plurality of industrial area-array cameras, stitching in sequence the images of the same bar shot by the adjacent cameras to obtain the position distribution map of each bar on the whole production line at time T.
In a possible implementation manner, fig. 5 is a schematic diagram of the stitching process of the camera-image instance segmentation results provided by an embodiment of the present invention.
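The column splice of steps S204-S205 can be sketched as follows; the arrays stand in for instance-segmentation label maps of two adjacent cameras, and the overlap width in pixels is a hypothetical value:

```python
import numpy as np

def stitch_pair(seg_left, seg_right, overlap_px):
    """Merge two adjacent cameras' segmentation maps, splicing at the
    1/2 position of their overlapping region (step S205): the left image
    is kept up to the splice column, the right image from the column
    that views the same physical position."""
    half = overlap_px // 2
    return np.hstack([seg_left[:, :seg_left.shape[1] - half],
                      seg_right[:, overlap_px - half:]])
```

Chaining this pairwise merge over all cameras yields the full-line distribution map; the resulting width is the sum of the image widths minus the overlaps.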
In a possible embodiment, the position distribution map of each bar on the whole production line at time T comprises: the center-point coordinates and the size specification of each bar.
Preferably, after step S206, the method further includes:
S207: if a sawing process operation instruction is received, the numbers of the bars in the position distribution map are reassigned, the bars are tracked, and process sawing is carried out; if there is no sawing process operation instruction, the full-line tracking of the bars on the production line is completed.
In one possible embodiment, each bar is given an initial number, such as 1, 2, 3, …, n, when it first enters the visual tracking roller way. When a sawing process operation instruction is received, the bar enters the sawing position; upon receiving the sawing signal issued by the level-1 automation system, new numbers are assigned to the bar pieces before and after the sawing position.
As shown in fig. 6, a schematic diagram of bar number assignment during the sawing process, the pieces before the sawing position are numbered 1-1, 2-1, 3-1, …, n-1, and the pieces after the sawing position are numbered 1-2, 2-2, 3-2, …, n-2. The bar number is the unique identifier used to distinguish and track an individual bar.
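Under the numbering scheme of fig. 6, the renumbering at a saw cut can be sketched as below. Representing each tracked piece by its parent number and centre x-coordinate, and the saw by a single x-position, are simplifying assumptions for illustration; the real system tracks full foreground masks.

```python
def renumber_on_saw(bars, saw_x):
    """Reassign bar numbers at a saw cut: pieces before the saw position
    get the '-1' suffix, pieces after it get '-2', so every piece stays
    traceable to its parent bar number.

    `bars` maps parent number -> centre x-coordinate of a piece
    (hypothetical representation).
    """
    return {f"{num}-{1 if x < saw_x else 2}": x for num, x in bars.items()}
```

For example, pieces centred at x = 120 and x = 480 with a saw at x = 300 become "n-1" and "n-2" respectively.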
In the embodiment of the invention, a machine vision system replaces manual judgment. Area-array cameras are arranged at the cooling rack, the sawing point, the segmented-shear transport roller way and the collection cooling bed, so that the camera fields of view cover the whole post-processing area. After the images collected by the cameras are stitched and processed, the position of each bar is tracked in real time and the parent-metal source of each bar is located, finally realizing automatic one-by-one tracking of the bars. This mode is convenient to install and deploy, displays the extracted motion state of the bars on the production line more intuitively, has higher accuracy and interference resistance, has a wide application range, and provides technical support for realizing one-by-one tracking on bar production lines.
Fig. 7 is a block diagram illustrating a machine-vision-based bar full-line tracking apparatus according to an exemplary embodiment. Referring to fig. 7, the apparatus 300 includes:
The image acquisition module 310 is used for arranging a plurality of industrial area-array cameras above a running roller way of the bar post-processing production line, and performing synchronous image acquisition on the state of the bars on the production line through the industrial area-array cameras to generate a distribution stop-motion picture of the bars on the production line at the moment T; wherein, a shooting overlapping area exists among the plurality of industrial area-array cameras;
the perspective transformation module 320 is used for performing perspective transformation on the distribution stop-motion picture of the bars at the time T to obtain a rectangular area image of the roller way;
the example segmentation module 330 is configured to input the roller bed rectangular region image to a preset example segmentation algorithm, and output foreground coordinate position information of the bar in the image;
the image splicing module 340 is configured to sequentially splice the same bar images shot by the adjacent cameras in the industrial area-array cameras by using the foreground bar coordinate information extracted from the overlapping area shot by the adjacent cameras in the industrial area-array cameras to obtain a position distribution map of each bar at the T moment of the whole production line;
the process sawing module 350 is configured to, if a sawing process operation instruction is received, reassign the number of each bar in the position distribution map and track the bar; and if no sawing process operation instruction exists, the whole bar tracking of the production line is completed.
Optionally, the image acquisition module 310 is further configured to: the number of the plurality of industrial area-array cameras is calculated by the following formula (1):
N=⌈L/(l(1-γ))⌉ (1)
wherein N is the number of industrial area-array cameras; L is the length of the whole bar post-processing production line; l is the shooting range of a single industrial area-array camera; and γ is the overlap proportion of the shooting ranges of adjacent cameras;
the shooting range of a single industrial area-array camera is calculated according to the following formula (2):
l=2h·tan(θ/2) (2)
wherein h is the height from the industrial area array camera to the roller way, and theta is the field angle of the industrial area array camera;
calculating the installation spacing distance of the adjacent industrial area-array cameras according to the following formula (3):
d=l(1-γ) (3)。
Optionally, the image acquisition module 310 is further configured to: trigger the plurality of industrial area-array cameras to acquire synchronously using a PWM wave control signal, generating a freeze-frame picture of the bar distribution at time T;
the trigger signal is a PWM rising edge signal, and the frequency of the trigger signal is designed according to the following formula (4):
f=v/(γ·l) (4)
wherein f is the frequency of the trigger signal; v is the running speed of the roller way and the unit is mm/s.
Optionally, the perspective transformation module 320 is further configured to, for the distribution freeze-frame picture of the bars at time T, carry out perspective transformation according to the 4 vertex position coordinates of the roller-way quadrilateral area in the picture,
(0, yl1), (0, yl2), (W, yr1), (W, yr2),
to obtain a rectangular area image containing only the roller way;
wherein the perspective coordinate transformation matrix M is the 3×3 homography that maps these 4 vertices to the 4 corners (0, 0), (W, 0), (W, H′), (0, H′) of the output rectangle:
[u′, v′, w′]ᵀ = M·[x, y, 1]ᵀ, (u, v) = (u′/w′, v′/w′);
wherein (0, yl1) and (0, yl2) are the start and stop pixels of the ordinate on the left side of the roller-way area in the image acquired by the industrial area-array camera; (W, yr1) and (W, yr2) are the start and stop pixels of the ordinate on the right side of the roller-way area; W is the width of the image acquired by the industrial area-array camera; and H′ is the image height after perspective transformation.
Optionally, the instance segmentation module 330 is further configured to input the roller-way rectangular area image into a preset instance segmentation algorithm, obtain an instance segmentation map output by the segmentation model, distinguish each bar object, and calculate the pixel position and size information of each individual bar; the instance segmentation algorithm includes, but is not limited to, the SOLO and YOLACT algorithms.
Optionally, the image stitching module 340 is further configured to acquire the instance segmentation images of two adjacent cameras among the plurality of industrial area-array cameras;
merge the instance segmentation images of the two cameras, taking the 1/2 position of their overlapping region as the splicing point;
define the bar objects at the splicing point in the two camera images as the same bar object in the stitched image; and stitch the instance segmentation maps of all the industrial area-array cameras in sequence to obtain the position distribution map of each bar on the whole production line at time T, completing the full-line tracking of the bars.
Optionally, the process sawing module 350 is further configured to, when the sawing process operation instruction is received, receive the sawing signal sent by the on-site level-1 automation system and reassign the numbers of the bars in the bar distribution map.
Optionally, the process sawing module 350 is further configured to assign an initial number, such as 1, 2, 3, …, n, to each bar when it first enters the visual tracking roller way; when a sawing process operation instruction is received, the bar enters the sawing position, and upon receiving the sawing signal issued by the level-1 automation system, new numbers are assigned to the bar pieces before and after the sawing position;
the pieces before the sawing position are numbered 1-1, 2-1, 3-1, …, n-1, and the pieces after the sawing position are numbered 1-2, 2-2, 3-2, …, n-2; the bar number is the unique identifier used to distinguish and track an individual bar.
Fig. 8 is a schematic structural diagram of an electronic device 400 according to an embodiment of the present invention. The electronic device 400 may vary considerably in configuration and performance, and may include one or more processors (CPUs) 401 and one or more memories 402, wherein at least one instruction is stored in the memory 402 and is loaded and executed by the processor 401 to implement the following steps of the machine-vision-based bar full-line tracking method:
S1: through a plurality of industrial area-array cameras arranged above the visual tracking roller way of the bar post-processing production line, synchronous image acquisition is performed on the state of the bars on the production line, generating a freeze-frame picture of the distribution of the bars on the production line at time T; wherein shooting overlap areas exist between the plurality of industrial area-array cameras;
S2: a roller-way rectangular area image is obtained by performing perspective transformation on the freeze-frame picture of the bar distribution at time T;
S3: the roller-way rectangular area image is input into a preset instance segmentation algorithm, and the foreground coordinate position information of the bars in the image is output;
S4: using the foreground bar coordinate information extracted from the overlap areas shot by adjacent cameras among the plurality of industrial area-array cameras, the images of the same bar shot by the adjacent cameras are stitched in sequence to obtain the position distribution map of each bar on the whole production line at time T;
S5: if a sawing process operation instruction is received, the numbers of the bars in the position distribution map are reassigned and the bars are tracked; if there is no sawing process operation instruction, the full-line tracking of the bars on the production line is completed.
In an exemplary embodiment, a computer-readable storage medium, such as a memory, is also provided that includes instructions executable by a processor in a terminal to perform the machine vision-based bar full-line tracking method described above. For example, the computer readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program instructing relevant hardware, where the program may be stored in a computer-readable storage medium, and the above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and should not be taken as limiting the scope of the present invention, which is intended to cover any modifications, equivalents, improvements, etc. within the spirit and scope of the present invention.

Claims (10)

1. A bar full-line tracking method based on machine vision is characterized by comprising the following steps:
S1: arranging a plurality of industrial area-array cameras above the running roller way of the bar post-processing production line, and performing synchronous image acquisition on the state of the bars on the post-processing production line through the industrial area-array cameras to generate a freeze-frame picture of the distribution of the bars at time T; wherein shooting overlap areas exist between the plurality of industrial area-array cameras;
S2: performing perspective transformation on the freeze-frame picture of the bar distribution at time T to obtain a roller-way rectangular area image;
S3: inputting the roller-way rectangular area image into a preset instance segmentation algorithm, and outputting the foreground coordinate position information of the bars in the image;
S4: stitching in sequence the images of the same bar shot by adjacent cameras among the plurality of industrial area-array cameras by using the foreground bar coordinate information extracted from the overlap areas shot by the adjacent cameras, to obtain the position distribution map of each bar on the whole post-processing production line at time T;
S5: if a sawing process operation instruction is received, reassigning the numbers of the bars in the position distribution map and performing process sawing; if there is no sawing process operation instruction, the full-line tracking of the bars on the post-processing production line is completed.
2. The method for tracking the whole bar according to claim 1, wherein in step S1, a plurality of industrial area-array cameras are disposed above the operation roller table of the bar post-processing production line, and the method comprises:
the number of the industrial area-array cameras is calculated by the following formula (1):
N=⌈L/(l(1-γ))⌉ (1)
wherein N is the number of industrial area-array cameras; L is the length of the whole bar post-processing production line; l is the shooting range of a single industrial area-array camera; and γ is the overlap proportion of the shooting ranges of adjacent cameras;
calculating the shooting range of a single industrial area-array camera according to the following formula (2):
l=2h·tan(θ/2) (2)
h is the height between the industrial area-array camera and the roller way, and theta is the field angle of the industrial area-array camera;
calculating the installation spacing distance d between adjacent industrial area-array cameras according to the following formula (3):
d=l(1-γ) (3)。
3. The method according to claim 1, wherein in step S1, the industrial area-array cameras perform synchronous image acquisition on the state of the bars on the post-processing production line to generate a freeze-frame picture of the bar distribution at time T, comprising:
triggering the industrial area-array cameras to synchronously acquire by adopting PWM (pulse-width modulation) wave control signals, and generating a distribution stop-motion picture of the bar at the T moment;
the trigger signal is a PWM rising edge signal, and the frequency of the trigger signal is designed according to the following formula (4):
f=v/(γ·l) (4)
wherein f is the frequency of the trigger signal; v is the running speed of the roller way and the unit is mm/s.
4. The method according to claim 1, wherein the step S2 of obtaining the rectangular area image of the roller track by performing perspective transformation on the freeze-frame picture of the distribution of the bar at the time T includes:
aiming at the distribution freeze-frame picture of the bars at time T, carrying out perspective transformation according to the 4 vertex position coordinates of the roller-way quadrilateral area in the picture,
(0, yl1), (0, yl2), (W, yr1), (W, yr2),
to obtain a rectangular area image containing only the roller way;
wherein the perspective coordinate transformation matrix M is the 3×3 homography that maps these 4 vertices to the 4 corners (0, 0), (W, 0), (W, H′), (0, H′) of the output rectangle:
[u′, v′, w′]ᵀ = M·[x, y, 1]ᵀ, (u, v) = (u′/w′, v′/w′);
wherein (0, yl1) and (0, yl2) are the start and stop pixels of the ordinate on the left side of the roller-way area in the image acquired by the industrial area-array camera; (W, yr1) and (W, yr2) are the start and stop pixels of the ordinate on the right side of the roller-way area; W is the width of the image acquired by the industrial area-array camera; and H′ is the image height after perspective transformation.
5. The method for tracking the whole bar line according to claim 1, wherein in step S3, inputting the roller-way rectangular area image into a preset instance segmentation algorithm and outputting the foreground coordinate position information of the bars in the image comprises:
inputting the roller-way rectangular area image into a preset instance segmentation algorithm, outputting an instance segmentation map through the segmentation model, distinguishing each bar object, and calculating the pixel position and size information of each individual bar; the instance segmentation algorithm includes, but is not limited to, the SOLO and YOLACT algorithms.
6. The bar full-line tracking method based on machine vision according to claim 1, wherein in step S4, the images of the same bar shot by adjacent cameras among the plurality of industrial area-array cameras are stitched in sequence by using the foreground bar coordinate information extracted from the overlap areas shot by the adjacent cameras, so as to obtain the position distribution map of each bar on the whole post-processing production line at time T and complete the full-line tracking of the bars, comprising:
S41: acquiring the instance segmentation images of two adjacent cameras among the plurality of industrial area-array cameras;
S42: merging the instance segmentation images of the two cameras, taking the 1/2 position of the overlapping region of the instance segmentation images of the two adjacent cameras as the splicing point;
S43: defining the bar objects at the splicing point in the two camera images as the same bar object in the stitched image; and stitching the instance segmentation maps of all the industrial area-array cameras in sequence to obtain the position distribution map of each bar on the whole production line at time T, completing the full-line tracking of the bars.
7. The machine-vision-based bar full-line tracking method according to claim 1, wherein in step S5, if the sawing process operation instruction is received, reassigning the number of each bar in the position distribution map and performing process sawing comprises:
when the sawing process operation instruction is received, receiving the sawing signal sent by the on-site level-1 automation system, reassigning the numbers of the bars in the bar distribution map, and performing process sawing.
8. The machine-vision-based bar full-line tracking method according to claim 7, wherein reassigning the numbers of the bars in the bar distribution map and performing process sawing comprises:
each bar is given an initial number, 1, 2, 3, …, n, when it first enters the visual tracking roller way; when a sawing process operation instruction is received, the bar enters the sawing position, and upon receiving the sawing signal issued by the level-1 automation system, new numbers are assigned to the bar pieces before and after the sawing position;
the pieces before the sawing position are numbered 1-1, 2-1, 3-1, …, n-1, and the pieces after the sawing position are numbered 1-2, 2-2, 3-2, …, n-2; the bar number is the unique identifier used to distinguish and track an individual bar.
9. A full-line bar tracking device based on machine vision is characterized in that the device comprises:
the system comprises an image acquisition module, a data acquisition module and a data processing module, wherein the image acquisition module is used for arranging a plurality of industrial area-array cameras above a running roller way of a bar post-processing production line, and performing synchronous image acquisition on the state of bars on the production line through the industrial area-array cameras to generate a distribution stop-motion picture of the bars at T moment; wherein shooting overlapping areas exist among the plurality of industrial area-array cameras;
the perspective transformation module is used for carrying out perspective transformation on the distribution stop-motion picture of the bar at the T moment to obtain a rectangular area image of the roller way;
the instance segmentation module is used for inputting the roller-way rectangular area image into a preset instance segmentation algorithm and outputting the foreground coordinate position information of the bars in the image;
the image stitching module is used for stitching in sequence the images of the same bar shot by adjacent cameras among the plurality of industrial area-array cameras by using the foreground bar coordinate information extracted from the overlap areas shot by the adjacent cameras, to obtain the position distribution map of each bar on the whole production line at time T;
the process sawing module is used for redistributing the serial numbers of each bar in the position distribution diagram and performing process sawing if the sawing process operation instruction is received; and if no sawing process operation instruction exists, the whole bar tracking of the production line is completed.
10. The machine-vision-based all-wire bar tracking device of claim 9, wherein the image acquisition module is further configured to:
the number of the industrial area-array cameras is calculated by the following formula (1):
N=⌈L/(l(1-γ))⌉ (1)
wherein N is the number of industrial area-array cameras; L is the length of the whole bar post-processing production line; l is the shooting range of a single industrial area-array camera; and γ is the overlap proportion of the shooting ranges of adjacent cameras;
the shooting range of a single industrial area-array camera is calculated according to the following formula (2):
l=2h·tan(θ/2) (2)
h is the height between the industrial area-array camera and the roller way, and theta is the field angle of the industrial area-array camera;
calculating the installation spacing distance d between adjacent industrial area-array cameras according to the following formula (3):
d=l(1-γ) (3)。
CN202210211558.9A 2022-03-04 Machine vision-based bar full-line tracking method and device Active CN114612775B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210211558.9A CN114612775B (en) 2022-03-04 Machine vision-based bar full-line tracking method and device


Publications (2)

Publication Number Publication Date
CN114612775A true CN114612775A (en) 2022-06-10
CN114612775B CN114612775B (en) 2024-07-05

Family

ID=


Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109190616A (en) * 2018-08-03 2019-01-11 东北大学 A kind of online Vision Tracking of hot rolled steel plate based on feature identification
CN110688965A (en) * 2019-09-30 2020-01-14 北京航空航天大学青岛研究院 IPT (inductive power transfer) simulation training gesture recognition method based on binocular vision
CN111521838A (en) * 2020-04-24 2020-08-11 北京科技大学 Hot-rolled coil speed measuring method combining linear-area array camera
CN111553236A (en) * 2020-04-23 2020-08-18 福建农林大学 Road foreground image-based pavement disease target detection and example segmentation method
CN112465937A (en) * 2020-11-03 2021-03-09 影石创新科技股份有限公司 Method for generating stop motion animation, computer readable storage medium and computer device
CN113139900A (en) * 2021-04-01 2021-07-20 北京科技大学设计研究院有限公司 Method for acquiring complete surface image of bar


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
SHI Yangyang et al.: "DSP-based autonomous carrier-landing recognition algorithm for UAV", Journal of Terahertz Science and Electronic Information Technology, vol. 11, no. 05, 25 October 2013 (2013-10-25), page 712 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115598132A (en) * 2022-10-10 2023-01-13 东北大学(Cn) Bar counting and alignment detection device and method based on machine vision
CN115598132B (en) * 2022-10-10 2024-06-07 东北大学 Bar counting and alignment detection device and method based on machine vision

Similar Documents

Publication Publication Date Title
CN109388093B (en) Robot attitude control method and system based on line feature recognition and robot
US8730396B2 (en) Capturing events of interest by spatio-temporal video analysis
CN110266938B (en) Transformer substation equipment intelligent shooting method and device based on deep learning
CN113284154B (en) Steel coil end face image segmentation method and device and electronic equipment
CN105830426A (en) Video generating method and device of video generating system
CN112132853B (en) Method and device for constructing ground guide arrow, electronic equipment and storage medium
CN113365028B (en) Method, device and system for generating routing inspection path
US20230060211A1 (en) System and Method for Tracking Moving Objects by Video Data
CN110599453A (en) Panel defect detection method and device based on image fusion and equipment terminal
EP3800575B1 (en) Visual camera-based method for identifying edge of self-shadowing object, device, and vehicle
CN110349116B (en) Algorithm for splicing area-array camera pictures
CN109948436B (en) Method and device for monitoring vehicles on road
CN111242066A (en) Large-size image target detection method and device and computer readable storage medium
CN114612775B (en) Machine vision-based bar full-line tracking method and device
Wang et al. Improving facade parsing with vision transformers and line integration
CN114612775A (en) Bar full-line tracking method and device based on machine vision
CN113570587A (en) Photovoltaic cell broken grid detection method and system based on computer vision
Shamsollahi et al. A timely object recognition method for construction using the mask R-CNN architecture
CN110930437B (en) Target tracking method and device
CN111951328A (en) Object position detection method, device, equipment and storage medium
CN111862206A (en) Visual positioning method and device, electronic equipment and readable storage medium
CN113095345A (en) Data matching method and device and data processing equipment
CN110674778B (en) High-resolution video image target detection method and device
CN113569752B (en) Lane line structure identification method, device, equipment and medium
CN115439792A (en) Monitoring method and system based on artificial intelligence

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant