CN110321824B - Binding determination method and device based on neural network - Google Patents

Binding determination method and device based on neural network

Info

Publication number
CN110321824B
CN110321824B
Authority
CN
China
Prior art keywords
binding
points
neural network
image
point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910553334.4A
Other languages
Chinese (zh)
Other versions
CN110321824A (en)
Inventor
张伟民 (Zhang Weimin)
郭子原 (Guo Ziyuan)
孙尧 (Sun Yao)
梁震烁 (Liang Zhenshuo)
黄强 (Huang Qiang)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Haribit Intelligent Technology Co ltd
Original Assignee
Beijing Haribit Intelligent Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Haribit Intelligent Technology Co ltd
Priority to CN201910553334.4A
Publication of CN110321824A
Application granted
Publication of CN110321824B
Legal status: Active

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/21: Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214: Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/10: Terrestrial scenes

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The application discloses a binding determination method and device based on a neural network. In the method, a binding robot acquires an image of the current area through a camera on its rack, the current area being the rebar binding area that the robot can identify from its current position. The image is recognized with a neural network model to determine the positions of the binding points in the current area and to judge whether binding of those points is finished. If binding is not finished, the robot arm is controlled to align with the position of an unbound point, and a close-range image of that point is acquired through a camera on the arm. The close-range image is then recognized with the neural network model to verify whether the point is indeed unbound. The application thereby solves the problem that determining whether binding points have been bound by manual inspection is time-consuming and labor-intensive.

Description

Binding determination method and device based on neural network
Technical Field
The application relates to the technical field of robots, in particular to a binding determination method and device based on a neural network.
Background
In actual building construction, rebar structures must be bound at the initial construction stage so that later construction can proceed better and more safely. A practical way to bind rebar quickly is to use a binding robot. To remove manual intervention from the binding process entirely, the robot must be able to automatically identify binding points and determine whether each point has already been bound, avoiding both repeated binding and missed binding. At present, whether binding points have been bound is determined by manual inspection, but this is time-consuming and labor-intensive and reduces rebar binding efficiency.
Disclosure of Invention
The main purpose of the application is to provide a binding determination method based on a neural network, to solve the time-consuming and labor-intensive problem of determining whether binding points are bound by manual operation.
In order to achieve the above object, according to a first aspect of the present application, there is provided a neural network-based binding determination method.
The binding determination method based on the neural network comprises the following steps:
a binding robot acquires an image of the current area through a camera on a rack, the current area being the rebar binding area that the binding robot can identify from its current position;
recognizing the image of the current area based on a neural network model, determining the positions of binding points in the current area, and judging whether binding of those points is finished;
if binding is not finished, controlling the robot arm to align with the position of an unbound point and acquiring a close-range image of the unbound point through a camera on the arm;
and recognizing the close-range image of the unbound point based on the neural network model to verify whether the point is unbound.
Further, the recognizing the image of the current region based on the neural network model, determining the position of the binding point in the current region and judging whether the binding of the binding point is completed comprises:
segmenting an image of a current area to obtain a plurality of binding point images, wherein each binding point image corresponds to one binding point;
identifying each binding point image to determine whether the binding points are bound;
and if all the binding points are the bound points, determining that the binding points in the current area are bound.
Further, if all the binding points are already bound points, determining that the binding of the binding points in the current region is completed includes:
counting the number of bound points;
calculating a recognition rate, the recognition rate being the ratio of the number of bound points to the total number of binding points in the image of the current area;
and determining, according to the recognition rate, whether the binding points in the current area have all been bound.
Further, the method further comprises:
acquiring training samples, the training samples being a preset number of rebar binding area images marked with bound points and unbound points;
and carrying out model training according to the training samples to obtain a neural network model.
Further, the neural network model is a convolutional neural network model, and the obtaining of the neural network model by performing model training according to the training samples further includes:
performing gain processing on the training samples;
a random inactivation Dropout method is adopted to randomly remove neurons in the training process;
and performing local normalization processing on the output of the linear rectification layer corresponding to the used linear rectification function.
Furthermore, the camera on the rack is a depth camera, and the camera on the mechanical arm is a high-resolution camera.
In order to achieve the above object, according to a second aspect of the present application, there is provided a neural network-based binding determination apparatus.
The neural network-based binding determination apparatus according to the present application includes:
the first acquisition unit is used for acquiring an image of a current area by the binding robot through a camera on the rack, wherein the current area is a reinforcement binding area which can be identified by the binding robot in the current position;
the first identification unit is used for identifying the image of the current region based on the neural network model, determining the position of a binding point in the current region and judging whether the binding of the binding point is finished;
the second acquisition unit is used for, if binding is not finished, controlling the robot arm to align with the position of an unbound point and acquiring a close-range image of the unbound point through a camera on the arm;
and the second identification unit is used for recognizing the close-range images of the unbound points based on the neural network model and verifying whether those points are indeed unbound.
Further, the first identification unit includes:
the segmentation module is used for segmenting the image of the current area to obtain a plurality of binding point images, and each binding point image corresponds to one binding point;
the identification module is used for identifying each binding point image to determine whether the binding points are bound;
and the determining module is used for determining that the binding points in the current area finish binding if all the binding points are the bound points.
Further, the determining module is configured to:
counting the number of bound points;
calculating a recognition rate, the recognition rate being the ratio of the number of bound points to the total number of binding points in the image of the current area;
and determining, according to the recognition rate, whether the binding points in the current area have all been bound.
Further, the apparatus further comprises:
the third acquisition unit is used for acquiring training samples, the training samples being a preset number of rebar binding area images marked with bound points and unbound points;
and the training unit is used for carrying out model training according to the training samples to obtain the neural network model.
Further, the neural network model is a convolutional neural network model, and the training unit further includes:
the gain module is used for performing gain processing on the training samples;
the removing module is used for randomly removing the neurons in the training process by adopting a random inactivation Dropout method;
and the normalization processing module is used for performing local normalization processing on the output of the linear rectification layer corresponding to the used linear rectification function.
Furthermore, the camera on the rack is a depth camera, and the camera on the mechanical arm is a high-resolution camera.
In order to achieve the above object, according to a third aspect of the present application, there is provided an electronic apparatus comprising:
at least one processor;
and at least one memory and a bus connected with the processor; wherein
the processor and the memory communicate with each other through the bus;
and the processor is configured to invoke program instructions in the memory to perform the neural network-based binding determination method of any one of the first aspects.
In order to achieve the above object, according to a fourth aspect of the present application, there is provided a non-transitory computer-readable storage medium storing computer instructions that cause a computer to execute the neural network-based binding determination method of any one of the above first aspects.
In the embodiments of the present application, in the neural network-based binding determination method and device, the binding robot first acquires an image of the current area through the camera on the rack, recognizes the image with the neural network model, preliminarily judges whether binding of the binding points is finished, and determines the positions of the binding points in the current area. If binding is preliminarily determined to be unfinished, the robot arm is controlled to align with the position of an unbound point, and a close-range image of the unbound point is acquired through the camera on the arm; that image is then recognized with the neural network model again to verify whether the point is unbound. The whole process involves no manual intervention, saving time and labor. In addition, because the binding state is first judged and then verified again through this double image recognition, accurate determination of the binding state of each binding point is ensured.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this application, serve to provide a further understanding of the application and to enable other features, objects, and advantages of the application to be more apparent. The drawings and their description illustrate the embodiments of the invention and do not limit it. In the drawings:
FIG. 1 is a flow chart of a neural network-based binding determination method according to an embodiment of the present application;
FIG. 2 is a flow chart of another neural network-based binding determination method provided according to an embodiment of the present application;
FIG. 3 is a schematic diagram of a network structure of a convolutional neural network model provided in an embodiment of the present application;
FIG. 4 is a corresponding exemplary diagram for calculating the number of all binding points in an image of a current area according to an embodiment of the present application;
FIG. 5 is a block diagram of a neural network-based binding determination device according to an embodiment of the present application;
fig. 6 is a block diagram of another neural network-based binding determination device provided according to an embodiment of the present application.
Detailed Description
In order to make the technical solutions better understood by those skilled in the art, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only partial embodiments of the present application, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
It should be noted that the terms "first," "second," and the like in the description and claims of this application and in the drawings are used to distinguish between similar elements and not necessarily to describe a particular sequential or chronological order. It should be understood that data so used may be interchanged under appropriate circumstances, so that the embodiments of the application described herein can be practiced in orders other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict. The present application will be described in detail below with reference to the embodiments with reference to the attached drawings.
According to an embodiment of the present application, there is provided a neural network-based binding determination method; as shown in fig. 1, the method includes the following steps S101 to S104:
first, it should be noted that the application scenario of the present embodiment is to perform reinforcement binding.
S101, the binding robot acquires an image of a current area through a camera on the rack.
The current area is the rebar binding area that the binding robot can identify from its current position. Unlike the movable robot arm, the rack is fixed equipment, and the rack is farther from the rebar binding points than the arm is. The image of the current area is acquired in order to judge the binding state of the binding points in the current area and thus determine whether a binding operation needs to be performed.
S102, identifying the image based on the neural network model, determining the position of a binding point in the current region and judging whether the binding of the binding point is finished.
The input of the neural network model is an image containing binding points, and the output is the image with bound points and unbound points marked. The neural network model is trained in advance on training samples, which are a preset number of rebar binding area images marked with bound points and unbound points; the model in this embodiment can therefore recognize the image of the current area and determine whether each binding point has been bound. In addition, recognizing a binding point also yields its position, expressed as coordinates in the coordinate system of the rack camera. The positions are obtained so that the robot arm can be moved to an unbound point to verify its binding state.
In addition, the judgment basis for judging whether the binding points finish binding is whether all the binding points in the image of the current area finish binding.
S103, if the binding is not finished, controlling the mechanical arm to align to the position of the unbound point and acquiring a close-range unbound point image through a camera on the mechanical arm.
If binding of the current area is not finished, there are unbound points. However, because the above determination of the binding state is made from an image taken by the rack camera, which is far from the binding points, this embodiment verifies each point that was judged unbound in order to ensure accuracy. The verification consists of acquiring a close-range image of the unbound point again, this time through the camera at the end of the robot arm, and judging from that close-range image. The specific judgment process is described in step S104.
And S104, identifying the images of the near-distance unbound points based on the neural network model, and verifying whether the unbound points are unbound.
In this step, recognizing the close-range unbound-point image with the neural network model is implemented in the same way as recognizing the current-area image in step S102, so the details are not repeated here. This step further verifies the state of the unbound point. If the verification passes, the point is confirmed as unbound, and the binding robot controls the arm to bind at the point's position, the exact position being determined from recognition of the acquired close-range image. If the verification fails, the robot continues to recognize the remaining unbound-point images until all binding points in the current area are determined to be bound, and then moves forward.
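The judge-then-verify flow of steps S101 to S104 can be sketched as follows; the classifier stubs, function names, and data layout here are illustrative assumptions standing in for the trained neural network model and the robot's camera interfaces, not part of the patent:

```python
# Sketch of the two-stage binding check: a coarse pass on the rack-camera
# image, then a close-range verification pass using the arm camera.

def classify_points(region_image):
    """Coarse pass (S102): return {point_position: is_bound} for the area.
    Here region_image is simulated as a list of (position, is_bound) pairs."""
    return {pos: bound for pos, bound in region_image}

def verify_unbound(close_image):
    """Fine pass (S104): re-check one point from a close-range image.
    Returns True when the point is confirmed unbound."""
    return close_image["bound"] is False

def binding_pass(region_image, take_close_image):
    """Run both passes; return the positions confirmed as unbound,
    i.e. the points the robot would proceed to bind."""
    confirmed_unbound = []
    for pos, bound in classify_points(region_image).items():
        if bound:
            continue  # already bound, nothing to do
        # S103: align the arm with the point and image it at close range.
        if verify_unbound(take_close_image(pos)):
            confirmed_unbound.append(pos)
    return confirmed_unbound
```

A coarse false alarm (a point wrongly judged unbound from the rack image) is filtered out by the close-range pass, which is the accuracy benefit the embodiment claims for the double recognition.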
From the above description it can be seen that, in the neural network-based binding determination method of this embodiment, the binding robot first acquires an image of the current area through the camera on the rack, recognizes the image with the neural network model, preliminarily judges whether binding of the binding points is finished, and determines the positions of the binding points in the current area. If binding is preliminarily determined to be unfinished, the robot arm is controlled to align with the position of an unbound point, and a close-range image of the unbound point is acquired through the camera on the arm; that image is then recognized with the neural network model again to verify whether the point is unbound. The whole process involves no manual intervention, saving time and labor. In addition, because the binding state is first judged and then verified again through this double image recognition, accurate determination of the binding state of each binding point is ensured.
According to another embodiment of the present application, there is provided a neural network-based binding determination method; as shown in fig. 2, the method includes:
s201, the binding robot acquires an image of a current area through a camera on the rack.
The implementation of this step may refer to step S101 in fig. 1. It should be added that the rack camera that acquires the image of the current area is a depth camera, for example a RealSense D435, although depth cameras of other types or models may also be used. A depth camera is adopted to obtain more accurate position information for the binding points and so ensure binding accuracy.
S202, identifying the image of the current area based on the convolutional neural network model, determining the position of a binding point in the current area and judging whether the binding of the binding point is finished.
The specific principle of recognizing the image of the current area with the convolutional neural network model is as follows: segment the image into a plurality of binding point images, each corresponding to one binding point; then recognize each binding point image to determine whether that point has been bound.
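The segmentation step can be sketched as below; the fixed tile size and regular-grid layout are assumptions for illustration, since the embodiment only states that one sub-image is produced per binding point:

```python
# Sketch: split a region image (here a nested-list grayscale raster) into
# fixed-size tiles, one per binding point on a regular grid. Each tile would
# then be fed to the classifier individually.

def segment_into_tiles(image, tile_h, tile_w):
    """Return a list of tile_h x tile_w sub-images in row-major order."""
    rows, cols = len(image), len(image[0])
    tiles = []
    for top in range(0, rows - tile_h + 1, tile_h):
        for left in range(0, cols - tile_w + 1, tile_w):
            tile = [row[left:left + tile_w] for row in image[top:top + tile_h]]
            tiles.append(tile)
    return tiles
```

In practice the tile boundaries would be derived from the detected rebar spacing rather than fixed sizes, but the one-tile-per-point output is the same.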
Preferably, the convolutional neural network in the present embodiment is an AlexNet neural network model, and in practical applications, other neural network models such as ResNet or VGGNet may be used.
The convolutional neural network model in this embodiment must be trained in advance. Taking the AlexNet model as an example, training proceeds as follows. 1) Obtain training samples: acquire a preset number of images containing rebar binding points; manually mark each image, labeling bound points and unbound points and distinguishing them with different colors or shapes. 2) Train the AlexNet model on these samples. During training, a linear rectification function is used to keep training fast, and the output of the linear rectification layer undergoes local response normalization, a treatment inspired by the lateral inhibition observed in real neurons. To prevent overfitting from degrading the adaptability of binding recognition to different environments, gain processing is applied to the training samples, i.e., images are flipped, and the random-inactivation Dropout method is used during training to randomly remove some neurons. In addition, fig. 3 is a schematic diagram of the network structure of the convolutional neural network model (AlexNet) provided in this embodiment.
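The three training measures named above (flip augmentation as gain processing, random-inactivation Dropout, and local response normalization of the rectification layer's output) can each be sketched in isolation. The LRN hyperparameters below follow common AlexNet defaults and are an assumption, since the embodiment does not list them:

```python
import random

def flip_horizontal(image):
    """Gain (augmentation) step: mirror each row of a nested-list image."""
    return [row[::-1] for row in image]

def dropout(activations, p, rng):
    """Zero each activation with probability p; scale survivors by 1/(1-p)
    (inverted Dropout) so expected magnitude is unchanged."""
    return [0.0 if rng.random() < p else a / (1 - p) for a in activations]

def local_response_norm(acts, k=2.0, n=5, alpha=1e-4, beta=0.75):
    """Divide each activation by a term built from the squared activity of
    its n nearest neighbors, a form of lateral inhibition."""
    out = []
    for i, a in enumerate(acts):
        lo, hi = max(0, i - n // 2), min(len(acts), i + n // 2 + 1)
        denom = (k + alpha * sum(x * x for x in acts[lo:hi])) ** beta
        out.append(a / denom)
    return out
```

Modern frameworks provide all three directly (e.g. as image transforms, a Dropout layer, and an LRN layer), so these stand-alone versions only illustrate the mechanics.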
The input of the finally obtained AlexNet model is an image containing binding points, and the output is the image with bound points and unbound points marked. Recognizing the image with the convolutional neural network model thus determines the bound points and the unbound points.
Whether binding of the binding points in the current area is finished is then determined from the identified bound and unbound points, in one of two ways:
the first determination method:
if all the binding points are already bound points, determining that the binding points in the current area are bound; and if not all the binding points are the bound points, determining that the binding points in the current area are not bound.
The second determination method:
1) Count the number of bound points. 2) Calculate a recognition rate, the ratio of the number of bound points to the total number of binding points in the image of the current area. 3) Determine from the recognition rate whether the binding points in the current area are all bound, by comparing the rate with a preset value. The preset value is user-defined; in this example it is preferably set to a value greater than or equal to 0.75 and less than 1. If the recognition rate is smaller than the preset value, binding of the binding points in the current area is determined to be unfinished; if it is greater than or equal to the preset value, binding is determined to be finished.
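A minimal sketch of this threshold decision, using the 0.75 lower bound of the preferred range stated above as the default preset value:

```python
# Second determination method: binding of the area counts as finished when
# the ratio of recognized bound points to the expected total number of
# binding points reaches the preset threshold.

def binding_complete(num_bound, num_total, threshold=0.75):
    """Return True when the recognition rate meets the preset value."""
    recognition_rate = num_bound / num_total
    return recognition_rate >= threshold
```

Using a ratio rather than requiring every single point lets the decision tolerate a few points the classifier could not recognize in the wide rack-camera view, which the close-range pass then resolves.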
The total number of binding points in the image of the current area is calculated as follows: determine the coordinates of the boundary points in the image, then compute the number of binding points from those coordinates and the rebar spacing. As a specific example, as shown in fig. 4, let the coordinates of the boundary points in the image be (x0, y0), (x1, y1), (x2, y2), and (x3, y3). The maximum number Nmax of binding points that can be identified in the image of the current area is then Nmax = ((x1 - x0)/dx + 1) × ((y2 - y0)/dy + 1), where dx and dy are the actual rebar spacings. In this embodiment this maximum value is taken as the total number of binding points in the image of the current area.
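The count from the boundary coordinates can be sketched as follows, assuming standard grid counting of span/spacing + 1 along each axis and rounding to whole points; the rounding is an added assumption, since measured coordinates rarely divide exactly by the rebar spacing:

```python
# Sketch of the boundary-point count from Fig. 4: binding points lie on a
# regular grid bounded by (x0, y0) and (x1, y1) horizontally and by
# (y0, y2) vertically, with rebar spacings dx and dy.

def max_binding_points(x0, x1, y0, y2, dx, dy):
    """Return Nmax, the expected total number of binding points in view."""
    nx = round((x1 - x0) / dx) + 1  # columns of binding points
    ny = round((y2 - y0) / dy) + 1  # rows of binding points
    return nx * ny
```

For example, a view spanning 6 units across and 4 units down with 2-unit spacing in both directions contains a 4 × 3 grid of binding points.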
S203, if the binding is not finished, controlling the mechanical arm to align to the position of the unbound point and acquiring a close-range unbound point image through a camera on the mechanical arm.
The implementation of this step is the same as that of step S103 in fig. 1 and is not described here again. It should be added that the camera on the mechanical arm is a high-resolution camera, such as a high-resolution USB camera.
And S204, identifying the images of the non-binding points at the close distance based on the convolutional neural network model, and verifying whether the non-binding points are not bound.
The convolutional neural network model in this step is the same as the one in step S202, so its recognition determines from the close-range image whether the point is unbound. This step further verifies the state of the unbound point. If the verification passes, the point is confirmed as unbound, and the binding robot controls the arm to bind at the point's position, the exact position being determined from recognition of the acquired close-range image. If the verification fails, the robot continues to recognize the remaining unbound-point images until all binding points in the current area are determined to be bound, and then moves forward.
It should be noted that the steps illustrated in the flowcharts of the figures may be performed in a computer system such as a set of computer-executable instructions and that, although a logical order is illustrated in the flowcharts, in some cases, the steps illustrated or described may be performed in an order different than presented herein.
According to an embodiment of the present application, there is also provided a neural network-based binding determination apparatus for implementing the method described in fig. 1 or fig. 2; as shown in fig. 5, the apparatus includes:
the first obtaining unit 31 is used for obtaining an image of a current area by the binding robot through a camera on the rack, wherein the current area is a reinforcement binding area which can be identified by the binding robot in the current position;
the first identification unit 32 is configured to identify an image of a current region based on a neural network model, determine a position of a binding point in the current region, and determine whether the binding of the binding point is completed;
the second obtaining unit 33 is configured to, if binding is not finished, control the robot arm to align with the position of an unbound point and obtain a close-range image of the unbound point through a camera on the arm;
and the second identification unit 34 is configured to recognize the close-range images of the unbound points based on the neural network model and verify whether those points are unbound.
Specifically, the specific process of implementing the functions of each module in the apparatus in the embodiment of the present application may refer to the related description in the method embodiment, and is not described herein again.
From the above description it can be seen that, in the neural network-based binding determination device of this embodiment, the binding robot first acquires an image of the current area through the camera on the rack, recognizes the image with the neural network model, preliminarily judges whether binding of the binding points is finished, and determines the positions of the binding points in the current area. If binding is preliminarily determined to be unfinished, the robot arm is controlled to align with the position of an unbound point, and a close-range image of the unbound point is acquired through the camera on the arm; that image is then recognized with the neural network model again to verify whether the point is unbound. The whole process involves no manual intervention, saving time and labor. In addition, because the binding state is first judged and then verified again through this double image recognition, accurate determination of the binding state of each binding point is ensured.
Further, as shown in fig. 6, the first identifying unit 32 includes:
the segmentation module 321 is configured to segment the image of the current region into a plurality of binding-point images, where each binding-point image corresponds to one binding point;
the identification module 322 is configured to recognize each binding-point image and determine whether the corresponding point is bound;
the determining module 323 is configured to determine that binding in the current region is completed if all the binding points are bound.
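As a rough illustration of the segmentation step, once binding points have been located, each point can be cut out as a fixed-size patch for per-point classification. The function below is a hypothetical sketch operating on a plain 2-D list of pixels (not the patent's code); patches are clipped at the image border.

```python
def crop_binding_points(image, centers, size=64):
    """Cut one patch per binding point from a 2-D pixel grid.

    image   -- list of pixel rows (any pixel type)
    centers -- (col, row) coordinates of detected binding points
    size    -- nominal patch side length; patches shrink at the image border
    """
    half = size // 2
    h, w = len(image), len(image[0])
    patches = []
    for cx, cy in centers:
        # Clamp the patch window to the image bounds.
        y0, y1 = max(cy - half, 0), min(cy + half, h)
        x0, x1 = max(cx - half, 0), min(cx + half, w)
        patches.append([row[x0:x1] for row in image[y0:y1]])
    return patches
```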
Further, as shown in fig. 6, the determining module 323 is configured to:
counting the number of bound points;
calculating the recognition rate, i.e., the ratio of the number of bound points to the total number of binding points in the image of the current region;
and determining, according to the recognition rate, whether the binding points in the current region have all been bound.
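The recognition-rate decision reduces to a ratio and a threshold. A minimal sketch follows; the `threshold` parameter is an assumption, since the text only states that completion is determined "according to the recognition rate".

```python
def binding_complete(num_bound, num_total, threshold=1.0):
    """Decide region completion from the recognition rate.

    Recognition rate = bound points / all binding points in the region image.
    """
    if num_total == 0:
        return True  # no binding points detected, nothing left to tie
    rate = num_bound / num_total
    return rate >= threshold
```

With `threshold=1.0` the region counts as complete only when every detected point is bound; a slightly lower threshold would tolerate occasional recognition errors at the cost of possibly skipping a missed tie.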
Further, as shown in fig. 6, the apparatus further includes:
a third obtaining unit 35, configured to obtain training samples, where the training samples are a preset number of steel bar binding area images annotated with bound points and unbound points;
and the training unit 36 is configured to perform model training according to the training samples to obtain a neural network model.
Further, as shown in fig. 6, the neural network model is a convolutional neural network model, and the training unit 36 further includes:
a gain module 361, configured to perform gain (augmentation) processing on the training samples;
a removing module 362, configured to randomly remove neurons during training using the random-inactivation Dropout method;
and a normalization processing module 363, configured to apply local response normalization to the output of the linear rectification (ReLU) layer.
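The three training measures named above (flip-based gain, Dropout, local response normalization of ReLU outputs) can each be sketched in a few lines. These are illustrative re-implementations of the standard techniques, not the patent's training code; the LRN constants (`k`, `alpha`, `beta`, `n`) are the commonly used AlexNet defaults, assumed here.

```python
import random

def flip_horizontal(image):
    """Gain step: mirror each pixel row, doubling the training data."""
    return [row[::-1] for row in image]

def dropout(activations, p=0.5, training=True, rng=random):
    """Inverted dropout: zero each activation with probability p, rescale the rest."""
    if not training or p == 0.0:
        return list(activations)
    scale = 1.0 / (1.0 - p)
    return [0.0 if rng.random() < p else a * scale for a in activations]

def local_response_norm(relu_out, k=2.0, alpha=1e-4, beta=0.75, n=5):
    """AlexNet-style LRN across channels: each ReLU output is damped by the
    squared activity of its n neighbouring channels, forming lateral inhibition."""
    out = []
    for i, a in enumerate(relu_out):
        lo, hi = max(0, i - n // 2), min(len(relu_out), i + n // 2 + 1)
        denom = (k + alpha * sum(c * c for c in relu_out[lo:hi])) ** beta
        out.append(a / denom)
    return out
```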
Furthermore, the camera on the rack is a depth camera, and the camera on the mechanical arm is a high-resolution camera.
The specific process by which each module in the apparatus of this embodiment implements its functions may refer to the related description in the method embodiment and is not repeated here.
According to an embodiment of the present application, there is also provided an electronic device, including:
at least one processor;
and at least one memory and a bus connected with the processor; wherein
the processor and the memory communicate with each other through the bus;
the processor is configured to call program instructions in the memory to perform the neural-network-based binding determination method described above with reference to fig. 1 or fig. 2.
There is also provided, in accordance with an embodiment of the present application, a non-transitory computer-readable storage medium storing computer instructions that cause a computer to perform the neural-network-based binding determination method of fig. 1 or fig. 2.
It will be apparent to those skilled in the art that the modules or steps of the present application described above may be implemented by a general-purpose computing device. They may be centralized on a single computing device or distributed across a network of computing devices, and they may be implemented by program code executable by a computing device, so that they can be stored in a storage device and executed by a computing device, fabricated separately as individual integrated circuit modules, or fabricated as a single integrated circuit module combining multiple modules or steps. Thus, the present application is not limited to any specific combination of hardware and software.
The above description covers only preferred embodiments of the present application and is not intended to limit it; various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, improvement, and the like made within the spirit and principle of the present application shall fall within its protection scope.

Claims (10)

1. A binding determination method based on a neural network is characterized by comprising the following steps:
a binding robot acquires an image of a current region through a camera on a rack, wherein the current region is the steel bar binding area that the binding robot can recognize at its current position;
recognizing the image of the current region based on a neural network model, determining the positions of binding points in the current region, and judging whether the binding points have been bound;
if the binding is not completed, controlling the mechanical arm to align with the position of an unbound point and acquiring a close-range image of the unbound point through a camera on the mechanical arm;
recognizing the close-range images of the unbound points based on the neural network model, and verifying whether those points are indeed unbound;
wherein the neural network model is an AlexNet neural network model obtained by training on images containing steel bar binding points, in which bound points and unbound points are marked and distinguished by different colors or shapes; during training, gain (augmentation) processing is first performed on the training samples, namely flipping the images, and the random-inactivation Dropout method is adopted to randomly remove some neurons; in addition, a linear rectification (ReLU) function is used, and local response normalization is applied to the output of the linear rectification layer to form lateral inhibition.
2. The neural-network-based binding determination method according to claim 1, wherein recognizing the image of the current region based on the neural network model, determining the positions of the binding points in the current region, and judging whether the binding points have been bound comprises:
segmenting the image of the current region into a plurality of binding-point images, wherein each binding-point image corresponds to one binding point;
recognizing each binding-point image to determine whether the corresponding point is bound;
and if all the binding points are bound, determining that binding in the current region is completed.
3. The neural-network-based binding determination method according to claim 2, wherein determining that binding in the current region is completed if all the binding points are bound comprises:
counting the number of bound points;
calculating the recognition rate, i.e., the ratio of the number of bound points to the total number of binding points in the image of the current region;
and determining, according to the recognition rate, whether the binding points in the current region have all been bound.
4. The neural-network-based binding determination method according to claim 1, further comprising:
acquiring training samples, wherein the training samples are a preset number of steel bar binding area images annotated with bound points and unbound points;
and carrying out model training according to the training samples to obtain a neural network model.
5. The neural-network-based binding determination method according to claim 4, wherein the neural network model is a convolutional neural network model, and performing model training according to the training samples to obtain the neural network model further comprises:
performing gain (augmentation) processing on the training samples;
randomly removing neurons during training using the random-inactivation Dropout method;
and applying local response normalization to the output of the linear rectification (ReLU) layer.
6. The neural-network-based binding determination method according to claim 1, wherein the camera on the rack is a depth camera and the camera on the mechanical arm is a high-resolution camera.
7. A neural-network-based binding determination apparatus, comprising:
the first acquisition unit, configured to acquire, for the binding robot, an image of a current region through a camera on the rack, wherein the current region is the steel bar binding area that the binding robot can recognize at its current position;
the first identification unit, configured to recognize the image of the current region based on the neural network model, determine the positions of the binding points in the current region, and judge whether the binding points have been bound;
the second acquisition unit, configured to, if the binding is not completed, control the mechanical arm to align with the position of an unbound point and acquire a close-range image of the unbound point through a camera on the mechanical arm;
the second identification unit, configured to recognize the close-range images of the unbound points based on the neural network model and verify whether those points are indeed unbound;
wherein the neural network model is an AlexNet neural network model obtained by training on images containing steel bar binding points, in which bound points and unbound points are marked and distinguished by different colors or shapes; during training, gain (augmentation) processing is first performed on the training samples, namely flipping the images, and the random-inactivation Dropout method is adopted to randomly remove some neurons; in addition, a linear rectification (ReLU) function is used, and local response normalization is applied to the output of the linear rectification layer to form lateral inhibition.
8. The neural-network-based binding determination apparatus according to claim 7, wherein the first identification unit comprises:
the segmentation module, configured to segment the image of the current region into a plurality of binding-point images, each binding-point image corresponding to one binding point;
the identification module, configured to recognize each binding-point image and determine whether the corresponding point is bound;
and the determining module, configured to determine that binding in the current region is completed if all the binding points are bound.
9. An electronic device, comprising:
at least one processor;
and at least one memory and a bus connected with the processor; wherein
the processor and the memory communicate with each other through the bus;
the processor is configured to invoke program instructions in the memory to perform the neural-network-based binding determination method of any one of claims 1 to 6.
10. A non-transitory computer-readable storage medium storing computer instructions that cause a computer to perform the neural-network-based binding determination method of any one of claims 1 to 6.
CN201910553334.4A 2019-06-24 2019-06-24 Binding determination method and device based on neural network Active CN110321824B (en)

Publications (2)

Publication Number Publication Date
CN110321824A CN110321824A (en) 2019-10-11
CN110321824B true CN110321824B (en) 2021-10-19


Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111985338A (en) * 2020-07-22 2020-11-24 中建科技集团有限公司深圳分公司 Binding point identification method, device, terminal and medium
CN112627538B (en) * 2020-11-24 2021-10-29 武汉大学 Intelligent acceptance method for binding quality of steel mesh binding wires based on computer vision

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105064690A (en) * 2015-06-11 2015-11-18 淮南智辉装饰工程有限公司 Automatic rebar tying machine
JP2016130429A (en) * 2015-01-14 2016-07-21 株式会社大林組 Reinforcement binding method, preassembled reinforcement and reinforcement binding device
CN106096558A (en) * 2016-06-16 2016-11-09 长安大学 A kind of binding reinforcing bars method based on neutral net
CN106826815A (en) * 2016-12-21 2017-06-13 江苏物联网研究发展中心 Target object method of the identification with positioning based on coloured image and depth image
CN108536933A (en) * 2018-03-26 2018-09-14 何登富 A kind of system and method for automatic arrangement reinforcing bar
CN109094845A (en) * 2018-09-05 2018-12-28 四川知创知识产权运营有限公司 A kind of reinforced-bar binding alignment means
CN109129488A (en) * 2018-09-27 2019-01-04 广东电网有限责任公司 A kind of high-altitude maintenance robot localization method and device based on near-earth overall Vision
CN109250186A (en) * 2018-09-05 2019-01-22 四川知创知识产权运营有限公司 A kind of reinforcing bar intelligent identifying system
CN109383865A (en) * 2018-11-08 2019-02-26 中民筑友科技投资有限公司 A kind of binding mechanism and reinforced mesh binding device
JP2019039174A (en) * 2017-08-23 2019-03-14 学校法人千葉工業大学 Self-traveling rebar operating robot and self-traveling rebar binding robot
CN109815950A (en) * 2018-12-28 2019-05-28 汕头大学 A kind of reinforcing bar end face recognition methods based on depth convolutional neural networks

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102017212068B4 (en) * 2017-07-14 2021-03-18 Airbus Defence and Space GmbH Fiber composite laying device and fiber composite laying method for the production of a fiber composite fabric for the formation of a fiber composite component
US10831519B2 (en) * 2017-11-22 2020-11-10 Amazon Technologies, Inc. Packaging and deploying algorithms for flexible machine learning




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant