CN114140607A - Machine vision positioning method and system for upper arm prosthesis control

Info

Publication number: CN114140607A
Application number: CN202111462738.6A
Authority: CN (China)
Prior art keywords: image, target, gradient, upper arm, machine vision
Legal status: Pending (the status listed is an assumption and is not a legal conclusion)
Other languages: Chinese (zh)
Inventors: 李智军, 孟庆昇, 李国欣
Current assignee: University of Science and Technology of China (USTC)
Original assignee: University of Science and Technology of China (USTC)
Application filed 2021-12-02 by University of Science and Technology of China (USTC)
Publication date: 2022-03-04

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/285: Selection of pattern recognition techniques, e.g. of classifiers in a multi-classifier system
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00: Image enhancement or restoration
    • G06T 5/20: Image enhancement or restoration using local operators
    • G06T 5/70: Denoising; Smoothing
    • G06T 7/00: Image analysis
    • G06T 7/10: Segmentation; Edge detection
    • G06T 7/11: Region-based segmentation
    • G06T 7/13: Edge detection
    • G06T 7/136: Segmentation; Edge detection involving thresholding

Abstract

The invention provides a machine vision positioning method and system for upper arm prosthesis control, comprising the following steps. Step 1: a head-mounted camera recognizes the target object and completes target positioning with a vision algorithm that combines Adaboost cascade classification with depth images. Step 2: a camera on the prosthesis reduces illumination interference through a CNN-trained color name conversion matrix, obtains the planar two-dimensional pose of the target with the Canny edge detection algorithm, and controls the prosthesis to grasp the target. Starting from the basic theory of muscle synergy, the invention builds a synergy activation model and a multi-joint synchronous proportional myoelectric control system for the upper limb, achieving synchronous continuous motion control of a multi-degree-of-freedom prosthesis and making prosthesis motion flexible, natural, convenient, and efficient to use.

Description

Machine vision positioning method and system for upper arm prosthesis control
Technical Field
The invention relates to the technical field of vision positioning, and in particular to a machine vision positioning method and system for upper arm prosthesis control.
Background
A robot vision system, also called a robot eye system, integrates visual sensor information into the robot control system. Depending on where the camera is mounted, hand-eye configurations fall into two types: Eye-in-Hand, where the camera is mounted at the end of the manipulator and is not fixed relative to the environment; and Eye-to-Hand, where the camera is mounted in a fixed position, usually above the manipulator.
The Eye-in-Hand scheme places low demands on monocular camera calibration accuracy but cannot guarantee that the target stays in the field of view, so an additional camera is needed to obtain complete environmental information. The camera in the Eye-to-Hand scheme can observe the entire environment, but because the moving manipulator tends to occlude the target area, extra requirements on target placement or motion planning may arise.
Many object recognition methods exist. Deep learning, popular in recent years, not only requires a large number of samples but also takes a long time to train.
Patent document CN103271784B (application number CN201310223530.8) discloses a binocular-vision-based human-computer interactive manipulator control system and method composed of four parts: a real-time image acquisition device, a laser guide device, a programmable controller, and a driving device. The programmable controller consists of a binocular stereo vision module, a three-dimensional coordinate transformation module, an inverse-kinematics manipulator joint angle module, and a control module. Color features in the binocular images of the real-time image acquisition device are extracted as the signal source for controlling the manipulator, and the three-dimensional position of a red laser feature point in the real-time field-of-view images is obtained through the binocular stereo vision system and a three-dimensional coordinate transformation, so that the manipulator is controlled to perform human-computer interactive target tracking.
Disclosure of Invention
In view of the deficiencies in the prior art, it is an object of the present invention to provide a machine vision positioning method and system for upper arm prosthesis control.
The machine vision positioning method for upper arm prosthesis control provided by the invention comprises the following steps:
Step 1: recognize the target object and complete target positioning with a head camera, using a vision algorithm that combines Adaboost cascade classification with depth images;
Step 2: with a camera on the prosthesis, reduce illumination interference through a CNN-trained color name conversion matrix, obtain the planar two-dimensional pose of the target with the Canny edge detection algorithm, and then control the prosthesis to grasp the target.
Preferably, step 1 comprises:
Step 1.1: train classifiers with the Adaboost cascade classification method based on Haar, LBP, and HOG features;
Step 1.2: test the classifiers in experiments under both single and complex backgrounds, and through comparative analysis select the classifier that meets the preset condition for detecting the feature type of the target sample (see the sketch below).
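By way of illustration, a minimal detection-time sketch using OpenCV's CascadeClassifier follows; the cascade file name "target_cascade.xml" and the camera index are assumptions, and the offline cascade training with Haar, LBP, or HOG features (done with OpenCV's cascade training tools) is not shown.

```python
import cv2

# Load a cascade assumed to have been trained offline and saved as
# "target_cascade.xml" (file name is an assumption, not from the patent).
cascade = cv2.CascadeClassifier("target_cascade.xml")

cap = cv2.VideoCapture(0)  # head camera; device index is an assumption
ret, frame = cap.read()
if ret:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Multi-scale sliding-window detection with the Adaboost cascade.
    boxes = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in boxes:  # each box is a candidate target region
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
cap.release()
```

The detected box can then be combined with the depth image to complete target positioning, as described in step 1.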
Preferably, step 2 comprises:
Step 2.1: recognize the target with the trained classifier and take the recognized target area as the region of interest;
Step 2.2: after the region of interest is obtained, preprocess the image information, applying the trained color name conversion matrix as the first processing step to achieve illumination robustness;
Step 2.3: aggregate region information with graph-based image segmentation, keeping the target object's information complete while separating it from other objects;
Step 2.4: obtain the maximum-connected-component threshold from the labeled pixels, perform binarization segmentation with that threshold, and finally apply morphological filtering to obtain a region containing only the target object (see the sketch below).
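A minimal sketch of steps 2.2 to 2.4 follows, in Python with OpenCV and scikit-image. The shape and quantization of the color name conversion matrix are assumptions, and Felzenszwalb's graph-based algorithm stands in for the segmentation method, which the patent does not name.

```python
import cv2
import numpy as np
from skimage.segmentation import felzenszwalb

def locate_target(roi_bgr, color_name_matrix):
    """Sketch of steps 2.2-2.4 on a detected region of interest.
    color_name_matrix stands in for the CNN-trained color name conversion
    matrix; here it is assumed to have shape (32*32*32, 11), mapping
    quantized RGB bins to scores over 11 color names."""
    rgb = cv2.cvtColor(roi_bgr, cv2.COLOR_BGR2RGB)

    # Step 2.2: map pixels into color-name space for illumination robustness.
    bins = (rgb // 8).astype(np.int64)              # 32 levels per channel
    idx = bins[..., 0] * 1024 + bins[..., 1] * 32 + bins[..., 2]
    color_img = color_name_matrix[idx].argmax(axis=-1).astype(np.float64)

    # Step 2.3: graph-based segmentation to aggregate region information.
    segments = felzenszwalb(color_img, scale=100, sigma=0.5, min_size=50)

    # Step 2.4: label pixels, keep the largest connected region (the target
    # is assumed to occupy the largest area in the ROI), binarize, and
    # clean up with morphological filtering.
    labels, counts = np.unique(segments, return_counts=True)
    target_label = labels[np.argmax(counts)]
    mask = np.where(segments == target_label, 255, 0).astype(np.uint8)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    return mask
```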
Preferably, target boundary information is obtained with the Canny edge detection method, and the 2D pose is then computed.
Gaussian smoothing filtering reduces the influence of noise on the edge detection result; each element of the Gaussian filter template is generated by:
Hij = (1 / (2πσ²)) · exp(-((i - k - 1)² + (j - k - 1)²) / (2σ²)),  1 ≤ i, j ≤ 2k + 1
where Hij is the element in row i, column j; σ is the Gaussian standard deviation, whose value determines the spread of the Gaussian function and corresponds to the filter weights; and 2k + 1 is the size of the window template.
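A minimal NumPy sketch generating this template directly from the equation (the final normalization so that the weights sum to 1 is a common addition, not part of the stated formula):

```python
import numpy as np

def gaussian_kernel(k, sigma):
    """Generate the (2k+1) x (2k+1) Gaussian filter template H element-wise."""
    size = 2 * k + 1
    H = np.zeros((size, size))
    for i in range(1, size + 1):      # 1-based indices, as in the formula
        for j in range(1, size + 1):
            H[i - 1, j - 1] = (1.0 / (2.0 * np.pi * sigma ** 2)) * np.exp(
                -((i - k - 1) ** 2 + (j - k - 1) ** 2) / (2.0 * sigma ** 2))
    return H / H.sum()                # normalize so the weights sum to 1

# e.g. a 5 x 5 template (k = 2) with sigma = 1.4; values chosen for illustration
print(gaussian_kernel(2, 1.4))
```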
The intensity gradient of the image is then computed: the Sobel operator returns the first derivatives in the horizontal (Gx) and vertical (Gy) directions, from which the gradient magnitude G and direction θ of each pixel are determined:
G = √(Gx² + Gy²)
θ = arctan(Gy / Gx)
Sx = [-1 0 1; -2 0 2; -1 0 1],  Sy = [1 2 1; 0 0 0; -1 -2 -1]
where Sx is the Sobel operator in the x direction, used to detect edges in the y direction, and Sy is the Sobel operator in the y direction, used to detect edges in the x direction; the edge direction is perpendicular to the gradient direction.
If A = [a b c; d e f; g h i] is a 3 × 3 pixel window in the image and the gradient of its center pixel e is to be computed, the gradient values of e in the x and y directions, Gx and Gy, are obtained by convolving the window with the Sobel operators:
Gx = Sx * A = (c + 2f + i) - (a + 2d + g)
Gy = Sy * A = (a + 2b + c) - (g + 2h + i)
where a through i denote the element values of window A.
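A minimal NumPy sketch of this windowed gradient computation; the example window values are illustrative only:

```python
import numpy as np

# Sobel operators in the x and y directions, as written above.
Sx = np.array([[-1, 0, 1],
               [-2, 0, 2],
               [-1, 0, 1]])
Sy = np.array([[ 1,  2,  1],
               [ 0,  0,  0],
               [-1, -2, -1]])

def pixel_gradient(A):
    """A: 3x3 window around pixel e. The element-wise multiply-and-sum
    below matches the convolution expressions given above."""
    Gx = float(np.sum(Sx * A))
    Gy = float(np.sum(Sy * A))
    G = np.hypot(Gx, Gy)          # sqrt(Gx^2 + Gy^2)
    theta = np.arctan2(Gy, Gx)    # arctan(Gy / Gx), quadrant-aware
    return G, theta

# e.g. a vertical step edge: the gradient points along x (theta = 0)
A = np.array([[10, 10, 50],
              [10, 10, 50],
              [10, 10, 50]])
print(pixel_gradient(A))          # (160.0, 0.0)
```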
Preferably, non-maximum suppression is applied to eliminate spurious responses from edge detection.
After non-maximum suppression, the remaining pixels represent candidate edges in the image. Two hysteresis thresholds, Tmin and Tmax, are set: if the gray-level gradient of a pixel is above Tmax, it is judged a true boundary; if below Tmin, it is discarded; if between Tmin and Tmax, the pixel is kept as a boundary only if it is connected to an already-confirmed boundary point, and discarded otherwise.
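In practice the four stages (Gaussian smoothing, Sobel gradients, non-maximum suppression, and hysteresis thresholding) are bundled in OpenCV's Canny function. A minimal sketch follows, in which the image path and the threshold values Tmin = 50 and Tmax = 150 are assumptions:

```python
import cv2

img = cv2.imread("roi.png", cv2.IMREAD_GRAYSCALE)  # path is an assumption
blurred = cv2.GaussianBlur(img, (5, 5), 1.4)       # Gaussian smoothing stage
# cv2.Canny computes Sobel gradients, applies non-maximum suppression,
# and performs hysteresis thresholding with the two thresholds.
edges = cv2.Canny(blurred, threshold1=50, threshold2=150)  # Tmin, Tmax
```

Raising Tmax keeps only the strong contour edges, while lowering Tmin admits more of the weak edges that carry illumination information.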
According to the present invention, there is also provided a machine vision positioning system for upper arm prosthesis control, comprising:
Module M1: recognizes the target object and completes target positioning with a head camera, using a vision algorithm that combines Adaboost cascade classification with depth images;
Module M2: with a camera on the prosthesis, reduces illumination interference through a CNN-trained color name conversion matrix, obtains the planar two-dimensional pose of the target with the Canny edge detection algorithm, and then controls the prosthesis to grasp the target.
Preferably, module M1 comprises:
Module M1.1: trains classifiers with the Adaboost cascade classification method based on Haar, LBP, and HOG features;
Module M1.2: tests the classifiers in experiments under both single and complex backgrounds, and through comparative analysis selects the classifier that meets the preset condition for detecting the feature type of the target sample.
Preferably, module M2 comprises:
Module M2.1: recognizes the target with the trained classifier and takes the recognized target area as the region of interest;
Module M2.2: after the region of interest is obtained, preprocesses the image information, applying the trained color name conversion matrix as the first processing step to achieve illumination robustness;
Module M2.3: aggregates region information with graph-based image segmentation, keeping the target object's information complete while separating it from other objects;
Module M2.4: obtains the maximum-connected-component threshold from the labeled pixels, performs binarization segmentation with that threshold, and finally applies morphological filtering to obtain a region containing only the target object.
Preferably, target boundary information is obtained with the Canny edge detection method, and the 2D pose is then computed.
Gaussian smoothing filtering reduces the influence of noise on the edge detection result; each element of the Gaussian filter template is generated by:
Hij = (1 / (2πσ²)) · exp(-((i - k - 1)² + (j - k - 1)²) / (2σ²)),  1 ≤ i, j ≤ 2k + 1
where Hij is the element in row i, column j; σ is the Gaussian standard deviation, whose value determines the spread of the Gaussian function and corresponds to the filter weights; and 2k + 1 is the size of the window template.
The intensity gradient of the image is then computed: the Sobel operator returns the first derivatives in the horizontal (Gx) and vertical (Gy) directions, from which the gradient magnitude G and direction θ of each pixel are determined:
G = √(Gx² + Gy²)
θ = arctan(Gy / Gx)
Sx = [-1 0 1; -2 0 2; -1 0 1],  Sy = [1 2 1; 0 0 0; -1 -2 -1]
where Sx is the Sobel operator in the x direction, used to detect edges in the y direction, and Sy is the Sobel operator in the y direction, used to detect edges in the x direction; the edge direction is perpendicular to the gradient direction.
If A = [a b c; d e f; g h i] is a 3 × 3 pixel window in the image and the gradient of its center pixel e is to be computed, the gradient values of e in the x and y directions, Gx and Gy, are obtained by convolving the window with the Sobel operators:
Gx = Sx * A = (c + 2f + i) - (a + 2d + g)
Gy = Sy * A = (a + 2b + c) - (g + 2h + i)
where a through i denote the element values of window A.
Preferably, non-maximum suppression is applied to eliminate spurious responses from edge detection.
After non-maximum suppression, the remaining pixels represent candidate edges in the image. Two hysteresis thresholds, Tmin and Tmax, are set: if the gray-level gradient of a pixel is above Tmax, it is judged a true boundary; if below Tmin, it is discarded; if between Tmin and Tmax, the pixel is kept as a boundary only if it is connected to an already-confirmed boundary point, and discarded otherwise.
Compared with the prior art, the invention has the following beneficial effects:
The invention combines the Eye-in-Hand and Eye-to-Hand schemes and selects an Adaboost cascade classifier to detect the target sample. Starting from the basic theory of muscle synergy, it builds a synergy activation model and a multi-joint synchronous proportional myoelectric control system for the upper limb, realizing synchronous continuous motion control of a multi-degree-of-freedom prosthesis, while also easing human-computer interaction between the amputee and the prosthesis so that the user becomes familiar with its functions and action modes as soon as possible.
Drawings
Other features, objects, and advantages of the invention will become more apparent upon reading the detailed description of non-limiting embodiments with reference to the following drawings:
FIG. 1 is a flowchart of the operation of the invention;
FIG. 2 is an external schematic view of the invention, in which 1 is the Eye-to-Hand machine vision positioning camera and 2 is the Eye-in-Hand machine vision positioning camera on the upper arm prosthesis.
Detailed Description
The present invention will be described in detail below with reference to specific embodiments. The following embodiments will assist those skilled in the art in further understanding the invention, but do not limit it in any way. It should be noted that those skilled in the art can make various changes and modifications without departing from the spirit of the invention, all of which fall within the scope of the present invention.
Embodiment:
referring to fig. 1 and fig. 2, a machine vision positioning system firstly adopts Adaboost cascade classification through an Eye-to-Hand camera of a head, and is mainly characterized by the following steps: firstly, carrying out sample processing on different types of samples, then training a classifier by adopting an Adaboost cascade classification method based on 3 different feature types of Haar features, LBP features and HOG features, and testing the classifier by experiments under a single background and a complex background; and finally, selecting the classifier which is most suitable for detecting the feature type of the target sample through comparative analysis. In the process, a user can control the artificial limb in a large range through the man-machine interaction system.
Next, the planar two-dimensional pose of the target is acquired through the Eye-in-Hand camera on the prosthesis. The main steps are as follows: the target is recognized with the trained classifier, and the recognized target area is taken as the region of interest. Once the region of interest is obtained, the image information is preprocessed. First, the trained color name conversion matrix is applied as the first processing step to achieve illumination robustness. Second, region information is aggregated with graph-based image segmentation, which keeps the target object's information complete while separating it cleanly from other objects. Finally, exploiting the fact that the target object occupies the largest area in the region of interest, pixels are labeled to obtain the maximum-connected-component threshold, binarization segmentation is performed with that threshold, and morphological filtering yields a region containing only the target object.
Target boundary information is then acquired with the Canny edge detection method, after which the 2D pose is computed. In the first step, Gaussian smoothing filtering reduces the influence of noise on the edge detection result as much as possible, smoothing the image and suppressing obvious noise. The following equation generates each element of the Gaussian filter template:
Hij = (1 / (2πσ²)) · exp(-((i - k - 1)² + (j - k - 1)²) / (2σ²)),  1 ≤ i, j ≤ 2k + 1
where σ is the Gaussian standard deviation, whose value determines the spread of the Gaussian function and corresponds to the filter weights, and 2k + 1 is the window template size. In the second step, the intensity gradient of the image is computed: the Sobel operator returns the first derivatives in the horizontal (Gx) and vertical (Gy) directions, from which the gradient G and direction θ of each pixel are determined.
G = √(Gx² + Gy²)
θ = arctan(Gy / Gx)
The Sobel operators in the x and y directions are given below; Sx detects edges in the y direction and Sy detects edges in the x direction (the edge direction is perpendicular to the gradient direction):
Sx = [-1 0 1; -2 0 2; -1 0 1],  Sy = [1 2 1; 0 0 0; -1 -2 -1]
If A = [a b c; d e f; g h i] is a 3 × 3 pixel window in the image and the gradient of its center pixel e is to be computed, the gradient values of e in the x and y directions, Gx and Gy, are obtained by convolving the window with the Sobel operators:
Gx = Sx * A = (c + 2f + i) - (a + 2d + g)
Gy = Sy * A = (a + 2b + c) - (g + 2h + i)
In the third step, non-maximum suppression is applied to eliminate spurious responses from edge detection. The fourth step is hysteresis thresholding. After non-maximum suppression, the remaining pixels represent candidate edges in the image, but some spurious edge pixels still exist, so two thresholds Tmin and Tmax are set. If the gray-level gradient of a pixel is above Tmax, it is regarded as a true boundary; if below Tmin, it is discarded; if it lies between the two, the pixel is kept as a boundary only if it is connected to an already-confirmed boundary point, and discarded otherwise. Adjusting the high and low hysteresis thresholds yields edge point sets of different densities: with suitable thresholds the edge information contains the strong edges that describe the object's contour plus a modest number of weak edges that describe illumination, so that the point distribution reflects the object's pose and the prosthesis can be controlled to grasp the target object.
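As one plausible way to turn the resulting edge point set into a planar pose, the sketch below applies PCA to the edge coordinates; this estimator is an assumption, since the patent states only that the distribution of the points reflects the object's pose.

```python
import numpy as np

def planar_pose_from_edges(edges):
    """Estimate a planar pose (centroid and in-plane angle) from a
    Canny edge map by PCA over the edge point coordinates."""
    ys, xs = np.nonzero(edges)                       # edge pixel coordinates
    pts = np.column_stack([xs, ys]).astype(np.float64)
    centroid = pts.mean(axis=0)
    # The principal axis of the point distribution gives the in-plane angle.
    cov = np.cov((pts - centroid).T)
    eigvals, eigvecs = np.linalg.eigh(cov)
    major = eigvecs[:, np.argmax(eigvals)]
    return centroid, float(np.arctan2(major[1], major[0]))
```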
According to the present invention, the machine vision positioning system for upper arm prosthesis control comprises:
Module M1: recognizes the target object and completes target positioning with a head camera, using a vision algorithm that combines Adaboost cascade classification with depth images;
Module M2: with a camera on the prosthesis, reduces illumination interference through a CNN-trained color name conversion matrix, obtains the planar two-dimensional pose of the target with the Canny edge detection algorithm, and then controls the prosthesis to grasp the target.
Preferably, module M1 comprises:
Module M1.1: trains classifiers with the Adaboost cascade classification method based on Haar, LBP, and HOG features;
Module M1.2: tests the classifiers in experiments under both single and complex backgrounds, and through comparative analysis selects the classifier that meets the preset condition for detecting the feature type of the target sample.
Preferably, module M2 comprises:
Module M2.1: recognizes the target with the trained classifier and takes the recognized target area as the region of interest;
Module M2.2: after the region of interest is obtained, preprocesses the image information, applying the trained color name conversion matrix as the first processing step to achieve illumination robustness;
Module M2.3: aggregates region information with graph-based image segmentation, keeping the target object's information complete while separating it from other objects;
Module M2.4: obtains the maximum-connected-component threshold from the labeled pixels, performs binarization segmentation with that threshold, and finally applies morphological filtering to obtain a region containing only the target object.
Preferably, target boundary information is obtained with the Canny edge detection method, and the 2D pose is then computed.
Gaussian smoothing filtering reduces the influence of noise on the edge detection result; each element of the Gaussian filter template is generated by:
Hij = (1 / (2πσ²)) · exp(-((i - k - 1)² + (j - k - 1)²) / (2σ²)),  1 ≤ i, j ≤ 2k + 1
where Hij is the element in row i, column j; σ is the Gaussian standard deviation, whose value determines the spread of the Gaussian function and corresponds to the filter weights; and 2k + 1 is the size of the window template.
The intensity gradient of the image is then computed: the Sobel operator returns the first derivatives in the horizontal (Gx) and vertical (Gy) directions, from which the gradient magnitude G and direction θ of each pixel are determined:
G = √(Gx² + Gy²)
θ = arctan(Gy / Gx)
Sx = [-1 0 1; -2 0 2; -1 0 1],  Sy = [1 2 1; 0 0 0; -1 -2 -1]
where Sx is the Sobel operator in the x direction, used to detect edges in the y direction, and Sy is the Sobel operator in the y direction, used to detect edges in the x direction; the edge direction is perpendicular to the gradient direction.
If A = [a b c; d e f; g h i] is a 3 × 3 pixel window in the image and the gradient of its center pixel e is to be computed, the gradient values of e in the x and y directions, Gx and Gy, are obtained by convolving the window with the Sobel operators:
Gx = Sx * A = (c + 2f + i) - (a + 2d + g)
Gy = Sy * A = (a + 2b + c) - (g + 2h + i)
where a through i denote the element values of window A.
Preferably, non-maximum suppression is applied to eliminate spurious responses from edge detection.
After non-maximum suppression, the remaining pixels represent candidate edges in the image. Two hysteresis thresholds, Tmin and Tmax, are set: if the gray-level gradient of a pixel is above Tmax, it is judged a true boundary; if below Tmin, it is discarded; if between Tmin and Tmax, the pixel is kept as a boundary only if it is connected to an already-confirmed boundary point, and discarded otherwise.
In the description of the present application, it should be understood that terms such as "upper", "lower", "front", "rear", "left", "right", "vertical", "horizontal", "top", "bottom", "inner", and "outer" indicate orientations or positional relationships based on those shown in the drawings. They are used only for convenience and simplicity of description and do not indicate or imply that the referenced device or element must have a specific orientation or be constructed and operated in a specific orientation; they should therefore not be construed as limiting the present application.
Those skilled in the art will appreciate that, in addition to implementing the systems, apparatus, and their modules as purely computer-readable program code, the same procedures can be implemented entirely by logically programming the method steps so that the systems, apparatus, and their modules take the form of logic gates, switches, application-specific integrated circuits, programmable logic controllers, embedded microcontrollers, and the like. Therefore, the system, apparatus, and modules provided by the present invention can be regarded as hardware components, and the modules within them for implementing various programs can also be regarded as structures within those hardware components; modules for performing various functions can likewise be regarded both as software programs implementing the method and as structures within hardware components.
The foregoing has described specific embodiments of the present invention. It is to be understood that the invention is not limited to these specific embodiments, and that those skilled in the art can make various changes or modifications within the scope of the appended claims without departing from the substance of the invention. In the absence of conflict, the embodiments of the present application and the features within them may be combined with one another arbitrarily.

Claims (10)

1. A machine vision positioning method for upper arm prosthesis control, comprising:
step 1: recognizing the target object and completing target positioning with a head camera, using a vision algorithm that combines Adaboost cascade classification with depth images;
step 2: with a camera on the prosthesis, reducing illumination interference through a CNN-trained color name conversion matrix, obtaining the planar two-dimensional pose of the target with the Canny edge detection algorithm, and then controlling the prosthesis to grasp the target.
2. The machine vision positioning method for upper arm prosthesis control according to claim 1, wherein step 1 comprises:
step 1.1: training classifiers with the Adaboost cascade classification method based on Haar, LBP, and HOG features;
step 1.2: testing the classifiers in experiments under both single and complex backgrounds, and through comparative analysis selecting the classifier that meets the preset condition for detecting the feature type of the target sample.
3. The machine vision positioning method for upper arm prosthesis control according to claim 2, wherein step 2 comprises:
step 2.1: recognizing the target with the trained classifier and taking the recognized target area as the region of interest;
step 2.2: after the region of interest is obtained, preprocessing the image information, applying the trained color name conversion matrix as the first processing step to achieve illumination robustness;
step 2.3: aggregating region information with graph-based image segmentation, keeping the target object's information complete while separating it from other objects;
step 2.4: obtaining the maximum-connected-component threshold from the labeled pixels, performing binarization segmentation with that threshold, and finally applying morphological filtering to obtain a region containing only the target object.
4. The machine vision positioning method for upper arm prosthesis control according to claim 1, wherein target boundary information is obtained with the Canny edge detection method, and the 2D pose is then computed;
Gaussian smoothing filtering reduces the influence of noise on the edge detection result, each element of the Gaussian filter template being generated by:
Hij = (1 / (2πσ²)) · exp(-((i - k - 1)² + (j - k - 1)²) / (2σ²)),  1 ≤ i, j ≤ 2k + 1
where Hij is the element in row i, column j; σ is the Gaussian standard deviation, whose value determines the spread of the Gaussian function and corresponds to the filter weights; and 2k + 1 is the size of the window template;
the intensity gradient of the image is computed, the Sobel operator returning the first derivatives in the horizontal (Gx) and vertical (Gy) directions, from which the gradient magnitude G and direction θ of each pixel are determined:
G = √(Gx² + Gy²)
θ = arctan(Gy / Gx)
Sx = [-1 0 1; -2 0 2; -1 0 1],  Sy = [1 2 1; 0 0 0; -1 -2 -1]
where Sx is the Sobel operator in the x direction, used to detect edges in the y direction, and Sy is the Sobel operator in the y direction, used to detect edges in the x direction, the edge direction being perpendicular to the gradient direction;
if A = [a b c; d e f; g h i] is a 3 × 3 pixel window in the image and the gradient of its center pixel e is to be computed, the gradient values of e in the x and y directions, Gx and Gy, are obtained by convolving the window with the Sobel operators:
Gx = Sx * A = (c + 2f + i) - (a + 2d + g)
Gy = Sy * A = (a + 2b + c) - (g + 2h + i)
where a through i denote the element values of window A.
5. The machine vision positioning method for upper arm prosthesis control according to claim 1, wherein non-maximum suppression is applied to eliminate spurious responses from edge detection;
after non-maximum suppression, the remaining pixels represent candidate edges in the image; two hysteresis thresholds Tmin and Tmax are set, and if the gray-level gradient of a pixel is above Tmax, it is judged a true boundary; if below Tmin, it is discarded; if between Tmin and Tmax, the pixel is kept as a boundary only if it is connected to an already-confirmed boundary point, and discarded otherwise.
6. A machine vision positioning system for upper arm prosthesis control, comprising:
module M1: recognizes the target object and completes target positioning with a head camera, using a vision algorithm that combines Adaboost cascade classification with depth images;
module M2: with a camera on the prosthesis, reduces illumination interference through a CNN-trained color name conversion matrix, obtains the planar two-dimensional pose of the target with the Canny edge detection algorithm, and then controls the prosthesis to grasp the target.
7. The machine vision positioning system for upper arm prosthesis control according to claim 6, wherein module M1 comprises:
module M1.1: trains classifiers with the Adaboost cascade classification method based on Haar, LBP, and HOG features;
module M1.2: tests the classifiers in experiments under both single and complex backgrounds, and through comparative analysis selects the classifier that meets the preset condition for detecting the feature type of the target sample.
8. The machine vision positioning system for upper arm prosthesis control according to claim 7, wherein module M2 comprises:
module M2.1: recognizes the target with the trained classifier and takes the recognized target area as the region of interest;
module M2.2: after the region of interest is obtained, preprocesses the image information, applying the trained color name conversion matrix as the first processing step to achieve illumination robustness;
module M2.3: aggregates region information with graph-based image segmentation, keeping the target object's information complete while separating it from other objects;
module M2.4: obtains the maximum-connected-component threshold from the labeled pixels, performs binarization segmentation with that threshold, and finally applies morphological filtering to obtain a region containing only the target object.
9. The machine vision positioning system for upper arm prosthesis control according to claim 6, wherein target boundary information is obtained with the Canny edge detection method, and the 2D pose is then computed;
Gaussian smoothing filtering reduces the influence of noise on the edge detection result, each element of the Gaussian filter template being generated by:
Hij = (1 / (2πσ²)) · exp(-((i - k - 1)² + (j - k - 1)²) / (2σ²)),  1 ≤ i, j ≤ 2k + 1
where Hij is the element in row i, column j; σ is the Gaussian standard deviation, whose value determines the spread of the Gaussian function and corresponds to the filter weights; and 2k + 1 is the size of the window template;
the intensity gradient of the image is computed, the Sobel operator returning the first derivatives in the horizontal (Gx) and vertical (Gy) directions, from which the gradient magnitude G and direction θ of each pixel are determined:
G = √(Gx² + Gy²)
θ = arctan(Gy / Gx)
Sx = [-1 0 1; -2 0 2; -1 0 1],  Sy = [1 2 1; 0 0 0; -1 -2 -1]
where Sx is the Sobel operator in the x direction, used to detect edges in the y direction, and Sy is the Sobel operator in the y direction, used to detect edges in the x direction, the edge direction being perpendicular to the gradient direction;
if A = [a b c; d e f; g h i] is a 3 × 3 pixel window in the image and the gradient of its center pixel e is to be computed, the gradient values of e in the x and y directions, Gx and Gy, are obtained by convolving the window with the Sobel operators:
Gx = Sx * A = (c + 2f + i) - (a + 2d + g)
Gy = Sy * A = (a + 2b + c) - (g + 2h + i)
where a through i denote the element values of window A.
10. The machine vision positioning system for upper arm prosthesis control according to claim 6, wherein non-maximum suppression is applied to eliminate spurious responses from edge detection;
after non-maximum suppression, the remaining pixels represent candidate edges in the image; two hysteresis thresholds Tmin and Tmax are set, and if the gray-level gradient of a pixel is above Tmax, it is judged a true boundary; if below Tmin, it is discarded; if between Tmin and Tmax, the pixel is kept as a boundary only if it is connected to an already-confirmed boundary point, and discarded otherwise.
Application CN202111462738.6A, filed 2021-12-02 (priority date 2021-12-02): Machine vision positioning method and system for upper arm prosthesis control. Status: Pending. Publication: CN114140607A.

Priority Applications (1)

Application Number: CN202111462738.6A
Priority Date / Filing Date: 2021-12-02
Title: Machine vision positioning method and system for upper arm prosthesis control

Publications (1)

Publication Number: CN114140607A
Publication Date: 2022-03-04

Family

ID=80387378

Country Status (1)

Country: CN; Publication: CN114140607A

Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination