CN115972202A - Method, robot, device, medium and product for controlling operation of a robot arm - Google Patents


Info

Publication number
CN115972202A
Authority
CN
China
Prior art keywords
target
target object
point
contour
area
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211582270.9A
Other languages
Chinese (zh)
Inventor
纪尧姆·克莱贝
嵇超
毛崇兆
卢策吾
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Feixi Technology Co ltd
Flexiv Robotics Ltd
Original Assignee
Feixi Technology Co ltd
Flexiv Robotics Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Feixi Technology Co ltd, Flexiv Robotics Ltd filed Critical Feixi Technology Co ltd
Priority to CN202211582270.9A priority Critical patent/CN115972202A/en
Publication of CN115972202A publication Critical patent/CN115972202A/en
Pending legal-status Critical Current

Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P 90/00: Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P 90/02: Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]

Landscapes

  • Image Analysis (AREA)

Abstract

The application relates to a method for controlling the operation of a mechanical arm, and to a robot, a computer device, a storage medium and a computer program product. A depth image of a target object in a preset posture is acquired, and a target three-dimensional model of the target object is obtained from the depth image. The contour center line of the target object is then determined from feature points in the target three-dimensional model together with the depth image, and a region to be operated on is determined based on the contour center line. The mechanical arm is controlled to perform a stiffness test on the region to be operated on, and a target region is screened out of that region according to the stiffness test result, so that the mechanical arm can be controlled to perform a specified operation on the target region of the target object. The target region of the target object that actually needs to be operated on is thus obtained accurately, enabling precise operation on a specific part.

Description

Method, robot, device, medium and product for controlling operation of a robot arm
Technical Field
The application relates to the technical field of robot vision and precise force control of mechanical arms, and in particular to a method, a robot, a computer device, a medium and a product for controlling the operation of a mechanical arm.
Background
With the continuous development of modern science and technology, robots are increasingly used to replace manual labor. For example, mechanical arms are used to massage the human body, repeatedly massaging the regions where acupuncture points or muscles are located in a specific manner.
Conventionally, when a mechanical arm is controlled to operate, a human operator first performs the task manually (for example, a masseur gives a manual massage), the visual and force data generated during the manual operation are recorded and processed, and the mechanical arm then imitates the operation based on these data.
However, this approach still requires a manual pre-operation to obtain a fixed operation route, so the mechanical arm cannot accurately perform a designated operation on certain areas, such as massaging specific acupuncture points or muscles.
Disclosure of Invention
In view of the above, there is a need to provide a method, a robot, a computer device, a medium and a product capable of accurately controlling the operation of a robot arm.
In a first aspect, the present application provides a method of controlling operation of a robotic arm, the method comprising:
acquiring a depth image of a target object in a preset posture, and acquiring a target three-dimensional model of the target object according to the depth image;
acquiring a plurality of target feature points in a target three-dimensional model, and determining a contour central line of a target object according to the plurality of target feature points and the depth image;
determining a region to be operated of the target object based on the contour center line;
controlling the mechanical arm to perform a stiffness test on the area to be operated, and screening the area to be operated according to the stiffness test result to obtain a target area;
and controlling the mechanical arm to perform a specified operation on the target area of the target object.
In one embodiment, the step of obtaining a target three-dimensional model of the target object from the depth image comprises: acquiring a standard three-dimensional model;
obtaining model parameters of a three-dimensional model of the target object according to the depth image;
acquiring a three-dimensional coordinate corresponding to the target object based on the model parameters;
and adjusting the standard three-dimensional model based on the three-dimensional coordinates to obtain the three-dimensional model of the target object.
In one embodiment, the step of determining the contour centerline of the target object from the plurality of target feature points and the depth image comprises:
carrying out segmentation processing on the depth image to obtain a two-dimensional plane contour of the target object;
and projecting the plurality of target characteristic points to an area where the two-dimensional plane contour is located, and obtaining a contour central line of the target object based on the projected target characteristic points.
In one embodiment, the target feature points are feature points on a central trunk of the target three-dimensional model, and the target feature points comprise a first feature point, a second feature point and a third feature point; the step of obtaining the contour center line of the target object based on the projected target feature points comprises the following steps:
uniformly inserting a first number of feature points between the projected first feature point and the projected second feature point;
uniformly inserting a second number of feature points between the projected second feature point and the projected third feature point;
and taking the line formed by connecting the inserted feature points and the projected target feature points as the contour center line of the target object.
In one embodiment, the step of determining the region to be operated on of the target object based on the contour center line comprises:
acquiring a starting point and an end point of a target line segment of the contour central line;
respectively acquiring a first straight line which passes through a starting point and is perpendicular to a target line segment, and a second straight line which passes through an end point and is perpendicular to the target line segment;
determining a first intersection point of the first straight line and the left side of the two-dimensional plane contour and a second intersection point of the first straight line and the right side of the two-dimensional plane contour;
taking the midpoint of a connecting line between the first intersection point and the starting point as a first target vertex, and taking the midpoint of a connecting line between the second intersection point and the starting point as a second target vertex;
determining a third intersection point of the second straight line and the left side of the two-dimensional plane contour and a fourth intersection point of the second straight line and the right side of the two-dimensional plane contour;
taking the midpoint of a connecting line between the third intersection point and the end point as a third target vertex, and taking the midpoint of a connecting line between the fourth intersection point and the end point as a fourth target vertex;
and taking a rectangle formed by the first target vertex, the second target vertex, the third target vertex and the fourth target vertex as a to-be-operated area of the target object.
In one embodiment, the step of controlling the mechanical arm to perform a stiffness test on the area to be operated and screening the area to be operated according to the stiffness test result to obtain the target area comprises the following steps:
dividing the area to be operated according to a preset standard to obtain a plurality of grid areas;
controlling the mechanical arm to perform a stiffness test on each grid area to obtain a plurality of stiffness values;
and taking the grid areas whose stiffness values meet the target-value requirement as the target area.
In a second aspect, the present application also provides an apparatus for controlling the operation of a robotic arm, the apparatus comprising:
the three-dimensional model acquisition module is used for acquiring a depth image of the target object in a preset posture and acquiring a target three-dimensional model of the target object according to the depth image;
the contour center line determining module is used for acquiring a plurality of target feature points in the target three-dimensional model and determining a contour center line of the target object according to the plurality of target feature points and the depth image;
the to-be-operated area determining module is used for determining a to-be-operated area of the target object based on the contour center line;
the target area screening module is used for performing a stiffness test on the area to be operated and screening the area to be operated according to the stiffness test result to obtain a target area;
and the mechanical arm operation module is used for controlling the mechanical arm to perform appointed operation on the target area of the target object.
In a third aspect, the present application further provides a robot, the robot comprising a memory and a processor, the memory storing a computer program, the processor implementing the method steps of any one of the first aspect when executing the computer program.
In a fourth aspect, the present application further provides a computer device comprising a memory and a processor, the memory storing a computer program, the processor implementing the method steps of any one of the first aspect when executing the computer program.
In a fifth aspect, the present application further provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, performs the method steps of any one of the first aspect.
In a sixth aspect, the present application also provides a computer program product comprising a computer program that, when executed by a processor, performs the method steps of any one of the first aspect.
According to the method, device, robot, computer equipment, storage medium and computer program product for controlling the operation of a mechanical arm described above, a depth image of the target object in a preset posture is acquired, and a target three-dimensional model of the target object is obtained from the depth image. The contour center line of the target object is then determined from feature points in the target three-dimensional model and the depth image, and the region to be operated on is determined based on the contour center line. The mechanical arm is then controlled to perform a stiffness test on that region, and the target region is screened out according to the test result, so that the mechanical arm is controlled to perform the specified operation on the target region of the target object. The target region that needs to be operated on can thus be obtained accurately, achieving precise operation on a specific part.
Drawings
FIG. 1 is a diagram of an exemplary application of a method for controlling operation of a robotic arm;
FIG. 2 is a schematic flow chart diagram of a method for controlling operation of a robotic arm in one embodiment;
FIG. 3 is a schematic flow chart diagram illustrating the steps for obtaining a three-dimensional model of an object in one embodiment;
FIG. 4 is a schematic diagram of a standard three-dimensional model in one embodiment;
FIG. 5 is a graphical illustration of a comparison of depth information to a three-dimensional model in one embodiment;
FIG. 6 is a schematic diagram of the structure of the contour centerline in one embodiment;
FIG. 7 is a schematic diagram of a process for obtaining a target region in one embodiment;
FIG. 8 is a schematic flow chart illustrating an AI vision and robot force controlled massage method according to an embodiment;
FIG. 9 is a schematic flow chart illustrating optimization of a three-dimensional model of a human body in one embodiment;
FIG. 10 is a block diagram of an apparatus for controlling a robotic arm according to one embodiment;
FIG. 11 is a block diagram showing an internal structure of a robot according to an embodiment;
FIG. 12 is a diagram illustrating an internal structure of a computer device according to an embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
The method for controlling the operation of a mechanical arm provided by the embodiments of the present application can be applied in the environment shown in fig. 1, in which the computer device 102 communicates with the robot 104 over a network. The computer device 102 and the robot 104 may each execute the method of controlling the operation of a mechanical arm individually, or execute it jointly. Taking independent execution as an example, the computer device 102 or the robot 104 is configured to acquire a depth image of a target object in a preset posture, obtain a target three-dimensional model of the target object from the depth image, acquire a plurality of target feature points in the target three-dimensional model, determine the contour center line of the target object from the target feature points and the depth image, determine the region to be operated on of the target object based on the contour center line, and control the mechanical arm to perform a stiffness test on that region. The computer device 102 or the robot 104 is further configured to screen a target region out of the region to be operated on according to the stiffness test result, and to control the mechanical arm to perform a specified operation on the target region of the target object. The computer device 102 may be, but is not limited to, a personal computer, a laptop, a tablet, or one of various robots.
In one embodiment, as shown in fig. 2, a method for controlling the operation of a robotic arm is provided, which is illustrated as the method applied to the computer device 102 in fig. 1, wherein the computer device 102 may be a terminal, the method comprising the steps of:
s202: and acquiring a depth image of the target object in a preset posture, and acquiring a target three-dimensional model of the target object according to the depth image.
The preset postures may include, but are not limited to, a prone posture, a lateral posture and the like. For example, when the mechanical arm is controlled to massage the back of a human body, a depth (RGB-D) image containing the target object in a prone posture is captured by a depth camera. The terminal obtains the captured depth image from the depth camera and uses deep learning to estimate the parameters of a three-dimensional model of the human body, which represent the three-dimensional coordinates of each point on the body. The terminal then adjusts a standard three-dimensional model based on these parameters to obtain the target three-dimensional model of the target object.
A depth image is an image in which the pixel values are the distances (depths) from the image acquisition device to points in the scene; it reflects the geometry of the visible surfaces of the scene. The standard three-dimensional model is a human body three-dimensional model in the camera coordinate system obtained by a human mesh recovery algorithm. The camera coordinate system has its origin at the optical center of the camera, its X and Y axes parallel to the X and Y axes of the image coordinate system, and its Z axis along the optical axis of the camera. The terminal adjusts the points of the standard three-dimensional model according to the three-dimensional coordinates of each point on the human body to obtain the target three-dimensional model of the target object.
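The camera coordinate system described above can be made concrete with a standard pinhole back-projection, in which each depth pixel maps to a 3D point in front of the optical center. A minimal sketch, assuming illustrative intrinsics (the values of fx, fy, cx, cy are made up and not taken from the patent):

```python
# Back-project a depth pixel into the camera coordinate system:
# origin at the optical center, Z along the optical axis.
def pixel_to_camera(u, v, depth, fx=600.0, fy=600.0, cx=320.0, cy=240.0):
    """Convert pixel (u, v) with depth `depth` (meters) to (X, Y, Z)."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return (x, y, depth)
```

A pixel at the principal point maps onto the optical axis: `pixel_to_camera(320, 240, 1.0)` gives `(0.0, 0.0, 1.0)`.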
S204: and acquiring a plurality of target characteristic points in the target three-dimensional model, and determining the contour central line of the target object according to the plurality of target characteristic points and the depth image.
Taking the control of the mechanical arm to massage the back of a human body as an example, the target feature points in the target three-dimensional model are points on the spine line of the target object's back. The terminal segments the depth image with a segmentation algorithm to obtain the two-dimensional contour range of the target object, i.e. a two-dimensional plane image of the target object's back. The feature points of the three-dimensional model are then projected into this two-dimensional contour range to obtain the spine-line feature points in the two-dimensional plane of the target object. The terminal uniformly inserts additional feature points between them to obtain all the feature points on the two-dimensional spine line and connects them; the resulting line represents the two-dimensional spine line of the target object, i.e. its contour center line.
S206: and determining the area to be operated of the target object based on the contour central line.
The contour center line together with the two-dimensional contour range can represent every region of the target object in the two-dimensional plane, and the region to be operated on can be represented by a part of the contour center line. For the back of a human body, massage is usually focused near the waist at the lower end of the back, so the terminal takes the end portion of the contour center line and obtains the back contour range corresponding to that portion as the region to be operated on of the target object. In practical applications, the region to be operated on may also be a contour range at the upper end of the back; the terminal can take any part of the contour center line according to the actual application requirements and derive the region to be operated on from it.
S208: and controlling the mechanical arm to perform rigidity test on the area to be operated, and screening the area to be operated according to the rigidity test result to obtain a target area.
Stiffness is the ability of a material or structure to resist elastic deformation under load. After determining the region to be operated on of the target object, the terminal sends the region information to the mechanical arm and controls it to perform a stiffness test on the corresponding region of the target object's back. The mechanical arm presses different points of the region with a fixed force, and the reaction force measured at the end of the arm is sent to the terminal and stored; this reaction force serves as the stiffness measure. Different areas of the back of a human body have different stiffness: bone positions, for example, have the highest stiffness. The terminal screens the stored stiffness data, first eliminating readings with excessively high stiffness along with the corresponding points, and thus screens the target region out of the region to be operated on.
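The screening step described above can be sketched as follows. Here `press_and_measure` is a hypothetical stand-in for the real arm interface that presses a grid cell with fixed force and returns the measured reaction force; the threshold logic simply discards overly stiff cells (e.g. those over bone):

```python
# Press each grid cell, record a stiffness proxy, and keep only cells
# below a rejection threshold. `press_and_measure` is a callable that
# stands in for the real robot interface (hypothetical).
def screen_grid(cells, press_and_measure, max_stiffness):
    readings = {cell: press_and_measure(cell) for cell in cells}
    # Cells with very high stiffness (e.g. over bone) are excluded.
    return [cell for cell, k in readings.items() if k <= max_stiffness]
```

With simulated readings of 5.0, 50.0 and 7.0 N for three cells and a threshold of 10.0, only the first and third cells survive the screen.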
S210: and controlling the mechanical arm to perform a specified operation on the target area of the target object.
The terminal sends the target region information to the mechanical arm and controls the arm to operate on the target region in a specified manner, for example massaging the target region on the target object's back with a certain force.
According to the above method for controlling the operation of a mechanical arm, a depth image of the target object in a preset posture is acquired, a target three-dimensional model of the target object is obtained from the depth image, and the contour center line of the target object is determined from feature points in the target three-dimensional model and the depth image. The region to be operated on is determined based on the contour center line, the mechanical arm is controlled to perform a stiffness test on that region, and the target region is screened out according to the test result, so that the mechanical arm is controlled to perform the specified operation on the target region of the target object. The target region that needs to be operated on can thus be obtained accurately, achieving precise operation on a specific part.
In one embodiment, as shown in fig. 3, the step of obtaining a target three-dimensional model of the target object from the depth image includes:
s302: and acquiring a standard three-dimensional model.
The standard three-dimensional model is a human body three-dimensional model in the camera coordinate system obtained by a human mesh recovery algorithm, and may also be called a parameterized human body model (SMPL). A human body can be understood as a base model plus deformations applied to that base. Principal Component Analysis (PCA) of these deformations yields the shape parameters (shape), a low-dimensional description of body shape. Meanwhile, a kinematic tree represents the posture of the body, i.e. the rotation of each joint relative to its parent node; each such rotation can be expressed as a three-dimensional vector, and the local rotation vectors of all joints together form the pose parameters (pose) of the model. Specifically, as shown in fig. 4, fig. 4 is an SMPL human body model, i.e. the standard three-dimensional model.
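As a rough illustration of the shape part of this SMPL-style decomposition (pose-dependent deformation and skinning are omitted, and the function and numbers are illustrative rather than the actual SMPL implementation): a vertex is a template position plus a linear combination of shape directions weighted by the low-dimensional shape parameters beta.

```python
# Toy shape blend: template vertex plus beta-weighted shape directions.
def shaped_vertex(template, shape_dirs, betas):
    """template: (x, y, z); shape_dirs: one (dx, dy, dz) per beta."""
    x, y, z = template
    for b, (dx, dy, dz) in zip(betas, shape_dirs):
        x += b * dx
        y += b * dy
        z += b * dz
    return (x, y, z)
```

For example, a single shape direction (1, 0, 0) with beta = 2 displaces a template vertex at the origin to (2, 0, 0).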
S304: and obtaining model parameters of the three-dimensional model of the target object according to the depth image.
After the terminal obtains the depth image, parameters of the human body three-dimensional model are obtained through deep learning.
S306: and acquiring the three-dimensional coordinates corresponding to the target object based on the model parameters.
The parameters represent three-dimensional coordinates of each point in the human body, namely three-dimensional coordinates corresponding to the target object.
S308: and adjusting the standard three-dimensional model based on the three-dimensional coordinates to obtain the three-dimensional model of the target object.
As shown in fig. 5, the black points represent the standard three-dimensional model, while the black solid is the human body model obtained from the image of the actual scene. It can be seen that the posture and position of the target object in the standard three-dimensional model differ from those of the human body model in the actual three-dimensional scene recovered from the depth image. The terminal therefore adjusts the standard three-dimensional model using the three-dimensional coordinates obtained from the depth image to obtain the three-dimensional model of the target object.
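A minimal sketch of this adjustment step, assuming a simple per-vertex blend toward the coordinates recovered from the depth image (the patent does not specify the exact fitting procedure, so the `alpha` parameter and the blending rule are assumptions for illustration):

```python
# Move each standard-model vertex toward its observed counterpart.
# alpha = 1.0 snaps fully to the observed coordinates.
def adjust_model(model_pts, observed_pts, alpha=1.0):
    return [
        (mx + alpha * (ox - mx), my + alpha * (oy - my), mz + alpha * (oz - mz))
        for (mx, my, mz), (ox, oy, oz) in zip(model_pts, observed_pts)
    ]
```

With alpha = 0.5 each vertex moves halfway toward the observation, which keeps the adjusted model close to the standard template when the depth data is noisy.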
In this embodiment, the three-dimensional model of the target object can be accurately obtained by obtaining the standard three-dimensional model, obtaining the model parameters of the three-dimensional model of the target object according to the depth image, then obtaining the three-dimensional coordinates corresponding to the target object based on the model parameters, and adjusting the standard three-dimensional model based on the three-dimensional coordinates, so as to provide a basis for obtaining the contour center line of the target object.
In one embodiment, the step of determining a contour centerline of the target object from the plurality of target feature points and the depth image comprises: carrying out segmentation processing on the depth image to obtain a two-dimensional plane contour of the target object; and projecting the plurality of target characteristic points to an area where the two-dimensional plane contour is located, and obtaining a contour central line of the target object based on the projected target characteristic points.
As shown in fig. 4, the target feature points 3012, 3502 and 3159 are points along the spine of the back of the human body. When the terminal adjusts the three-dimensional model, the positions of these feature points are fixed, so they can represent points along the spine of the target object's back; in fig. 5 the gray straight line marks the direction of the back spine. In practice the spine line differs because the posture of the target object differs. The terminal therefore segments the actually acquired depth image to obtain the two-dimensional plane contour of the target object, projects the target feature points into the area of that contour, uniformly inserts additional feature points between them to obtain all the feature points on the two-dimensional spine line, and connects them; the resulting line represents the two-dimensional spine line of the target object, i.e. its contour center line.
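The projection step can be sketched with a pinhole camera model: each 3D spine feature point is projected into the image plane, and only points falling inside the segmented back mask are kept. The intrinsics and the `inside_mask` predicate are illustrative assumptions, not details from the patent:

```python
# Project a 3D camera-frame point onto the image plane (pinhole model).
def project_point(X, Y, Z, fx=600.0, fy=600.0, cx=320.0, cy=240.0):
    return (fx * X / Z + cx, fy * Y / Z + cy)

# Keep only projections that land inside the segmented back contour.
def project_into_contour(points_3d, inside_mask):
    projected = [project_point(*p) for p in points_3d]
    return [uv for uv in projected if inside_mask(uv)]
```

A point on the optical axis, e.g. (0, 0, 1), projects to the principal point (320, 240).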
In this embodiment, a two-dimensional plane contour of a target object is obtained by segmenting a depth image, then a plurality of target feature points are projected to an area where the two-dimensional plane contour is located, a contour center line of the target object is obtained based on the projected target feature points, and an actual contour center line of the target object can be accurately obtained, so that it is ensured that an area to be operated of the target object can be accurately obtained according to the contour center line.
In one embodiment, the target feature points are feature points on the central trunk of the target three-dimensional model and comprise a first feature point, a second feature point and a third feature point. The step of obtaining the contour center line of the target object based on the projected target feature points comprises: uniformly inserting a first number of feature points between the projected first feature point and the projected second feature point; uniformly inserting a second number of feature points between the projected second feature point and the projected third feature point; and taking the line formed by connecting the inserted feature points and the projected target feature points as the contour center line of the target object.
As shown in fig. 6, the projected first feature point is A, the second feature point is B, and the third feature point is C. The terminal inserts 10 feature points (the small black dots in fig. 6) between A and B, and 4 feature points between B and C; the line connecting these feature points with A, B and C is then the contour center line of the target object, shown as the gray line in fig. 6.
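The interpolation in this example can be sketched directly: insert n evenly spaced points between two projected feature points, then chain A to B (10 inserted points) and B to C (4 inserted points) into the polyline used as the contour center line. Function names and the 2D point representation are illustrative:

```python
# Insert n evenly spaced points strictly between p and q.
def insert_between(p, q, n):
    (x0, y0), (x1, y1) = p, q
    return [
        (x0 + (x1 - x0) * i / (n + 1), y0 + (y1 - y0) * i / (n + 1))
        for i in range(1, n + 1)
    ]

# Chain A-B and B-C into the full center-line polyline.
def centerline(a, b, c, n_ab=10, n_bc=4):
    return [a] + insert_between(a, b, n_ab) + [b] + insert_between(b, c, n_bc) + [c]
```

With the defaults the polyline contains 17 points: A, 10 inserted points, B, 4 inserted points, and C.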
In this embodiment, a first number of feature points are uniformly inserted between the projected first and second feature points, a second number of feature points between the projected second and third feature points, and the line formed by connecting the inserted feature points with the projected target feature points is taken as the contour center line of the target object. The actual contour center line of the target object can thus be obtained accurately, ensuring that the region to be operated on can be accurately derived from it.
In one embodiment, the step of determining the region to be operated on of the target object based on the contour center line includes: acquiring a starting point and an end point of a target line segment of the contour central line; respectively acquiring a first straight line which passes through the starting point and is perpendicular to the target line segment, and a second straight line which passes through the end point and is perpendicular to the target line segment; determining a first intersection point of the first straight line and the left side of the two-dimensional plane contour and a second intersection point of the first straight line and the right side of the two-dimensional plane contour; taking the midpoint of a connecting line between the first intersection point and the starting point as a first target vertex, and taking the midpoint of a connecting line between the second intersection point and the starting point as a second target vertex; determining a third intersection point of the second straight line and the left side of the two-dimensional plane contour and a fourth intersection point of the second straight line and the right side of the two-dimensional plane contour; taking the midpoint of a connecting line between the third intersection point and the end point as a third target vertex, and taking the midpoint of a connecting line between the fourth intersection point and the end point as a fourth target vertex; and taking a rectangle formed by the first target vertex, the second target vertex, the third target vertex and the fourth target vertex as a to-be-operated area of the target object.
As shown in fig. 6, the start point and the end point of the target line segment of the contour center line are obtained. The target line segment is the line segment BC between the feature point B and the feature point C, the start point being the feature point B and the end point being the feature point C. A straight line passing through the point B and perpendicular to BC is drawn; its first intersection point with the left side of the two-dimensional plane contour is B1, and its second intersection point with the right side of the two-dimensional plane contour is B2. The midpoint of the connecting line between B1 and B is taken as a first target vertex B3, and the midpoint of the connecting line between B2 and B is taken as a second target vertex B4. Similarly, a straight line passing through the point C and perpendicular to BC is drawn; its third intersection point with the left side of the two-dimensional plane contour is C1, and its fourth intersection point with the right side of the two-dimensional plane contour is C2. The midpoint of the connecting line between C1 and C is taken as a third target vertex C3, and the midpoint of the connecting line between C2 and C is taken as a fourth target vertex C4. The rectangle formed by B3, B4, C3 and C4 is taken as the region to be operated of the target object, shown shaded in fig. 6.
In this embodiment, the start point and the end point of the target line segment of the contour center line are obtained, a first straight line passing through the start point and perpendicular to the target line segment and a second straight line passing through the end point and perpendicular to the target line segment are respectively acquired, and the region to be operated of the target object is obtained based on the first straight line and the second straight line, so that the target region can be accurately determined in order to control the mechanical arm to perform the specified operation on it.
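Assuming the four intersection points with the contour edges have already been found, the vertex construction above reduces to midpoint computations. The coordinates below are illustrative (segment BC vertical, contour edges at x = ±4), not values from the embodiment:

```python
import numpy as np

def region_vertices(start, end, left_hits, right_hits):
    # Given the start/end of the target segment and the intersections of
    # its perpendiculars with the left/right contour edges, return the
    # four target vertices as midpoints, per the construction in the text.
    b1, c1 = left_hits    # first and third intersection points
    b2, c2 = right_hits   # second and fourth intersection points
    b3 = (b1 + start) / 2  # first target vertex
    b4 = (b2 + start) / 2  # second target vertex
    c3 = (c1 + end) / 2    # third target vertex
    c4 = (c2 + end) / 2    # fourth target vertex
    return b3, b4, c3, c4

# Hypothetical coordinates: B and C on the centerline, contour at x = +/-4.
B = np.array([0.0, 10.0])
C = np.array([0.0, 14.0])
B3, B4, C3, C4 = region_vertices(
    B, C,
    left_hits=(np.array([-4.0, 10.0]), np.array([-4.0, 14.0])),
    right_hits=(np.array([4.0, 10.0]), np.array([4.0, 14.0])))
```

With these inputs the rectangle B3-B4-C4-C3 spans half the contour width on each side of the centerline, which is the effect of taking midpoints between the centerline and the contour edges.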
In one embodiment, as shown in fig. 7, the step of controlling the mechanical arm to perform a stiffness test on the region to be operated, and screening the region to be operated according to a stiffness test result to obtain a target region includes:
s702: and dividing the area to be operated according to a preset standard to obtain a plurality of grid areas.
The terminal divides the region to be operated according to a preset standard; for example, the region to be operated is evenly divided into a 5×5 grid to obtain a plurality of grid regions.
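A minimal sketch of this division step, assuming the region to be operated is an axis-aligned rectangle (the general case would first rotate coordinates into the rectangle's frame); the bounds are illustrative:

```python
import numpy as np

def split_into_grid(x0, y0, x1, y1, n=5):
    # Evenly divide the rectangular region to be operated into an n x n
    # grid and return the center point of every grid cell.
    xs = np.linspace(x0, x1, n + 1)          # n + 1 cell boundaries in x
    ys = np.linspace(y0, y1, n + 1)          # n + 1 cell boundaries in y
    cx = (xs[:-1] + xs[1:]) / 2              # cell centers in x
    cy = (ys[:-1] + ys[1:]) / 2              # cell centers in y
    return [(x, y) for y in cy for x in cx]

# Hypothetical rectangle bounds; yields the 25 cells of a 5 x 5 grid.
cells = split_into_grid(-2.0, 10.0, 2.0, 14.0)
```

Each returned center is where the end of the mechanical arm would later be positioned for the stiffness test.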
S704: and controlling the mechanical arm to carry out rigidity test on each grid area to obtain a plurality of rigidity values.
The terminal controls the tail end of the mechanical arm to move to each grid position with the tail end perpendicular to the plane of the back, presses the grid region on the back of the target object with a given force, and records the reaction force received by the tail end of the mechanical arm, which is taken as the rigidity value.
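The probing loop of step S704 might look as follows. `arm.move_to` and `arm.press` are hypothetical placeholders for a force-controlled robot interface, not a real vendor API:

```python
def measure_stiffness(arm, cell_poses, press_force=10.0):
    # Press each grid cell perpendicular to the back plane with a given
    # force and record the reaction force at the arm's end; the recorded
    # reaction force serves as the rigidity value for that cell.
    stiffness = []
    for pose in cell_poses:
        arm.move_to(pose)                  # position end effector over the cell
        reaction = arm.press(press_force)  # apply the given force, read reaction
        stiffness.append(reaction)
    return stiffness
```

In practice the press would be run under the arm's force-control mode so that the commanded force, not position, is regulated while the reaction is measured.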
S706: and taking the grid area corresponding to the rigidity value meeting the requirement of the target value in the rigidity values as the target area.
According to the plurality of rigidity values, the grid regions at muscle stiffness positions are screened out as the target region.
In this embodiment, the area to be operated is divided according to the preset standard to obtain a plurality of grid areas, the mechanical arm is controlled to perform rigidity test on each grid area to obtain a plurality of rigidity values, the grid area corresponding to the rigidity value meeting the requirement of the target value in the plurality of rigidity values is used as the target area, and the target area can be accurately obtained, so that the mechanical arm is controlled to perform specified operation on the target area of the target object, and accurate operation on a specific part is realized.
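A minimal sketch of the screening in S706, assuming each grid region's rigidity value has already been measured; the cell names and the threshold band are illustrative, not values from the embodiment:

```python
def screen_target_cells(cells, stiffness_values, low, high):
    # Keep the grid regions whose rigidity value meets the target-value
    # requirement: above the fat readings but below the bone readings,
    # i.e. the muscle-stiffness regions.
    return [c for c, s in zip(cells, stiffness_values) if low <= s <= high]

# Illustrative readings: bone reacts most strongly, fat least.
cells = ["g1", "g2", "g3", "g4"]
stiffness = [40.0, 22.0, 8.0, 25.0]
targets = screen_target_cells(cells, stiffness, low=15.0, high=30.0)
```

Here `g1` is rejected as a bone reading and `g3` as a fat reading, leaving the muscle-stiffness cells as the target area.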
In one embodiment, as shown in fig. 8, there is provided a massage method based on AI vision and robot control, the method comprising the steps of:
(1) Acquire an RGBD picture containing a human body in a prone posture within the field of view by using a depth camera.
(2) Generate a rough human body three-dimensional model in the camera coordinate system, and select three points A, B and C along the back spine line on the SMPL human body three-dimensional model (template) by using a mouse.
(3) Given the positions of the human body key points, optimize the posture and position of the human body three-dimensional model from step (2) by combining the depth information in the RGBD picture, obtaining an accurate human body three-dimensional model.
As shown in fig. 9, the human body 3D model obtained by the algorithm is a rough model, i.e., the rough human body three-dimensional model in the camera coordinate system. Based on the 2D human body key points, key points along the back spine are clicked with the mouse in the human body three-dimensional model, for example the three points A, B and C shown in the figure, and the rough human body 3D model is calibrated by the three-dimensional coordinates of these key points to obtain an accurate human body 3D model with optimized posture and position.
(4) Obtain the range of the human body contour in the RGB picture by a segmentation algorithm, project the three-dimensional coordinates of the three points A, B and C obtained in step (3) onto the two-dimensional plane, uniformly interpolate 10 points between A and B and 4 points between B and C, and take the line formed by connecting these points as the spine line.
(5) Draw a straight line perpendicular to BC through the point B and find its two intersection points B1 and B2 at the junction of the human body and the background; find the points C1 and C2 through the point C in the same way. The midpoints of B and B1, B and B2, C and C1, and C and C2, namely the four points B3, B4, C3 and C4, are the vertices of a rectangle containing the massage area.
(6) Evenly divide the rectangular area defined by B3, B4, C3 and C4 into a 5×5 grid, control the tail end of the mechanical arm to move to each grid position perpendicular to the back plane, apply a given force, and record the reaction force received by the tail end of the mechanical arm, i.e., the rigidity. The reaction force is largest at bone positions, second largest at muscle stiffness positions, and smallest in fat areas.
(7) Determine fatigued muscle areas according to the recorded reaction forces, and determine the bone (spine) area, the muscle stiffness area and the fat area by combining the results of the visual algorithm.
(8) Control the mechanical arm to massage the muscle stiffness area while avoiding the bone (spine) area.
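The classification in steps (6) and (7) can be sketched as follows; the thresholds and reaction-force values are purely illustrative, since the actual readings depend on the applied force and the subject:

```python
def classify_regions(stiffness_values, bone_thresh=30.0, stiff_thresh=15.0):
    # Label each grid cell by its reaction force: bone reacts most
    # strongly, stiff muscle second, and fat least.
    labels = []
    for s in stiffness_values:
        if s >= bone_thresh:
            labels.append("bone")          # e.g. spine; to be avoided
        elif s >= stiff_thresh:
            labels.append("stiff_muscle")  # candidate massage region
        else:
            labels.append("fat")
    return labels

# Illustrative readings for three grid cells.
labels = classify_regions([40.0, 22.0, 8.0])
```

The arm would then be commanded to massage only the cells labeled `stiff_muscle`, skipping the `bone` cells.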
In this embodiment, based on the AI vision module, the human body is first modeled in three dimensions and the key points on the human body are automatically identified to obtain the position and approximate range of the region to be massaged; the mechanical arm is then controlled, using force control technology, to perform stiffness estimation on the massage region given by the vision module so as to avoid sensitive regions such as the spine; finally, the mechanical arm performs acupoint pressing and massage over the whole region.
It should be understood that, although the steps in the flowcharts of the above embodiments are shown in sequence as indicated by the arrows, they are not necessarily executed in that sequence. Unless explicitly stated otherwise herein, the execution of these steps is not strictly limited to the order shown, and the steps may be performed in other orders. Moreover, at least part of the steps in the flowcharts of the above embodiments may include multiple sub-steps or stages, which are not necessarily performed at the same time but may be performed at different times; the order of performing these sub-steps or stages is not necessarily sequential, and they may be performed in turn or alternately with other steps or with at least part of the sub-steps or stages of other steps.
Based on the same inventive concept, an embodiment of the present application further provides an apparatus for controlling the operation of a robot arm, which is used to implement the above method of controlling the operation of a robot arm. The solution provided by the apparatus is similar to that described in the above method; therefore, for the specific limitations in one or more embodiments of the apparatus for controlling the operation of the robot arm provided below, reference may be made to the limitations of the above method of controlling the operation of the robot arm, which are not repeated here.
In one embodiment, as shown in fig. 10, there is provided an apparatus for controlling the operation of a robot arm, including: the three-dimensional model acquisition module 10, the contour center line determination module 20, the region to be operated determination module 30, the target region screening module 40 and the mechanical arm operation module 50, wherein:
the three-dimensional model obtaining module 10 is configured to obtain a depth image of the target object in a preset posture, and obtain a target three-dimensional model of the target object according to the depth image;
the contour center line determining module 20 is configured to obtain a plurality of target feature points in the target three-dimensional model, and determine a contour center line of the target object according to the plurality of target feature points and the depth image;
the to-be-operated area determining module 30 is used for determining an to-be-operated area of the target object based on the contour center line;
the target area screening module 40 is used for controlling the mechanical arm to perform rigidity test on the area to be operated and screening the area to be operated according to a rigidity test result to obtain a target area;
and a robot arm operating module 50 for controlling the robot arm to perform a specified operation on the target area of the target object.
In one embodiment, the three-dimensional model acquisition module 10 includes: the three-dimensional model comprises a standard model obtaining unit, a model parameter obtaining unit, a three-dimensional coordinate obtaining unit and a three-dimensional model obtaining unit, wherein:
and the standard model acquisition unit is used for acquiring a standard three-dimensional model.
And the model parameter acquisition unit is used for acquiring the model parameters of the three-dimensional model of the target object according to the depth image.
And the three-dimensional coordinate acquisition unit is used for acquiring the three-dimensional coordinates corresponding to the target object based on the model parameters.
And the three-dimensional model acquisition unit is used for adjusting the standard three-dimensional model based on the three-dimensional coordinates to obtain the three-dimensional model of the target object.
In one embodiment, the contour centerline determination module 20 includes: a plane contour acquisition unit and a contour center line acquisition unit, wherein:
and the plane contour acquisition unit is used for carrying out segmentation processing on the depth image to obtain a two-dimensional plane contour of the target object.
And the contour central line acquisition unit is used for projecting the plurality of target characteristic points to the area where the two-dimensional plane contour is located and obtaining the contour central line of the target object based on the projected target characteristic points.
In one embodiment, the target feature point is a feature point on a central trunk of the target three-dimensional model, the target feature point includes a first feature point, a second feature point and a third feature point, and the contour centerline obtaining unit includes: a first insertion subunit, a second insertion subunit, and a centerline determination subunit, wherein:
the first interpolation subunit is used for uniformly interpolating a first number of feature points between the projected first feature point and the projected second feature point;
the second inserting subunit is used for uniformly inserting a second number of feature points between the second feature point and the third feature point after projection;
and a center line determining subunit, configured to take the line formed by connecting the inserted feature points and the projected target feature points as the contour center line of the target object.
In one embodiment, the to-be-operated region determining module 30 is further configured to obtain a start point and an end point of a target line segment of the contour center line; respectively acquire a first straight line which passes through the start point and is perpendicular to the target line segment, and a second straight line which passes through the end point and is perpendicular to the target line segment; determine a first intersection point of the first straight line and the left side of the two-dimensional plane contour and a second intersection point of the first straight line and the right side of the two-dimensional plane contour; take the midpoint of a connecting line between the first intersection point and the start point as a first target vertex, and take the midpoint of a connecting line between the second intersection point and the start point as a second target vertex; determine a third intersection point of the second straight line and the left side of the two-dimensional plane contour and a fourth intersection point of the second straight line and the right side of the two-dimensional plane contour; take the midpoint of a connecting line between the third intersection point and the end point as a third target vertex, and take the midpoint of a connecting line between the fourth intersection point and the end point as a fourth target vertex; and take the rectangle formed by the first target vertex, the second target vertex, the third target vertex and the fourth target vertex as the to-be-operated region of the target object.
In one embodiment, target area filtering module 40 includes: a grid region segmentation unit, a rigidity value acquisition unit and a target region acquisition unit, wherein:
the grid area dividing unit is used for dividing the area to be operated according to a preset standard to obtain a plurality of grid areas;
the rigidity value acquisition unit is used for controlling the mechanical arm to carry out rigidity test on each grid area to obtain a plurality of rigidity values;
and the target area acquisition unit is used for taking the grid area corresponding to the rigidity value meeting the requirement of the target value in the rigidity values as the target area.
The various modules in the above described apparatus for controlling the operation of a robotic arm may be implemented in whole or in part by software, hardware, and combinations thereof. The modules can be embedded in a hardware form or independent from a processor in the computer device, and can also be stored in a memory in the computer device in a software form, so that the processor can call and execute operations corresponding to the modules.
In one embodiment, a robot 1100 is provided, which may include a server; its internal structure may be as shown in fig. 11. The robot 1100 includes a processor 1102, a memory, and a network interface 1108 connected by a system bus 1101. The processor 1102 of the robot 1100 is used to provide computing and control capabilities. The memory of the robot 1100 includes a nonvolatile storage medium 1103 and an internal memory 1107. The nonvolatile storage medium 1103 stores an operating system 1104, a computer program 1105, and a database 1106. The internal memory 1107 provides an environment for the running of the operating system 1104 and the computer program 1105 in the nonvolatile storage medium 1103. The database 1106 of the robot 1100 is used to store data. The network interface 1108 of the robot 1100 is used to communicate with an external terminal through a network connection. The computer program 1105, when executed by the processor 1102, implements a method of controlling the operation of a robot arm.
Those skilled in the art will appreciate that the structure shown in fig. 11 is a block diagram of only a portion of the structure associated with the present application, and does not limit the robots to which the present application may be applied, and a particular robot may include more or fewer components than those shown, or some components may be combined, or have a different arrangement of components.
In an embodiment, there is also provided a robot comprising a memory, a processor and a robot arm, the memory having stored therein a computer program which, when executed by the processor, performs the steps of the above method embodiments.
In one embodiment, a computer device is provided, which may be a terminal, and its internal structure diagram may be as shown in fig. 12. The computer apparatus includes a processor, a memory, an input/output interface, a communication interface, a display unit, and an input device. The processor, the memory and the input/output interface are connected by a system bus, and the communication interface, the display unit and the input device are connected by the input/output interface to the system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of an operating system and computer programs in the non-volatile storage medium. The input/output interface of the computer device is used for exchanging information between the processor and an external device. The communication interface of the computer device is used for communicating with an external terminal in a wired or wireless manner, and the wireless manner can be realized through WIFI, a mobile cellular network, NFC (near field communication) or other technologies. The computer program is executed by a processor to implement a method of controlling the operation of a robotic arm. The display unit of the computer device is used for forming a visual picture and can be a display screen, a projection device or a virtual reality imaging device. The display screen can be a liquid crystal display screen or an electronic ink display screen, and the input device of the computer equipment can be a touch layer covered on the display screen, a key, a track ball or a touch pad arranged on a shell of the computer equipment, an external keyboard, a touch pad or a mouse and the like.
Those skilled in the art will appreciate that the architecture shown in fig. 12 is merely a block diagram of a portion of the structure associated with the disclosed aspects and does not limit the computer devices to which the disclosed aspects apply; a particular computer device may include more or fewer components than those shown, combine certain components, or have a different arrangement of components.
In one embodiment, a computer device is provided, comprising a memory and a processor, the memory having a computer program stored therein, the processor implementing the following steps when executing the computer program: acquiring a depth image of a target object in a preset posture, and acquiring a target three-dimensional model of the target object according to the depth image; acquiring a plurality of target characteristic points in a target three-dimensional model, and determining a contour central line of a target object according to the plurality of target characteristic points and the depth image; determining a region to be operated of the target object based on the contour center line; controlling the mechanical arm to perform rigidity test on the area to be operated, and screening the area to be operated according to a rigidity test result to obtain a target area; and controlling the mechanical arm to perform a specified operation on the target area of the target object.
In one embodiment, the obtaining of the target three-dimensional model of the target object from the depth image involved in the execution of the computer program by the processor comprises: acquiring a standard three-dimensional model; obtaining model parameters of a three-dimensional model of the target object according to the depth image; acquiring a three-dimensional coordinate corresponding to the target object based on the model parameters; and adjusting the standard three-dimensional model based on the three-dimensional coordinates to obtain the three-dimensional model of the target object.
In one embodiment, the determining a contour centerline of the target object from the plurality of target feature points and the depth image involved in the execution of the computer program by the processor comprises: carrying out segmentation processing on the depth image to obtain a two-dimensional plane contour of the target object; and projecting the plurality of target characteristic points to an area where the two-dimensional plane contour is located, and obtaining a contour central line of the target object based on the projected target characteristic points.
In one embodiment, the target feature points involved in the execution of the computer program by the processor are feature points on a central torso of the target three-dimensional model, the target feature points including a first feature point, a second feature point, and a third feature point; obtaining a contour center line of the target object based on the projected target feature points comprises: uniformly inserting a first number of feature points between the projected first feature point and the projected second feature point; uniformly inserting a second number of feature points between the projected second feature point and the projected third feature point; and taking the line formed by connecting the inserted feature points and the projected target feature points as the contour center line of the target object.
In one embodiment, the determining the region to be operated on of the target object based on the contour centerline involved when the processor executes the computer program comprises: acquiring a starting point and an end point of a target line segment of the contour central line; respectively acquiring a first straight line which passes through a starting point and is perpendicular to a target line segment, and a second straight line which passes through an end point and is perpendicular to the target line segment; determining a first intersection point of the first straight line and the left side of the two-dimensional plane contour and a second intersection point of the first straight line and the right side of the two-dimensional plane contour; taking the midpoint of a connecting line between the first intersection point and the starting point as a first target vertex, and taking the midpoint of a connecting line between the second intersection point and the starting point as a second target vertex; determining a third intersection point of the second straight line and the left side of the two-dimensional plane contour and a fourth intersection point of the second straight line and the right side of the two-dimensional plane contour; taking the midpoint of a connecting line between the third intersection point and the end point as a third target vertex, and taking the midpoint of a connecting line between the fourth intersection point and the end point as a fourth target vertex; and taking a rectangle formed by the first target vertex, the second target vertex, the third target vertex and the fourth target vertex as a to-be-operated area of the target object.
In one embodiment, the controlling the robot arm to perform the stiffness test on the to-be-operated area and screening the to-be-operated area according to the stiffness test result by the processor when executing the computer program includes: dividing the area to be operated according to a preset standard to obtain a plurality of grid areas; controlling a mechanical arm to carry out rigidity test on each grid area to obtain a plurality of rigidity values; and taking the grid area corresponding to the rigidity value meeting the requirement of the target value in the rigidity values as the target area.
In one embodiment, a computer-readable storage medium is provided, on which a computer program is stored which, when executed by a processor, performs the steps of: acquiring a depth image of a target object in a preset posture, and acquiring a target three-dimensional model of the target object according to the depth image; acquiring a plurality of target feature points in a target three-dimensional model, and determining a contour central line of a target object according to the plurality of target feature points and the depth image; determining a region to be operated of the target object based on the contour central line; controlling the mechanical arm to perform rigidity test on the area to be operated, and screening the area to be operated according to a rigidity test result to obtain a target area; and controlling the mechanical arm to perform a specified operation on the target area of the target object.
In one embodiment, the computer program, when executed by a processor, involves obtaining a target three-dimensional model of a target object from a depth image, comprising: acquiring a standard three-dimensional model; obtaining model parameters of a three-dimensional model of the target object according to the depth image; acquiring a three-dimensional coordinate corresponding to the target object based on the model parameters; and adjusting the standard three-dimensional model based on the three-dimensional coordinates to obtain a three-dimensional model of the target object.
In one embodiment, the computer program, when executed by a processor, involves determining a contour centerline of a target object from a plurality of target feature points and a depth image, comprising: carrying out segmentation processing on the depth image to obtain a two-dimensional plane contour of the target object; and projecting the plurality of target characteristic points to an area where the two-dimensional plane contour is located, and obtaining a contour central line of the target object based on the projected target characteristic points.
In one embodiment, the target feature points involved when the computer program is executed by the processor are feature points on a central torso of the target three-dimensional model, the target feature points including a first feature point, a second feature point, and a third feature point; obtaining a contour center line of the target object based on the projected target feature points comprises: uniformly inserting a first number of feature points between the projected first feature point and the projected second feature point; uniformly inserting a second number of feature points between the projected second feature point and the projected third feature point; and taking the line formed by connecting the inserted feature points and the projected target feature points as the contour center line of the target object.
In one embodiment, the computer program, when executed by a processor, involves determining a region of interest of the target object based on the contour centerline, comprising: acquiring a starting point and an end point of a target line segment of the contour central line; respectively acquiring a first straight line which passes through a starting point and is perpendicular to a target line segment, and a second straight line which passes through an end point and is perpendicular to the target line segment; determining a first intersection point of the first straight line and the left side of the two-dimensional plane contour and a second intersection point of the first straight line and the right side of the two-dimensional plane contour; taking the midpoint of a connecting line between the first intersection point and the starting point as a first target vertex, and taking the midpoint of a connecting line between the second intersection point and the starting point as a second target vertex; determining a third intersection point of the second straight line and the left side of the two-dimensional plane contour and a fourth intersection point of the second straight line and the right side of the two-dimensional plane contour; taking the midpoint of a connecting line between the third intersection point and the end point as a third target vertex, and taking the midpoint of a connecting line between the fourth intersection point and the end point as a fourth target vertex; and taking a rectangle formed by the first target vertex, the second target vertex, the third target vertex and the fourth target vertex as a to-be-operated area of the target object.
In one embodiment, the computer program, when executed by the processor, is configured to control the robotic arm to perform a stiffness test on a region to be operated, and to screen a target region from the region to be operated according to a stiffness test result, including: dividing the area to be operated according to a preset standard to obtain a plurality of grid areas; controlling a mechanical arm to carry out rigidity test on each grid area to obtain a plurality of rigidity values; and taking the grid area corresponding to the rigidity value meeting the requirement of the target value in the rigidity values as the target area.
In one embodiment, a computer program product is provided, comprising a computer program which, when executed by a processor, performs the steps of: acquiring a depth image of a target object in a preset posture, and acquiring a target three-dimensional model of the target object according to the depth image; acquiring a plurality of target characteristic points in a target three-dimensional model, and determining a contour central line of a target object according to the plurality of target characteristic points and the depth image; determining a region to be operated of the target object based on the contour center line; controlling the mechanical arm to perform rigidity test on the area to be operated, and screening the area to be operated according to a rigidity test result to obtain a target area; and controlling the mechanical arm to perform a specified operation on the target area of the target object.
In one embodiment, the computer program, when executed by a processor, involves obtaining a target three-dimensional model of a target object from a depth image, comprising: acquiring a standard three-dimensional model; obtaining model parameters of a three-dimensional model of the target object according to the depth image; obtaining a three-dimensional coordinate corresponding to the target object based on the model parameters; and adjusting the standard three-dimensional model based on the three-dimensional coordinates to obtain the three-dimensional model of the target object.
In one embodiment, the computer program, when executed by a processor, is directed to determining a contour centerline of a target object from a plurality of target feature points and a depth image, comprising: carrying out segmentation processing on the depth image to obtain a two-dimensional plane contour of the target object; and projecting the plurality of target characteristic points to an area where the two-dimensional plane contour is located, and obtaining a contour central line of the target object based on the projected target characteristic points.
In one embodiment, when the computer program is executed by a processor, the target feature points are feature points on the central torso of the target three-dimensional model and include a first feature point, a second feature point, and a third feature point; obtaining the contour centerline of the target object from the projected target feature points comprises: uniformly inserting a first number of feature points between the projected first feature point and the projected second feature point; uniformly inserting a second number of feature points between the projected second feature point and the projected third feature point; and taking the line formed by connecting the inserted feature points and the projected target feature points as the contour centerline of the target object.
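The uniform insertion of feature points between the projected torso points can be sketched as follows; the 2D coordinates and the two point counts used in the example are illustrative assumptions:

```python
import numpy as np

def contour_centerline(p1, p2, p3, n1, n2):
    """Build a centerline polyline from three projected torso feature
    points by uniformly inserting n1 points between p1 and p2, and n2
    points between p2 and p3."""
    p1, p2, p3 = (np.asarray(p, dtype=float) for p in (p1, p2, p3))
    # np.linspace includes both endpoints; n + 2 samples yield n interior points
    seg_a = np.linspace(p1, p2, n1 + 2)
    seg_b = np.linspace(p2, p3, n2 + 2)
    # drop the duplicated middle point p2 when concatenating the segments
    return np.vstack([seg_a, seg_b[1:]])
```

Connecting these points in order gives the contour centerline used in the following steps.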
In one embodiment, when the computer program is executed by a processor, determining the region to be operated of the target object based on the contour centerline comprises: acquiring a starting point and an end point of a target line segment of the contour centerline; acquiring a first straight line that passes through the starting point and is perpendicular to the target line segment, and a second straight line that passes through the end point and is perpendicular to the target line segment; determining a first intersection point of the first straight line with the left side of the two-dimensional planar contour and a second intersection point of the first straight line with the right side of the two-dimensional planar contour; taking the midpoint of the line connecting the first intersection point and the starting point as a first target vertex, and the midpoint of the line connecting the second intersection point and the starting point as a second target vertex; determining a third intersection point of the second straight line with the left side of the two-dimensional planar contour and a fourth intersection point of the second straight line with the right side of the two-dimensional planar contour; taking the midpoint of the line connecting the third intersection point and the end point as a third target vertex, and the midpoint of the line connecting the fourth intersection point and the end point as a fourth target vertex; and taking the rectangle formed by the first target vertex, the second target vertex, the third target vertex, and the fourth target vertex as the region to be operated of the target object.
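Once the intersections of the two perpendicular lines with the contour are known, the four midpoint vertices follow directly; the sketch below assumes those intersection points have already been computed and passes them in explicitly:

```python
import numpy as np

def region_to_operate(start, end, left_a, right_a, left_b, right_b):
    """Compute the four vertices of the region to be operated.
    start/end: endpoints of the target centerline segment.
    left_a/right_a: intersections of the perpendicular through `start`
    with the left/right side of the 2D contour; left_b/right_b: the
    same for the perpendicular through `end` (assumed precomputed)."""
    start, end = np.asarray(start, float), np.asarray(end, float)
    v1 = (np.asarray(left_a, float) + start) / 2.0   # first target vertex
    v2 = (np.asarray(right_a, float) + start) / 2.0  # second target vertex
    v3 = (np.asarray(left_b, float) + end) / 2.0     # third target vertex
    v4 = (np.asarray(right_b, float) + end) / 2.0    # fourth target vertex
    return v1, v2, v3, v4
```

Taking midpoints rather than the intersections themselves keeps the rectangle inside the contour, away from the object's edges.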
In one embodiment, when the computer program is executed by a processor, controlling the robotic arm to perform a stiffness test on the region to be operated and screening the region to be operated according to the stiffness test result to obtain the target region comprises: dividing the region to be operated according to a preset standard to obtain a plurality of grid regions; controlling the robotic arm to perform a stiffness test on each grid region to obtain a plurality of stiffness values; and taking the grid region whose stiffness value meets the target-value requirement as the target region.
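The grid division and stiffness screening can be sketched as below; the row/column counts and the tolerance-based acceptance rule are assumptions standing in for the patent's unspecified "preset standard" and "target value requirement":

```python
def split_into_grid(x0, y0, width, height, rows, cols):
    """Divide the rectangular region to be operated into rows x cols
    grid cells, returned as (x, y, w, h) tuples."""
    w, h = width / cols, height / rows
    return [(x0 + c * w, y0 + r * h, w, h)
            for r in range(rows) for c in range(cols)]

def screen_target_cells(stiffness_by_cell, target, tolerance):
    """Keep the grid cells whose measured stiffness is within
    +/- tolerance of the target value; the exact acceptance rule is an
    assumption, since the patent only requires that the value 'meet
    the target value requirement'."""
    return [cell for cell, k in stiffness_by_cell.items()
            if abs(k - target) <= tolerance]
```

In use, the robotic arm would probe each cell from `split_into_grid`, record the measured stiffness per cell, and pass the results to `screen_target_cells` to select the target region.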
It will be understood by those skilled in the art that all or part of the processes of the methods of the above embodiments may be implemented by a computer program instructing the relevant hardware. The computer program may be stored in a non-volatile computer-readable storage medium and, when executed, may include the processes of the embodiments of the methods described above. Any reference to memory, database, or other medium used in the embodiments provided herein may include at least one of non-volatile and volatile memory. Non-volatile memory may include read-only memory (ROM), magnetic tape, floppy disk, flash memory, optical memory, high-density embedded non-volatile memory, resistive random access memory (ReRAM), magnetoresistive random access memory (MRAM), ferroelectric random access memory (FRAM), phase-change memory (PCM), graphene memory, and the like. Volatile memory may include random access memory (RAM), external cache memory, and the like. By way of illustration and not limitation, RAM can take many forms, such as static random access memory (SRAM) or dynamic random access memory (DRAM). The databases referred to in the embodiments provided herein may include at least one of relational and non-relational databases; non-relational databases may include, but are not limited to, blockchain-based distributed databases and the like. The processors referred to in the embodiments provided herein may be general-purpose processors, central processing units, graphics processors, digital signal processors, programmable logic devices, data processing logic devices based on quantum computing, and the like, without limitation.
The technical features of the above embodiments may be combined arbitrarily. For brevity, not all possible combinations of these technical features are described; nevertheless, as long as a combination contains no contradiction, it should be considered within the scope of this specification.
The above examples express only several embodiments of the present application, and while their description is relatively specific and detailed, it should not be construed as limiting the scope of the application. It should be noted that those skilled in the art may make several variations and improvements without departing from the concept of the present application, all of which fall within its scope of protection. Therefore, the protection scope of the present application shall be subject to the appended claims.

Claims (10)

1. A method of controlling operation of a robotic arm, the method comprising:
acquiring a depth image of a target object in a preset posture, and obtaining a target three-dimensional model of the target object from the depth image;
acquiring a plurality of target feature points in the target three-dimensional model, and determining a contour centerline of the target object from the plurality of target feature points and the depth image;
determining a region to be operated of the target object based on the contour centerline;
controlling the robotic arm to perform a stiffness test on the region to be operated, and screening the region to be operated according to a stiffness test result to obtain a target region;
and controlling the robotic arm to perform a specified operation on the target region of the target object.
2. The method of claim 1, wherein obtaining the target three-dimensional model of the target object from the depth image comprises:
acquiring a standard three-dimensional model;
obtaining model parameters of a three-dimensional model of the target object according to the depth image;
acquiring three-dimensional coordinates corresponding to the target object based on the model parameters;
and adjusting the standard three-dimensional model based on the three-dimensional coordinates to obtain a three-dimensional model of the target object.
3. The method of claim 1, wherein the determining the contour centerline of the target object from the plurality of target feature points and the depth image comprises:
segmenting the depth image to obtain a two-dimensional planar contour of the target object;
and projecting the plurality of target feature points onto the region where the two-dimensional planar contour lies, and obtaining the contour centerline of the target object based on the projected target feature points.
4. The method of claim 3, wherein the target feature points are feature points on a central torso of the target three-dimensional model, the target feature points comprising a first feature point, a second feature point, and a third feature point; and the obtaining the contour centerline of the target object based on the projected target feature points comprises:
uniformly inserting a first number of feature points between the projected first feature point and the projected second feature point;
uniformly inserting a second number of feature points between the projected second feature point and the projected third feature point;
and taking the line formed by connecting the inserted feature points and the projected target feature points as the contour centerline of the target object.
5. The method of claim 3, wherein the determining the region to be operated of the target object based on the contour centerline comprises:
acquiring a starting point and an end point of a target line segment of the contour centerline;
acquiring a first straight line that passes through the starting point and is perpendicular to the target line segment, and a second straight line that passes through the end point and is perpendicular to the target line segment;
determining a first intersection point of the first straight line with the left side of the two-dimensional planar contour and a second intersection point of the first straight line with the right side of the two-dimensional planar contour;
taking the midpoint of the line connecting the first intersection point and the starting point as a first target vertex, and the midpoint of the line connecting the second intersection point and the starting point as a second target vertex;
determining a third intersection point of the second straight line with the left side of the two-dimensional planar contour and a fourth intersection point of the second straight line with the right side of the two-dimensional planar contour;
taking the midpoint of the line connecting the third intersection point and the end point as a third target vertex, and the midpoint of the line connecting the fourth intersection point and the end point as a fourth target vertex;
and taking the rectangle formed by the first target vertex, the second target vertex, the third target vertex, and the fourth target vertex as the region to be operated of the target object.
6. The method of claim 1, wherein the controlling the robotic arm to perform a stiffness test on the region to be operated and screening the region to be operated according to a stiffness test result to obtain a target region comprises:
dividing the region to be operated according to a preset standard to obtain a plurality of grid regions;
controlling the robotic arm to perform a stiffness test on each grid region to obtain a plurality of stiffness values;
and taking the grid region corresponding to a stiffness value, among the plurality of stiffness values, that meets the target-value requirement as the target region.
7. A robot comprising a memory, a processor and a robotic arm, the memory storing a computer program, characterized in that the processor implements the steps of the method of any one of claims 1 to 6 when executing the computer program.
8. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor, when executing the computer program, implements the steps of the method of any of claims 1 to 6.
9. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method of any one of claims 1 to 6.
10. A computer program product comprising a computer program, characterized in that the computer program implements the steps of the method of any one of claims 1 to 6 when executed by a processor.
CN202211582270.9A 2022-12-09 2022-12-09 Method, robot, device, medium and product for controlling operation of a robot arm Pending CN115972202A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211582270.9A CN115972202A (en) 2022-12-09 2022-12-09 Method, robot, device, medium and product for controlling operation of a robot arm


Publications (1)

Publication Number Publication Date
CN115972202A (en) 2023-04-18

Family

ID=85960379




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination