CN110091342B - Vehicle condition detection method and device and detection robot - Google Patents
- Publication number
- CN110091342B (application CN201910417671.0A)
- Authority
- CN
- China
- Prior art keywords
- vehicle
- detected
- detection
- mechanical arm
- determining
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J11/00—Manipulators not otherwise provided for
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60S—SERVICING, CLEANING, REPAIRING, SUPPORTING, LIFTING, OR MANOEUVRING OF VEHICLES, NOT OTHERWISE PROVIDED FOR
- B60S5/00—Servicing, maintaining, repairing, or refitting of vehicles
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01M—TESTING STATIC OR DYNAMIC BALANCE OF MACHINES OR STRUCTURES; TESTING OF STRUCTURES OR APPARATUS, NOT OTHERWISE PROVIDED FOR
- G01M17/00—Testing of vehicles
- G01M17/007—Wheeled or endless-tracked vehicles
Landscapes
- Engineering & Computer Science (AREA)
- Mechanical Engineering (AREA)
- Robotics (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Manipulator (AREA)
- Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)
Abstract
The invention provides a vehicle condition detection method, a vehicle condition detection device, and a detection robot. The method is applied to a controller of the detection robot; the detection robot further comprises a movable device and a mechanical arm, where the mechanical arm is carried on the movable device and is provided with detection equipment. The method comprises the following steps: receiving a vehicle condition detection instruction that includes position information of a vehicle to be detected; collecting image data of the vehicle to be detected; determining a walking path of the movable device and a walking path of the mechanical arm according to the position information and the image data; and controlling the movable device to move along its walking path while controlling the mechanical arm to act according to its walking path, so as to acquire vehicle condition detection data of the vehicle to be detected through the detection equipment. The invention is suitable for detecting vehicle conditions in stores with complex, changeable environments and a wide variety of vehicle types.
Description
Technical Field
The present invention relates to the field of vehicle detection technologies, and in particular to a vehicle condition detection method and device, and to a detection robot.
Background
In the related art, vehicle condition detection generally relies on a fixed detection working area equipped with professional safety guardrails. Such a setup is better suited to environments such as factory buildings and production workshops: it places high requirements on the detection environment and can only detect one or a few fixed vehicle types. It is therefore difficult to apply to vehicle condition detection in stores, where the environment is complex and changeable and the vehicle types are diverse.
Disclosure of Invention
The invention aims to provide a vehicle condition detection method, a vehicle condition detection device, and a detection robot that are suitable for detecting vehicle conditions in stores with complex, changeable environments and diverse vehicle types.
In order to achieve the above object, the technical solutions adopted by the embodiments of the invention are as follows:
In a first aspect, an embodiment of the present invention provides a vehicle condition detection method applied to a controller of a detection robot; the detection robot further comprises a movable device and a mechanical arm, where the mechanical arm is carried on the movable device and is provided with detection equipment. The method comprises the following steps: receiving a vehicle condition detection instruction, where the instruction comprises position information of a vehicle to be detected; collecting image data of the vehicle to be detected; determining a walking path of the movable device and a walking path of the mechanical arm according to the position information and the image data; and controlling the movable device to move along its walking path while controlling the mechanical arm to act according to its walking path, so as to acquire vehicle condition detection data of the vehicle to be detected through the detection equipment.
In some embodiments, before the step of acquiring image data of the vehicle to be detected, the method further includes: determining a position reference point of the vehicle to be detected according to the position information; and controlling the movable device to move from a preset starting point to the position reference point.
In some embodiments, map data of the current detection environment is stored in the controller in advance, and the position information comprises a parking space number or a detection area. The step of determining the position reference point of the vehicle to be detected according to the position information comprises: determining, from the map data, the position reference point of the parking space corresponding to the position information; or determining, from the map data, the area range of the parking space corresponding to the position information, identifying the position of a reserved reference object within that area range through the image pickup device, and determining the position reference point according to the position of the reserved reference object.
In some embodiments, the step of acquiring image data of the vehicle to be detected includes: collecting a front face image of the vehicle to be detected at the position reference point of the vehicle to be detected; controlling the movable device to move from the position reference point to a side shooting point of the corresponding parking space; and acquiring a side image of the vehicle to be detected at the side shooting point.
In some embodiments, the step of determining the walking path of the movable device and the walking path of the mechanical arm according to the position information and the image data includes: inputting the image data into a pre-trained type recognition model and outputting the type of the vehicle to be detected; acquiring a pre-stored topological distribution of vehicle components corresponding to that type; determining the initial positions of the parts to be detected on the vehicle according to the topological distribution; and determining the walking path of the movable device and the walking path of the mechanical arm according to the position information and the initial positions of the parts to be detected.
In some embodiments, the step of determining the walking path of the movable device includes: determining the distance between the walking path and the vehicle to be detected according to the initial positions of the parts to be detected and the arm length of the mechanical arm; and determining the walking path of the movable device based on that distance, the direction of the preset travel route, and the position information of the vehicle to be detected.
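The geometric relationship described above (a standoff distance limited by the arm length and the heights of the parts to be detected) can be sketched as follows. This is an illustrative simplification under a 2-D reach model, not the patent's algorithm; the function name, the clearance margin, and the reach geometry are all assumptions:

```python
import math

def max_standoff(arm_length, arm_base_height, part_heights, clearance=0.1):
    """Largest lateral distance (m) from the vehicle side at which every
    part is still inside the arm's reach, under a simplified 2-D model:
    the arm reach is the hypotenuse over horizontal standoff and height
    difference between the arm base and the part."""
    reaches = []
    for h in part_heights:
        dh = h - arm_base_height
        if abs(dh) > arm_length:
            raise ValueError(f"part at height {h} is out of reach")
        reaches.append(math.sqrt(arm_length ** 2 - dh ** 2))
    # the hardest-to-reach part limits the standoff; keep a safety clearance
    return max(min(reaches) - clearance, 0.0)
```

The resulting standoff, together with the preset travel-route direction and the vehicle's position, would then fix the walking path of the movable device.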
In some embodiments, the step of determining the walking path of the mechanical arm includes: acquiring a 3D scanning image of the vehicle to be detected through a 3D camera device; inputting the 3D scanning image and the initial positions of the parts to be detected into a pre-trained instance segmentation model, and outputting a 3D mesh model of each part to be detected; collecting detection points from the 3D mesh model according to a preset sampling rule and calculating the normal vector of each detection point; and determining the walking path of the mechanical arm according to the detection points, their normal vectors, and a preset mechanical arm control algorithm.
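The detection-point sampling and normal-vector computation described above can be illustrated on a minimal triangle mesh. The patent does not specify the sampling rule, so the every-k-th-face rule, the use of face centroids as detection points, and the function names here are hypothetical:

```python
def face_normal(v0, v1, v2):
    """Unit normal of a triangle via the cross product of two edges."""
    ax, ay, az = (v1[i] - v0[i] for i in range(3))
    bx, by, bz = (v2[i] - v0[i] for i in range(3))
    nx, ny, nz = ay * bz - az * by, az * bx - ax * bz, ax * by - ay * bx
    norm = (nx * nx + ny * ny + nz * nz) ** 0.5
    return (nx / norm, ny / norm, nz / norm)

def sample_detection_points(vertices, faces, step=2):
    """Take every `step`-th face of the mesh; each detection point is the
    face centroid, and the probe approach direction is the face normal."""
    points = []
    for i in range(0, len(faces), step):
        a, b, c = (vertices[j] for j in faces[i])
        centroid = tuple((a[k] + b[k] + c[k]) / 3 for k in range(3))
        points.append((centroid, face_normal(a, b, c)))
    return points
```

Each (centroid, normal) pair could then be converted into an arm pose by the (unspecified) mechanical arm control algorithm, approaching the surface along the negated normal.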
In some embodiments, the above method further comprises: if there are a plurality of vehicles to be detected, determining a position reference point of the currently detected vehicle according to the position information of the vehicles to be detected and the map data of the current detection environment; controlling the movable device to move to that position reference point; executing the step of collecting image data for the currently detected vehicle; and, after the vehicle condition detection data of the currently detected vehicle has been acquired, determining the next currently detected vehicle from the remaining vehicles according to their position information and continuing from the step of determining the position reference point of the currently detected vehicle, until all vehicles to be detected have been detected.
In a second aspect, an embodiment of the present invention further provides a vehicle condition detection device disposed in a controller of a detection robot; the detection robot further comprises a movable device and a mechanical arm, where the mechanical arm is carried on the movable device and is provided with detection equipment. The device comprises: an instruction receiving module, used for receiving a vehicle condition detection instruction that comprises position information of a vehicle to be detected; a data acquisition module, used for acquiring image data of the vehicle to be detected; a path determining module, used for determining a walking path of the movable device and a walking path of the mechanical arm according to the position information and the image data; and a control module, used for controlling the movable device to move along its walking path and controlling the mechanical arm to act according to its walking path, so as to acquire vehicle condition detection data of the vehicle to be detected through the detection equipment.
In a third aspect, an embodiment of the present invention provides a detection robot including a controller, a movable device, a mechanical arm, and a detection apparatus; the vehicle condition detection device is arranged in the controller; the mechanical arm is carried on the movable device; the mechanical arm is provided with detection equipment.
In the above manner, after the detection robot receives the vehicle condition detection instruction, the walking path of the movable device and the walking path of the mechanical arm can be determined according to the position information of the vehicle to be detected carried in the instruction and the acquired image data; the movable device of the robot is then controlled to move along its walking path, and the mechanical arm is controlled to act according to its walking path, so that vehicle condition detection data of the vehicle to be detected is acquired through the detection equipment. Because the detection robot plans the walking paths of the movable device and of the mechanical arm based on the position information and image data of the vehicle to be detected, no professional detection working area needs to be set up. The approach is therefore applicable to vehicle condition detection in stores with complex, changeable environments and diverse vehicle types, and is particularly suitable for used-vehicle condition detection in stores.
Additional features and advantages of embodiments of the invention will be set forth in the description which follows, or in part will be obvious from the description, or may be learned by practice of the embodiments of the invention.
In order to make the above objects, features and advantages of the present invention more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed in their description are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention, and a person skilled in the art can obtain other drawings from them without inventive effort.
Fig. 1 is a schematic structural diagram of a detection robot according to an embodiment of the present invention;
FIG. 2 is a flowchart of a method for detecting a vehicle condition according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a current detection environment according to an embodiment of the present invention;
Fig. 4 is a schematic diagram of a moving path of a detection robot for acquiring image data of a vehicle to be detected according to an embodiment of the present invention;
FIG. 5 is a flowchart of another method for detecting a vehicle condition according to an embodiment of the present invention;
fig. 6 is a schematic structural diagram of another inspection robot according to an embodiment of the present invention;
FIG. 7 is a schematic diagram of a walking path according to an embodiment of the present invention;
FIG. 8 is a schematic view of another walking path according to an embodiment of the present invention;
fig. 9 is a schematic structural diagram of a vehicle condition detecting device according to an embodiment of the present invention;
Fig. 10 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
The technical solutions of the present invention will be clearly and completely described in connection with the embodiments, and it is apparent that the described embodiments are some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
In order to adapt to the service scenario of used-vehicle detection, this embodiment provides a vehicle condition detection approach suitable for multiple environments, such as re-inspection points, sales stores, and after-sales stores, and adaptable to multiple vehicle types. To this end, vehicle condition detection is performed by a detection robot. For ease of understanding, an inspection robot is first described: as shown in fig. 1, the inspection robot 10 includes at least a controller 100, a movable device 101, and a robot arm 102, where the robot arm is carried on the movable device and is provided with a detection device 103. The detection equipment can be chosen according to the specific detection requirements, for example a paint film color difference detector, an orange peel detector, and the like. The mechanical arm may be a six-axis mechanical arm, and the movable device may be an AGV (Automated Guided Vehicle), an AMR (Autonomous Mobile Robot), or the like. In addition, the detection robot may further be provided with a camera, a 3D scanning device, and so on.
Based on the inspection robot, referring to a flowchart of a vehicle condition inspection method shown in fig. 2, the method is applied to a controller of the inspection robot; the method comprises the following steps:
step S202, receiving a vehicle condition detection instruction; the vehicle condition detection instruction comprises position information of a vehicle to be detected;
The vehicle condition detection instruction can be issued by a worker in several ways. The detection robot may be provided with human-machine interaction equipment such as a touch screen, keyboard, and display screen, through which the worker issues the instruction. Alternatively, the detection robot may be communicatively connected to a terminal device, and the worker sends the instruction to the detection robot through the terminal device. The detection robot may also be communicatively connected to a control platform, through which the worker sends the instruction to the detection robot.
The position information of the vehicle to be detected may also take various forms. For example, a plurality of parking spaces may be defined in advance and numbered, in which case the position information of the vehicle to be detected may be the number of its parking space. Alternatively, based on a map of the current detection environment, the worker can define an area on the map; the vehicles within that area are the vehicles whose condition needs to be detected, and the area itself serves as the position information.
Step S204, collecting image data of a vehicle to be detected;
If the detection robot is provided with a camera device, the image data of the vehicle to be detected can be acquired by that camera device; alternatively, the image data can be collected by other cameras installed in the current detection environment and sent to the detection robot.
Step S206, determining a walking path of the movable device and a walking path of the mechanical arm according to the position information and the image data;
Step S208, the movable device is controlled to move along the walking path, and the mechanical arm is controlled to act according to the walking path so as to acquire vehicle condition detection data of the vehicle to be detected through the detection equipment.
The walking path of the movable device can be understood as the path the detection robot travels on the ground of the current detection environment. From the position information the controller can determine where each vehicle to be detected is located; from the image data it can determine the parking direction of each vehicle, the specific positions of its components, and so on, and thereby determine the walking path of the detection robot from its current position. The walking path generally needs to cover the positions of all parts to be detected on the vehicle, and, for energy saving, the shortest such path can be planned.
The walking path of the mechanical arm can be understood as the path along which the mechanical arm on the detection robot drives the detection equipment through three-dimensional space. From the image data the controller can determine the parking direction of each vehicle to be detected and the specific position of each component, determine one or more detection points on each vehicle based on this information, and then determine the walking path of the mechanical arm from the positions of those detection points. The walking path of the mechanical arm generally needs to cover all positions on the vehicle where detection points must be inspected.
After the controller determines the walking path of the movable device and the walking path of the mechanical arm, it can control the movable device to walk according to the former while controlling the action of the mechanical arm according to the latter; during the action of the mechanical arm, the detection equipment is started to collect the relevant data.
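Steps S202 to S208 can be summarised as a control skeleton like the following. All classes, helper names, and interfaces here are hypothetical placeholders; the patent does not specify the planner or motion APIs:

```python
from dataclasses import dataclass

@dataclass
class DetectionInstruction:
    parking_space: int  # position information of the vehicle to be detected

class VehicleConditionController:
    """Hypothetical controller skeleton for steps S202-S208."""

    def __init__(self, planner, base, arm, probe):
        # planner: (position info, image data) -> (base path, arm path)
        self.planner, self.base, self.arm, self.probe = planner, base, arm, probe

    def handle(self, instruction, image_data):
        # S206: plan both walking paths from position info + image data
        base_path, arm_path = self.planner(instruction.parking_space, image_data)
        readings = []
        # S208: drive the movable device along its walking path,
        # then move the arm through its poses, reading the probe at each
        for waypoint in base_path:
            self.base.move_to(waypoint)
        for pose in arm_path:
            self.arm.move_to(pose)
            readings.append(self.probe.read())
        return readings
```

In a real system the base and arm motions would be interleaved and synchronised rather than strictly sequential; the skeleton only shows the data flow between the four steps.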
In the above manner, after the detection robot receives the vehicle condition detection instruction, it can determine the walking path of the movable device and the walking path of the mechanical arm according to the position information of the vehicle to be detected carried in the instruction and the acquired image data; it then controls the movable device to move along the former and the mechanical arm to act according to the latter, so as to acquire vehicle condition detection data of the vehicle to be detected through the detection equipment. Because the detection robot plans both walking paths based on the position and image data of the vehicle to be detected, no professional detection working area needs to be set up, and the approach is therefore suitable for vehicle condition detection in stores with complex, changeable environments and diverse vehicle types.
In order to further improve the accuracy with which the detection robot determines the walking path of the movable device and the walking path of the mechanical arm, the robot needs to acquire accurate and clear image data. To this end, this embodiment also provides another vehicle condition detection method.
First, a plurality of parking spaces can be planned in advance in the current detection environment, with a position reference point set at one end of each parking space. Fig. 3 shows an example in which 10 parking spaces are planned in the detection environment, with a corresponding position reference point at the left end of each parking space. The position information of each parking space and the coordinates of each position reference point are stored in the controller of the detection robot. The detection robot determines the position reference point of the vehicle to be detected according to the position information in the vehicle condition detection instruction, and then controls the movable device to move from a preset starting point (such as the charging pile of the detection robot) to that position reference point; after reaching the position reference point, it collects image data of the vehicle parked in the corresponding parking space. Setting a position reference point for each vehicle to be detected helps the detection robot acquire accurate and clear image data, which in turn improves the accuracy of the subsequently determined walking paths of the movable device and the mechanical arm.
Map data of the current detection environment is also stored in the controller in advance; it can be obtained by scanning the current detection environment with a laser radar. Depending on the detection environment, the accuracy of the map data may be high (for example, accurate to 0.01 m) or low (for example, hardly better than 0.05 m). The position reference point of the vehicle to be detected can therefore be determined in different ways for different accuracies, as described in detail below.
As described in the above embodiment, the position information in the vehicle condition detection instruction may be a parking space number input by a worker, or a detection area defined by the worker on the pre-stored map data. If the accuracy of the map data stored in the detection robot is high, the position reference point of the parking space corresponding to the position information can be determined directly from the map data, and the inspection robot can reach it accurately. The path of the detection robot from the starting point to the position reference point can be generated by a preset path planning algorithm using a shortest-distance cost function together with the physical constraints of the current detection environment (such as inherent obstacles and channels in the environment).
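The shortest-distance planning under physical constraints could, for instance, be realised with a breadth-first search on an occupancy grid of the environment. This is one possible sketch, not the patent's (unspecified) planning algorithm:

```python
from collections import deque

def shortest_path(grid, start, goal):
    """Breadth-first search on an occupancy grid (0 = free, 1 = obstacle).
    Returns the list of cells from start to goal, or None if unreachable.
    On a uniform grid, BFS yields a minimum-step path, which serves as a
    simple shortest-distance cost function."""
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:
            path = []
            while cell is not None:       # walk back through predecessors
                path.append(cell)
                cell = prev[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] == 0 and (nr, nc) not in prev:
                prev[(nr, nc)] = cell
                queue.append((nr, nc))
    return None
```

Fixed obstacles and channels in the store would be marked as occupied cells; a weighted planner such as A* would be the natural extension for non-uniform costs.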
When the accuracy of the stored map data is low, a position reference point determined only from the map data may lead the inspection robot either to the reference point itself or merely to a peripheral area around it. If it ends up in the peripheral area, it may be difficult to acquire accurate image data of the vehicle to be inspected: the acquired image may be incomplete or skewed, or the vehicle may occupy only a small part of the image. To address this, and with continued reference to fig. 3, a reserved reference object may be placed at a designated location relative to the position reference point, for example directly in front of it, or alternatively at the position reference point itself; the reserved reference object may have a particular shape or color.
Based on the reserved reference object and the map data, the area range of the parking space corresponding to the position information can first be determined from the map data; the position of the reserved reference object is then identified within that range, either from image data collected by the camera device or from point cloud data collected by laser SLAM (Simultaneous Localization and Mapping), and the position reference point is determined according to the position of the reserved reference object. After the detection robot reaches the area range, it can locate the reserved reference object within it using image recognition, neural network recognition, or similar methods, and then determine the position reference point from the preset positional relationship between the reserved reference object and the position reference point.
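The reference-object localisation step can be illustrated with a toy example: find a marker of a distinctive pixel value in a 2-D image and apply the pre-configured marker-to-reference-point offset. The recognition method, marker encoding, and function names here are all assumptions standing in for the image or point-cloud recognition described above:

```python
def find_marker_centroid(image, marker_value):
    """Centroid (row, col) of all pixels equal to marker_value in a 2-D
    image (list of rows), or None if the marker is not present."""
    hits = [(r, c) for r, row in enumerate(image)
                   for c, v in enumerate(row) if v == marker_value]
    if not hits:
        return None
    return (sum(r for r, _ in hits) / len(hits),
            sum(c for _, c in hits) / len(hits))

def reference_point_from_marker(marker_pos, offset):
    """Apply the preset marker-to-reference-point positional relationship."""
    return (marker_pos[0] + offset[0], marker_pos[1] + offset[1])
```

A production system would instead use a trained detector or point-cloud registration, but the final step (offsetting from the detected marker to the reference point) would be the same.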
It should be noted that, if the position information in the vehicle condition detection instruction covers a plurality of parking spaces, the detection robot may select the position reference point of any one of them as the first to reach. As an example, referring to fig. 3, suppose the position information covers parking spaces 1 to 5 and the detection robot is initially parked at the starting point. The robot may first reach the position reference point of any one of parking spaces 1 to 5, detect the vehicle there, then move on to the position reference point of the next parking space, and so on until all vehicles to be detected in parking spaces 1 to 5 have been detected. Of course, for energy saving, the walking route can be ordered so that the robot first reaches the position reference point of the parking space closest to its current position and then the remaining reference points in turn. For example, starting from the starting point in fig. 3, the robot can first reach the position reference point of parking space 1 and then proceed through parking spaces 2, 3, 4, and 5 in order to complete the vehicle detection task.
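The nearest-first visiting order described above corresponds to a greedy nearest-neighbour ordering of the position reference points, which might be sketched as follows (illustrative only; coordinates and space IDs are placeholders):

```python
import math

def visit_order(start, reference_points):
    """Greedy nearest-first ordering of parking-space reference points.

    start: (x, y) of the robot's current position.
    reference_points: {space_id: (x, y)} of each reference point.
    Returns the list of space IDs in visiting order."""
    remaining = dict(reference_points)
    pos, order = start, []
    while remaining:
        # always go to the closest not-yet-visited reference point
        space = min(remaining, key=lambda s: math.dist(pos, remaining[s]))
        pos = remaining.pop(space)
        order.append(space)
    return order
```

Greedy ordering is not globally optimal in general, but for reference points laid out along an aisle (as in fig. 3) it reproduces the 1, 2, 3, 4, 5 sequence described in the text.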
Based on the above description, the step of the inspection robot collecting image data of the vehicle to be inspected can be achieved by the following steps 10 to 14;
step 10, acquiring a front face image of a vehicle to be detected on a position reference point of the vehicle to be detected;
After the detection robot reaches the position reference point, it may first confirm whether a vehicle to be detected is actually parked in the corresponding parking space. Specifically, an image may be collected by the camera device on the detection robot or by a camera installed in the current detection environment, and the confirmation can be made in combination with the vehicle information stored on the control platform.
Step 12, controlling the movable device to move from a position reference point to a side shooting point of a parking space corresponding to the position reference point;
and 14, acquiring a side image of the vehicle to be detected at the side shooting point.
In combination with fig. 4, after the detection robot reaches the position reference point of parking space n, the front image of the vehicle parked there can be collected through the camera device on the robot; the robot then follows dotted-line path 1 in fig. 4 to the side shooting point of parking space n and collects the side image of the vehicle through the camera device. If the front and side images already satisfy the actual detection requirement, shooting can stop here. Alternatively, if a back image of the vehicle is also required, the detection robot can continue along dotted-line path 2 in fig. 4 to the back shooting point of parking space n and collect the back image of the vehicle through the camera device.
The positions of the parking space, the position reference point, the side shooting point and the rear shooting point (if any) may be preset, so that the inspection robot can calculate path 1 and path 2 in fig. 4 from the position coordinates of the position reference point, the side shooting point and the rear shooting point.
This manner of collecting the image data of the vehicle can obtain vehicle images from multiple angles, and the images are accurate and clear, which improves the accuracy of the subsequent vehicle type recognition and component recognition.
The embodiment also provides another vehicle condition detection method; on the basis of acquiring the position information and the image data of the vehicle to be detected in the above embodiment, the present embodiment focuses on a specific implementation manner of determining the travel path of the movable device and the travel path of the mechanical arm.
After the image data of the vehicle to be detected is acquired, the image data can first be input into a pre-trained type recognition model, which outputs the type of the vehicle to be detected; a pre-stored topological distribution of vehicle components corresponding to that type is then acquired; the initial position of the component to be detected on the vehicle is determined according to the topological distribution of the vehicle components; and finally, the walking path of the movable device and the walking path of the mechanical arm are determined according to the position information and the initial position of the component to be detected.
The type recognition model can be obtained by training a neural network on a large number of sample images; the model may pre-divide the types of vehicles into, for example, hatchbacks (two-box vehicles), sedans (three-box vehicles), SUVs (sport utility vehicles), and the like. In general, for vehicles of the same type but of different brands and models, the topological distribution of the vehicle components is the same; since the types of vehicles are limited, the topological distributions of the vehicle components of the various vehicle types can be collected in advance. The topological distribution of the vehicle components can be understood as the shape of each covering part on the outside of the vehicle, the positional relationships among the covering parts, and the like. The covering parts typically include a front bumper, a rear bumper, a left front fender, a right front fender, a left rear fender, a right rear fender, a left front door, a left rear door, a right front door, a right rear door, an engine compartment cover, a trunk lid, a roof, a left A-pillar, a right A-pillar, a left C-pillar, a right C-pillar, a left cross member, a right cross member, and the like.
In addition, for each type of vehicle, the components to be detected, such as the front bumper, the left front door and the right rear door, may be specified in advance in combination with the actual detection requirements. After the topological distribution of the vehicle components for the vehicle type is obtained, the initial position of each component to be detected can be located in the image data based on that topological distribution; since a component to be detected has a certain area, its initial position is to be understood as the position region in which the component lies. In this manner, coarse positioning of the components to be detected can be achieved for various types of vehicles, making the method suitable for detecting a variety of vehicle models.
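The coarse positioning described above — type recognition followed by a lookup of the pre-stored component topology — can be sketched as below. The lookup tables, region coordinates and part list are invented placeholders for the pre-collected topological distributions, not data from the disclosure:

```python
# Hypothetical pre-stored topologies: for each vehicle type, a coarse
# region (x, y, width, height) per covering part, as located in the image.
PART_TOPOLOGY = {
    "sedan": {"front bumper": (0.0, 0.0, 1.8, 0.5),
              "left front door": (1.9, 0.2, 1.2, 1.0)},
    "suv":   {"front bumper": (0.0, 0.0, 1.9, 0.6),
              "left front door": (2.0, 0.3, 1.3, 1.1)},
}
# Components specified in advance according to the detection requirements.
PARTS_TO_DETECT = ["front bumper", "left front door"]

def coarse_part_regions(vehicle_type):
    """Return the initial position region of each component to be detected,
    looked up from the topology of the recognized vehicle type."""
    topology = PART_TOPOLOGY[vehicle_type]
    return {part: topology[part] for part in PARTS_TO_DETECT}

print(coarse_part_regions("suv")["front bumper"])  # (0.0, 0.0, 1.9, 0.6)
```

The point of the sketch is the data flow: one topology table per vehicle *type* suffices, because same-type vehicles share the component layout regardless of brand.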
Based on the above description, reference is made to the flowchart of another vehicle condition detection method shown in fig. 5. Before that, this embodiment also provides another specific structure of the inspection robot: referring to fig. 6, on the basis of the inspection robot shown in fig. 1, the inspection robot further includes a 2D (2-dimensional) image pickup device 104 and a 3D (3-dimensional) image pickup device 105; the 2D image pickup device may specifically be a 2D camera with a large field of view, and the 3D image pickup device may specifically be a binocular camera, an RGBD camera, or the like. A mounting plate, actuator or clamp may be provided at the sixth-axis wrist of the mechanical arm of the inspection robot to secure the detection equipment, and if the detection equipment performs contact-type detection, the end of the detection equipment may be configured with a floating or elastic unit to support end force feedback control. In addition, the inspection robot may further include a man-machine interaction device 106, a power management module 107, a safety pre-warning module 108, and the like.
When the 2D image pickup device 104 and the 3D image pickup device 105 are both mounted on the mechanical arm, the two devices need to be calibrated in advance so that the images they acquire correspond to each other. In another embodiment, only the 3D image pickup device may be mounted on the arm of the inspection robot, with the 2D image also acquired by the 3D image pickup device; in this case, the imaging field of view of the 3D image pickup device needs to be enlarged. The 2D image and the 3D point cloud data (namely the 3D scanning image) are calibrated through camera parameters, so that a mapping relationship exists between the 2D image and the 3D point cloud data.
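The calibrated mapping between the 3D point cloud and the 2D image can be illustrated with a standard pinhole projection; the intrinsic parameters below are assumed values for illustration, not those of any camera in the disclosure:

```python
def project_to_pixel(point_3d, fx, fy, cx, cy):
    """Project a 3D camera-frame point to 2D pixel coordinates.

    This is the pinhole-model mapping that camera calibration establishes
    between the 3D point cloud and the 2D image: (fx, fy) are the focal
    lengths in pixels and (cx, cy) the principal point.
    """
    x, y, z = point_3d
    return (fx * x / z + cx, fy * y / z + cy)

# A point 2 m in front of the camera, slightly right of and above centre.
u, v = project_to_pixel((0.1, -0.05, 2.0), fx=800.0, fy=800.0, cx=640.0, cy=360.0)
print(round(u), round(v))  # 680 340
```

With this mapping, a pixel labelled by 2D recognition can be associated with its 3D point, and vice versa, which is what the joint calibration of the two devices provides.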
The detection equipment can be in communication connection with the controller in a communication mode such as Bluetooth; the man-machine interaction device can be in communication connection with the controller through a TCP/IP (Transmission Control Protocol/Internet Protocol, transmission control protocol/Internet interconnection protocol) protocol; the 2D camera device, the 3D camera device, the movable device, the mechanical arm and the external mobile terminal can be in communication connection with the controller through an EtherNet/IP (EtherNet/Internet Protocol, internet protocol based on Ethernet) protocol; the external control platform may be communicatively coupled to the controller via WiFi.
Based on the detection robot, the following steps of the method are executed:
Step S502, receiving a vehicle condition detection instruction; the vehicle condition detection instruction comprises position information of a vehicle to be detected;
Step S504, determining a position reference point of the vehicle to be detected according to the position information;
Step S506, controlling the movable device to move from the preset starting point to the position reference point, and acquiring a front face image of the vehicle to be detected at the position reference point of the vehicle to be detected;
Step S508, controlling the movable device to move from the position reference point to the side shooting point of the parking space corresponding to the position reference point;
Step S510, acquiring a side image of the vehicle to be detected at the side shooting point.
Step S512, inputting the image data into a pre-trained type recognition model, and outputting the type of the vehicle to be detected;
Step S514, obtaining a pre-stored topological distribution of vehicle components corresponding to the type of the vehicle to be detected;
Step S516, determining the initial position of a part to be detected of the vehicle to be detected according to the topological distribution of the vehicle parts;
The above steps S502 to S516 have been described in the foregoing embodiments, and the steps of determining the travel path of the movable apparatus and the travel path of the robot arm are described next.
Step S518, determining the distance between the walking path and the vehicle to be detected according to the initial position of the part to be detected and the arm length of the mechanical arm;
step S520, determining a walking path of the movable apparatus based on the distance, the direction of the preset walking path, and the position information of the vehicle to be detected.
Steps S518 and S520 are used to determine the walking path of the movable device. It can be understood that, in order to collect the relevant detection parameters of the component to be detected on the vehicle, the mechanical arm should move the detection equipment to a position close enough to the component; since the arm length of the mechanical arm usually has a maximum value, the walking path of the detection robot should be suitably close to the vehicle to be detected, while at the same time the safety of both the detection robot and the vehicle must be ensured to avoid collision. Accordingly, a measurement experiment may be performed in advance to determine the correspondence among the initial position of the component to be detected, the arm length of the mechanical arm, and the distance between the walking path and the vehicle to be detected; after the initial position of the component to be detected on the current vehicle and the arm length of the mechanical arm are obtained, the distance between the walking path and the vehicle can be found from this correspondence. For example, if the arm length of the unfolded mechanical arm is 1.2 meters, the distance may be set to 0.8 meters.
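The pre-measured correspondence between arm length and path-to-vehicle distance can be sketched as a lookup table; the entries below (including the 1.2 m to 0.8 m pair from the example) are illustrative, not measured values:

```python
# Illustrative correspondence table from a prior measurement experiment:
# (arm length in metres) -> standoff distance between walking path and vehicle.
STANDOFF_BY_ARM_LENGTH = [
    (1.0, 0.6),
    (1.2, 0.8),
    (1.5, 1.0),
]

def standoff_distance(arm_length):
    """Look up the standoff for the largest tabulated arm length that the
    current arm reaches; None if the arm is shorter than every entry."""
    best = None
    for length, distance in STANDOFF_BY_ARM_LENGTH:
        if length <= arm_length:
            best = distance
    return best

print(standoff_distance(1.2))  # 0.8
```

In practice the table would also be keyed by the initial position of the component to be detected, as the description states; a single-key table keeps the sketch minimal.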
The direction of the preset travel route may be generally determined according to the side shooting point of the parking space, for example, if the side shooting point is located on the left side of the parking space, the direction of the preset travel route may be counterclockwise, and if the side shooting point is located on the right side of the parking space, the direction of the preset travel route may be clockwise, so as to minimize the travel distance of the detection robot.
As an example, fig. 7 shows a schematic view of a walking path. In this example, only the front face image is shot at the position reference point of parking space n; after the detection robot moves to the side shooting point along path 1 and shoots the side image, it reaches the upper-left corner of the parking space along path 2, which may serve as the initial position of the walking path, and then walks around the vehicle to be detected in the counterclockwise direction at the determined distance until it reaches the upper-left corner again, as shown by path 3, path 4, path 5 and path 6 in fig. 7. That is, the walking path of the movable device of the detection robot can be understood as: a path that starts from the initial position of the walking path and surrounds the vehicle to be detected at the determined distance in the counterclockwise or clockwise direction. A walking path determined in this manner keeps a moderate distance from the vehicle, which facilitates the movement of the mechanical arm and the collection of detection data.
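The surrounding walking path of fig. 7 can be sketched as the corner waypoints of a rectangular loop offset from the parking space by the determined distance; the geometry helper and coordinates below are illustrative assumptions:

```python
def perimeter_waypoints(space_min, space_max, standoff, counterclockwise=True):
    """Corner waypoints of a loop around a rectangular parking space.

    space_min / space_max: (x, y) of the space's lower-left / upper-right
    corners; standoff: the determined distance kept from the vehicle
    (paths 3-6 in fig. 7). The loop starts and ends at the upper-left
    corner, matching the initial position described above.
    """
    x0, y0 = space_min[0] - standoff, space_min[1] - standoff
    x1, y1 = space_max[0] + standoff, space_max[1] + standoff
    # Upper-left -> lower-left -> lower-right -> upper-right -> upper-left
    # is a counterclockwise traversal (y axis pointing up).
    loop = [(x0, y1), (x0, y0), (x1, y0), (x1, y1), (x0, y1)]
    return loop if counterclockwise else loop[::-1]

wps = perimeter_waypoints((0, 0), (2.5, 5.5), 0.8)
print(wps[0] == wps[-1])  # True: the loop closes at the upper-left corner
```

The `counterclockwise` flag mirrors the direction choice driven by the side shooting point's location relative to the parking space.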
In addition, to further improve the accuracy and clarity of the image data, a local feature marker, such as the "x" mark in fig. 7, may be provided on the vehicle to be detected. After the detection robot reaches the position reference point, its position is finely adjusted according to the local feature marker, so that the position, size and the like of the vehicle in the acquired image data are more standard, which improves the accuracy of the subsequently determined walking path of the movable device and walking path of the mechanical arm.
Step S522, acquiring a 3D scanning image of the vehicle to be detected through the 3D camera device;
Step S524, inputting the 3D scanning image and the initial position of the part to be detected into a pre-trained instance segmentation model, and outputting a 3D mesh model of the part to be detected;
Step S526, collecting detection points from the 3D mesh model of the part to be detected according to a preset sampling rule, and calculating the normal vectors of the detection points;
Step S528, determining the walking path of the mechanical arm according to the detection points, the normal vectors of the detection points, and a preset mechanical arm control algorithm;
Step S530, controlling the movable device to move along the walking path, and controlling the mechanical arm to move according to the walking path, so as to collect the vehicle condition detection data of the vehicle to be detected through the detection equipment.
Steps S522 to S528 are used to determine the walking path of the mechanical arm. Since the height of each component to be detected on the vehicle and the curvature of the component surface affect the position of the detection equipment, the detection points need to be further determined from the components to be detected through the 3D scanning image of the vehicle, so as to determine the walking path of the mechanical arm.
Specifically, the instance segmentation model can be obtained by training a neural network on a large number of sample images. When acquiring images for instance segmentation, the field of view of the 3D camera is generally required to be larger than the largest component to be detected, and the sample images used for training the instance segmentation model may be 3D scanning images or mixed data of 2D images and 3D scanning images.
The initial position of the component to be detected can be marked on the image data of the vehicle to be detected; the image data and the 3D scanning image of the vehicle are then input into the instance segmentation model, which outputs the 3D mesh model of each component to be detected marked on the 3D scanning image. The sampling rule may include the positions of the detection points corresponding to each component to be detected, which in turn correspond to the type of the vehicle; after the type of the vehicle to be detected and the 3D mesh model of each component to be detected are determined, the positions of the corresponding detection points can be found from the sampling rule. In addition, the 3D mesh model contains the position information of each point on the surface of the component to be detected, and the normal vector at a detection point can be obtained by taking partial derivatives at that point. The normal vector may be used to determine the pose of the detection equipment; for example, for a particular detection item, the detection sensing means on the detection equipment must lie along the normal vector direction of the detection point.
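The normal-vector computation at a detection point can be illustrated on a single mesh triangle, where the cross product of two edge vectors plays the role of the surface partial derivatives mentioned above. This is a minimal sketch, not the disclosed algorithm:

```python
def triangle_normal(p0, p1, p2):
    """Unit normal of a mesh triangle from the 3D mesh model of a part:
    the direction along which the detection probe must be posed at a
    detection point sampled on that triangle."""
    # Edge vectors spanning the triangle's tangent plane.
    ux, uy, uz = (p1[i] - p0[i] for i in range(3))
    vx, vy, vz = (p2[i] - p0[i] for i in range(3))
    # Cross product of the edges is perpendicular to the surface patch.
    nx, ny, nz = uy * vz - uz * vy, uz * vx - ux * vz, ux * vy - uy * vx
    norm = (nx * nx + ny * ny + nz * nz) ** 0.5
    return (nx / norm, ny / norm, nz / norm)

# A flat patch in the xy-plane has its normal along +z.
print(triangle_normal((0, 0, 0), (1, 0, 0), (0, 1, 0)))  # (0.0, 0.0, 1.0)
```

On a curved cover part, averaging the normals of the triangles adjacent to a detection point gives a smoother probe orientation; the per-triangle computation above is the building block.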
Taking paint film detection as an example, the detection equipment arranged on the sixth axis of the mechanical arm is a paint film gauge; the probe of the gauge collects data at the detection points, and the collected data can be transmitted back in real time to the control platform in communication connection with the detection robot.
In the above manner, the detection points are used to determine the positions to be reached by the detection equipment, and the normal vectors of the detection points are used to determine the poses of the detection equipment; the walking path of the mechanical arm is then determined through the mechanical arm control algorithm. The walking path generally needs to cover the positions of all the detection points on the vehicle to be detected, and at each detection point the pose of the detection equipment must satisfy the normal vector of that point. In this manner, accurate positioning of the components to be detected is achieved through the 3D scanning image, which improves the accuracy of the vehicle condition detection.
The present embodiment also provides another vehicle condition detection method, which focuses on the case in which the position information covers a plurality of vehicles to be detected. If there are a plurality of vehicles to be detected, the position reference point of the current detected vehicle is determined according to the position information of the vehicles to be detected and the map data of the current detection environment; the movable device is controlled to move to the position reference point of the current detected vehicle; and, for the current detected vehicle, the step of acquiring the image data of the vehicle is executed until the collection of its vehicle condition detection data is completed. The next current detected vehicle is then determined from the vehicles to be detected according to the position information, and the step of determining the position reference point of the current detected vehicle continues to be executed until the detection of all the vehicles to be detected is completed.
Fig. 8 shows an example with three vehicles to be detected, parked on parking spaces 1, 2 and 3, respectively. The detection robot first reaches the position reference point of parking space 1, starts to acquire the image data, the 3D scanning image and the like of the vehicle to be detected, and further determines the walking path of the movable device and the walking path of the mechanical arm; it then starts to execute both walking paths from the upper-left corner of parking space 1, and after reaching the upper-left corner again, the detection of the current vehicle is finished. The robot then moves to the position reference point of parking space 2 and completes the detection of the vehicle on parking space 2 according to the same steps, and the detection of the vehicle on parking space 3 is completed in the same way. When all the vehicles have been detected, the detection robot can return to the starting point for charging.
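The multi-vehicle loop of fig. 8 can be sketched as below; the per-vehicle routine is stubbed out and all names and coordinates are illustrative:

```python
def detect_all(start, spaces, detect_one):
    """Visit each occupied parking space in turn and run the per-vehicle
    detection routine, then return to the start point for charging.

    spaces: ordered list of (space_id, reference_point) pairs.
    detect_one: the single-vehicle pipeline (image capture, path planning,
    mechanical-arm walk) supplied as a callable; stubbed here.
    Returns an action log for inspection.
    """
    log = []
    for space_id, ref_point in spaces:
        log.append(("goto", ref_point))          # move to the reference point
        log.append(("detect", space_id, detect_one(space_id)))
    log.append(("goto", start))                  # back to the charging point
    return log

log = detect_all((0, 0), [(1, (2, 5)), (2, (4, 5)), (3, (6, 5))],
                 detect_one=lambda s: f"vehicle-{s} done")
print(log[-1])  # ('goto', (0, 0))
```

The loop structure is the whole point: the single-vehicle pipeline from the earlier embodiments is reused unchanged for each space, and only the outer iteration and the final return to the charging point are new.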
In addition, to further improve safety, the inspection robot is generally required to provide an obstacle avoidance and collision protection function, a function of issuing a safety prompt and entering a safe state when a person enters the detection area, a function of restricting the operating state of the inspection robot in a human-machine coexistence area (such as setting an upper limit on the moving speed), a function of human-machine interaction between the inspection robot and the mobile terminal of a worker, and the like.
Corresponding to the embodiment of the above-described vehicle condition detection method, reference is made to a schematic structural diagram of a vehicle condition detection device shown in fig. 9; the device is arranged on a controller of the detection robot; the detection robot further comprises a movable device and a mechanical arm; the mechanical arm is carried on the movable device; the mechanical arm is provided with detection equipment; the device comprises:
an instruction receiving module 90, configured to receive a vehicle condition detection instruction; the vehicle condition detection instruction comprises position information of a vehicle to be detected;
a data acquisition module 91 for acquiring image data of a vehicle to be detected;
A path determining module 92, configured to determine a walking path of the movable device and a walking path of the mechanical arm according to the position information and the image data;
The control module 93 is used for controlling the movable device to move along the walking path and controlling the mechanical arm to act according to the walking path so as to acquire vehicle condition detection data of the vehicle to be detected through the detection equipment.
In the vehicle condition detection device, after the detection robot receives the vehicle condition detection instruction, the walking path of the movable device and the walking path of the mechanical arm can be determined according to the position information of the vehicle to be detected in the instruction and the acquired image data; and further controlling the movable device of the robot to move along the walking path, and controlling the mechanical arm to act according to the walking path so as to acquire vehicle condition detection data of the vehicle to be detected through the detection equipment. In the mode, the detection robot is used for detecting the vehicle condition, and the detection robot can plan the walking path and the walking position of the mechanical arm based on the position and the image data of the vehicle to be detected without setting a professional detection working area, so that the detection robot is suitable for detecting the vehicle condition in a store with complex and changeable environment and various vehicle types.
Further, the apparatus further includes: the first reference point determining module is used for determining a position reference point of the vehicle to be detected according to the position information; the first movement control module is used for controlling the movable device to move from a preset starting point to a position reference point.
Further, map data of the current detection environment is stored in the controller in advance; the position information comprises a parking space number or a detection area; the above reference point determining module is further configured to: determining a position reference point of a parking space corresponding to the position information from the map data; or determining the area range of the parking space corresponding to the position information from the map data; and identifying the position of the reserved reference object from the area range through the image pickup device, and determining a position reference point according to the position of the reserved reference object.
Further, the data acquisition module 91 is further configured to: collecting a front face image of a vehicle to be detected on a position reference point of the vehicle to be detected; controlling the movable device to move from the position reference point to a side shooting point of the parking space corresponding to the position reference point; and acquiring a side image of the vehicle to be detected at the side shooting point.
Further, the path determining module 92 is further configured to: inputting the image data into a pre-trained type recognition model, and outputting the type of the vehicle to be detected; acquiring a pre-stored topological distribution of vehicle components corresponding to the type of the vehicle to be detected; determining an initial position of a part to be detected of the vehicle to be detected according to the topological distribution of the vehicle part; and determining the walking path of the movable device and the walking path of the mechanical arm according to the position information and the initial position of the part to be detected.
Further, the path determining module 92 is further configured to: determining the distance between the walking path and the vehicle to be detected according to the initial position of the part to be detected and the arm length of the mechanical arm; the travel path of the movable apparatus is determined based on the distance, the direction of the preset travel route, and the position information of the vehicle to be detected.
Further, the path determining module 92 is further configured to: acquiring a 3D scanning image of the vehicle to be detected through the 3D camera device; inputting the 3D scanning image and the initial position of the part to be detected into a pre-trained instance segmentation model, and outputting a 3D mesh model of the part to be detected; collecting detection points from the 3D mesh model of the part to be detected according to a preset sampling rule, and calculating the normal vectors of the detection points; and determining the walking path of the mechanical arm according to the detection points, the normal vectors of the detection points and a preset mechanical arm control algorithm.
Further, the apparatus further includes: a second reference point determining module, configured to determine, if there are a plurality of vehicles to be detected, the position reference point of the current detected vehicle according to the position information of the vehicles to be detected and the map data of the current detection environment; a second movement control module, configured to control the movable device to move to the position reference point of the current detected vehicle; and an execution module, configured to execute, for the current detected vehicle, the step of collecting the image data of the vehicle to be detected. After the collection of the vehicle condition detection data of the current detected vehicle is completed, the next current detected vehicle is determined from the vehicles to be detected according to the position information, and the step of determining the position reference point of the current detected vehicle continues to be executed until the detection of all the vehicles to be detected is completed.
The device provided in this embodiment has the same implementation principle and technical effects as those of the foregoing embodiment, and for brevity, reference may be made to the corresponding content in the foregoing method embodiment for a part of the description of the device embodiment that is not mentioned.
The embodiment also provides a detection robot, which comprises a controller, a movable device, a mechanical arm and detection equipment; the vehicle condition detection device is arranged in the controller; the mechanical arm is carried on the movable device; the mechanical arm is provided with detection equipment.
The present embodiment also provides an electronic device, referring to a schematic structural diagram of an electronic device shown in fig. 10, where the electronic device includes a memory 1000 and a processor 1001; the memory 1000 is a machine-readable storage medium, and is used to store one or more computer instructions, where the one or more computer instructions are executed by the processor to implement the steps of the vehicle condition detection method. The inspection robot described above may be implemented with reference to the electronic device shown in fig. 10, or may have more or fewer components than the electronic device shown in fig. 10, and is not limited herein.
Further, the electronic device shown in fig. 10 further includes a bus 1002 and a communication interface 1003, and the processor 1001, the communication interface 1003, and the memory 1000 are connected by the bus 1002.
The memory 1000 may include a high-speed random access memory (RAM), and may further include a non-volatile memory, such as at least one disk memory. The communication connection between the system network element and at least one other network element is implemented via at least one communication interface 1003 (which may be wired or wireless), and may use the Internet, a wide area network, a local area network, a metropolitan area network, etc. The bus 1002 may be an ISA bus, a PCI bus, an EISA bus, or the like, and may be classified into an address bus, a data bus, a control bus, etc. For ease of illustration, only one bi-directional arrow is shown in fig. 10, but this does not mean that there is only one bus or one type of bus.
The processor 1001 may be an integrated circuit chip having signal processing capability. In implementation, the steps of the above method may be performed by integrated logic circuits of hardware in the processor 1001 or by instructions in the form of software. The processor 1001 may be a general-purpose processor, including a central processing unit (CPU), a network processor (NP), and the like; it may also be a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components. The methods, steps and logic blocks disclosed in the embodiments of the present invention may be implemented or performed by such a processor. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like. The steps of the method disclosed in the embodiments of the present invention may be directly embodied as being performed by a hardware decoding processor, or performed by a combination of hardware and software modules in a decoding processor. The software modules may be located in a random access memory, a flash memory, a read-only memory, a programmable read-only memory, an electrically erasable programmable memory, a register, or another storage medium well known in the art. The storage medium is located in the memory 1000, and the processor 1001 reads the information in the memory 1000 and, in combination with its hardware, performs the steps of the method of the foregoing embodiments.
It will be clear to those skilled in the art that, for convenience and brevity of description, the specific working process of the system described above may refer to the corresponding process in the foregoing embodiment, which is not described in detail herein.
The computer program product of the vehicle condition detection method, device and system provided by the embodiments of the present invention includes a computer readable storage medium storing program codes, where the instructions included in the program codes may be used to execute the method described in the foregoing method embodiment, and specific implementation may refer to the method embodiment and will not be repeated herein.
In addition, in the description of embodiments of the present invention, unless explicitly stated and limited otherwise, the terms "mounted," "connected," and "connected" are to be construed broadly, and may be, for example, fixedly connected, detachably connected, or integrally connected; can be mechanically or electrically connected; can be directly connected or indirectly connected through an intermediate medium, and can be communication between two elements. The specific meaning of the above terms in the present invention will be understood in specific cases by those of ordinary skill in the art.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention may be embodied essentially or in a part contributing to the prior art or in a part of the technical solution, in the form of a software product stored in a storage medium, comprising several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a usb disk, a removable hard disk, a Read-Only Memory (ROM), a random access Memory (RAM, random Access Memory), a magnetic disk, or an optical disk, or other various media capable of storing program codes.
In the description of the present invention, it should be noted that the directions or positional relationships indicated by the terms "center", "upper", "lower", "left", "right", "vertical", "horizontal", "inner", "outer", etc. are based on the directions or positional relationships shown in the drawings, are merely for convenience of describing the present invention and simplifying the description, and do not indicate or imply that the devices or elements referred to must have a specific orientation, be configured and operated in a specific orientation, and thus should not be construed as limiting the present invention. Furthermore, the terms "first," "second," and "third" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance.
Finally, it should be noted that the above examples are merely specific embodiments of the present invention, intended to illustrate rather than limit its technical solutions, and the scope of protection is not limited thereto. Although the present invention has been described in detail with reference to the foregoing embodiments, those skilled in the art will understand that anyone familiar with the art may still modify the technical solutions described in the foregoing embodiments, or substitute equivalents for some of their technical features, within the technical scope of the present disclosure; such modifications, changes, or substitutions do not cause the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present invention, and shall all be covered by the scope of protection of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.
Claims (6)
1. A vehicle condition detection method, characterized in that the method is applied to a controller of a detection robot; the detection robot further comprises a movable device and a mechanical arm; the mechanical arm is carried on the movable device; the mechanical arm is provided with detection equipment; the method comprises the following steps:
receiving a vehicle condition detection instruction; the vehicle condition detection instruction comprises position information of a vehicle to be detected; wherein the vehicle to be detected is a vehicle parked in a store;
collecting image data of the vehicle to be detected;
determining a walking path of the movable device and a walking path of the mechanical arm according to the position information and the image data;
controlling the movable device to move along the walking path, and controlling the mechanical arm to act according to the walking path, so as to acquire vehicle condition detection data of the vehicle to be detected through the detection equipment;
before the step of acquiring the image data of the vehicle to be detected, the method further includes:
determining a position reference point of the vehicle to be detected according to the position information;
controlling the movable device to move from a preset starting point to the position reference point;
the step of collecting the image data of the vehicle to be detected comprises the following steps:
acquiring a front face image of the vehicle to be detected at the position reference point of the vehicle to be detected;
controlling the movable device to move from the position reference point to a side shooting point of a parking space corresponding to the position reference point;
acquiring a side image of the vehicle to be detected at the side shooting point;
the step of determining a walking path of the movable device and a walking path of the mechanical arm according to the position information and the image data comprises the following steps:
inputting the image data into a pre-trained type recognition model, and outputting the type of the vehicle to be detected;
acquiring a pre-stored vehicle part topological distribution corresponding to the type of the vehicle to be detected;
determining an initial position of a part to be detected of the vehicle to be detected according to the vehicle part topological distribution;
determining the walking path of the movable device and the walking path of the mechanical arm according to the position information and the initial position of the part to be detected;
the step of determining the walking path of the movable device comprises:
determining the distance between the walking path and the vehicle to be detected according to the initial position of the part to be detected and the arm length of the mechanical arm;
determining the walking path of the movable device based on the distance, the direction of a preset walking route and the position information of the vehicle to be detected; the direction of the preset walking route is determined according to the side shooting point of the parking space; if the side shooting point is located on the left side of the parking space, the direction of the preset walking route is counterclockwise, and if the side shooting point is located on the right side of the parking space, the direction of the preset walking route is clockwise.
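The path-planning logic recited in claim 1 can be illustrated with a small sketch. This is purely illustrative and not part of the claimed implementation: the function names, the 0.3 m clearance margin, and the representation of part positions as inward offsets (in metres) from the vehicle body side are all assumptions.

```python
def walk_direction(side_shooting_point: str) -> str:
    """Choose the loop direction around the parking space from the
    side shooting point, as recited in claim 1."""
    if side_shooting_point == "left":
        return "counterclockwise"
    if side_shooting_point == "right":
        return "clockwise"
    raise ValueError(f"unknown side shooting point: {side_shooting_point}")


def standoff_distance(part_depths, arm_length, clearance=0.3):
    """Distance between the walking path and the vehicle side such that the
    deepest part to be detected remains within arm reach.

    `part_depths` are hypothetical inward offsets of each part to be
    detected, measured from the vehicle body side (an assumed encoding of
    the claim's 'initial position of the part to be detected')."""
    deepest = max(part_depths)
    if deepest + clearance > arm_length:
        raise ValueError("part out of reach for this arm length")
    return arm_length - deepest
```

For example, with a 1.0 m arm and parts at most 0.4 m inside the body line, the platform would keep a 0.6 m standoff from the vehicle.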
2. The method of claim 1, wherein map data of a current detection environment is stored in advance in the controller; the position information comprises a parking space number or a detection area;
The step of determining the position reference point of the vehicle to be detected according to the position information comprises the following steps:
determining a position reference point of a parking space corresponding to the position information from the map data;
or determining the area range of the parking space corresponding to the position information from the map data, identifying the position of a reserved reference object within the area range through a camera device, and determining the position reference point according to the position of the reserved reference object.
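The two alternatives of claim 2 can be sketched as a single lookup routine. The `map_data` layout and the `locate_marker` camera callback are assumptions made for illustration; the claim does not specify a data format.

```python
def resolve_reference_point(map_data, position_info, locate_marker=None):
    """Resolve a parking-space number to its stored reference point, or,
    for a detection area, derive the point from a reserved reference
    object found by a camera (claim 2)."""
    spaces = map_data.get("spaces", {})
    if position_info in spaces:
        # First alternative: the reference point is stored in the map data.
        return spaces[position_info]["reference_point"]
    areas = map_data.get("areas", {})
    if position_info in areas and locate_marker is not None:
        # Second alternative: find the reserved reference object inside
        # the area range via the camera device (stubbed as a callback).
        return locate_marker(areas[position_info])
    raise KeyError(f"unknown position information: {position_info}")
```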
3. The method of claim 1, wherein the step of determining the travel path of the robotic arm comprises:
acquiring a 3D scan image of the vehicle to be detected through a 3D camera device;
inputting the 3D scan image and the initial position of the part to be detected into a pre-trained instance segmentation model, and outputting a 3D mesh model of the part to be detected;
sampling detection points from the 3D mesh model of the part to be detected according to a preset sampling rule, and calculating normal vectors of the detection points;
and determining the walking path of the mechanical arm according to the detection points, the normal vectors of the detection points and a preset mechanical arm control algorithm.
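The detection-point sampling and normal-vector computation of claim 3 can be sketched as follows. The triangle-mesh representation and the "every step-th triangle" rule are assumptions; the claim only requires a preset sampling rule over the part's 3D mesh model.

```python
def sample_detection_points(vertices, faces, step=1):
    """Sample every `step`-th triangle of the part's 3D mesh and return
    (centroid, unit normal) pairs to serve as way-points for the arm.

    `vertices` is a list of (x, y, z) tuples; `faces` is a list of
    vertex-index triples."""
    def sub(a, b):
        return (a[0] - b[0], a[1] - b[1], a[2] - b[2])

    def cross(a, b):
        return (a[1] * b[2] - a[2] * b[1],
                a[2] * b[0] - a[0] * b[2],
                a[0] * b[1] - a[1] * b[0])

    samples = []
    for tri in faces[::step]:
        a, b, c = (vertices[i] for i in tri)
        n = cross(sub(b, a), sub(c, a))           # face normal from two edges
        length = (n[0] ** 2 + n[1] ** 2 + n[2] ** 2) ** 0.5
        if length < 1e-12:
            continue                               # skip degenerate triangles
        centroid = tuple((a[k] + b[k] + c[k]) / 3.0 for k in range(3))
        samples.append((centroid, (n[0] / length, n[1] / length, n[2] / length)))
    return samples
```

The unit normal at each detection point would let the arm controller orient the probe perpendicular to the surface, which is presumably why the claim pairs each point with its normal vector.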
4. The method according to claim 1, wherein the method further comprises:
if there are a plurality of vehicles to be detected, determining a position reference point of the current detection vehicle according to the position information of the vehicles to be detected and the map data of the current detection environment;
controlling the movable device to move to the position reference point of the current detection vehicle;
executing the step of collecting the image data of the vehicle to be detected for the current detection vehicle;
and after the vehicle condition detection data of the current detection vehicle are acquired, determining the next current detection vehicle from the vehicles to be detected according to the position information of the vehicles to be detected, and continuing to execute the step of determining the position reference point of the current detection vehicle until detection of all the vehicles to be detected is completed.
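The multi-vehicle loop of claim 4 can be sketched as below. Nearest-first ordering is an assumed policy; the claim only requires the next current detection vehicle to be chosen from the position information of the remaining vehicles.

```python
import math


def detect_all_vehicles(vehicles, detect_one, start=(0.0, 0.0)):
    """Detect every vehicle in turn, choosing the nearest remaining one as
    the next 'current detection vehicle' (claim 4).

    `vehicles` is a list of dicts with a 'reference_point' (x, y) entry;
    `detect_one` stands in for the per-vehicle image capture and
    condition-data acquisition of claim 1."""
    remaining = list(vehicles)
    position, results = start, []
    while remaining:
        remaining.sort(key=lambda v: math.dist(position, v["reference_point"]))
        current = remaining.pop(0)                # next current detection vehicle
        results.append(detect_one(current))       # acquire vehicle condition data
        position = current["reference_point"]     # continue from this vehicle
    return results
```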
5. A vehicle condition detecting apparatus employing the vehicle condition detecting method according to claim 1, wherein the apparatus is provided to a controller of a detecting robot; the detection robot further comprises a movable device and a mechanical arm; the mechanical arm is carried on the movable device; the mechanical arm is provided with detection equipment; the device comprises:
the instruction receiving module is used for receiving a vehicle condition detection instruction; the vehicle condition detection instruction comprises position information of a vehicle to be detected; wherein the vehicle to be detected is a vehicle parked in a store;
the data acquisition module is used for collecting the image data of the vehicle to be detected;
the path determining module is used for determining a walking path of the movable device and a walking path of the mechanical arm according to the position information and the image data;
the control module is used for controlling the movable device to move along the walking path and controlling the mechanical arm to act according to the walking path, so as to acquire vehicle condition detection data of the vehicle to be detected through the detection equipment;
the apparatus further comprises a position reference point determining module; the position reference point determining module is used for:
determining a position reference point of the vehicle to be detected according to the position information;
and controlling the movable device to move from a preset starting point to the position reference point.
6. A detection robot, characterized in that the detection robot comprises a controller, a movable device, a mechanical arm and detection equipment; the apparatus of claim 5 is disposed in the controller;
the mechanical arm is carried on the movable device; and the mechanical arm is provided with detection equipment.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910417671.0A CN110091342B (en) | 2019-05-20 | 2019-05-20 | Vehicle condition detection method and device and detection robot |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110091342A CN110091342A (en) | 2019-08-06 |
CN110091342B true CN110091342B (en) | 2024-04-26 |
Family
ID=67448585
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910417671.0A Active CN110091342B (en) | 2019-05-20 | 2019-05-20 | Vehicle condition detection method and device and detection robot |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110091342B (en) |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110780308B (en) * | 2019-10-28 | 2020-11-24 | 西南科技大学 | Three-dimensional point cloud data acquisition system and method under turbid water environment |
CN111192189A (en) * | 2019-12-27 | 2020-05-22 | 中铭谷智能机器人(广东)有限公司 | Three-dimensional automatic detection method and system for automobile appearance |
CN112858290A (en) * | 2021-01-08 | 2021-05-28 | 北京中车重工机械有限公司 | Detection system based on digital image processing and detection method and device thereof |
CN114216496A (en) * | 2021-11-03 | 2022-03-22 | 中科合肥技术创新工程院 | Intelligent detection method for functional indexes of intelligent toilet |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2016095490A1 (en) * | 2014-12-17 | 2016-06-23 | 苏州华兴致远电子科技有限公司 | Vehicle operation fault detection system and method |
CN106468914A (en) * | 2015-08-21 | 2017-03-01 | 苏州华兴致远电子科技有限公司 | Train overhaul method and system |
CN206029901U (en) * | 2015-08-21 | 2017-03-22 | 苏州华兴致远电子科技有限公司 | Train overhaul machine people |
WO2018036277A1 (en) * | 2016-08-22 | 2018-03-01 | 平安科技(深圳)有限公司 | Method, device, server, and storage medium for vehicle detection |
CN208675408U (en) * | 2018-08-30 | 2019-03-29 | 北京酷车易美网络科技有限公司 | Vehicle vehicle condition intelligent detection device and system |
CN109730590A (en) * | 2019-01-30 | 2019-05-10 | 深圳飞科机器人有限公司 | Clean robot and the method for clean robot auto-returned charging |
Also Published As
Publication number | Publication date |
---|---|
CN110091342A (en) | 2019-08-06 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110091342B (en) | Vehicle condition detection method and device and detection robot | |
CN110775052B (en) | Automatic parking method based on fusion of vision and ultrasonic perception | |
EP3413289B1 (en) | Automatic driving control device, vehicle, and automatic driving control method | |
CN110530372B (en) | Positioning method, path determining device, robot and storage medium | |
KR101881415B1 (en) | Apparatus and method for location of moving objects | |
WO2018181974A1 (en) | Determination device, determination method, and program | |
EP3939863A1 (en) | Overhead-view image generation device, overhead-view image generation system, and automatic parking device | |
Kim et al. | Sensor fusion algorithm design in detecting vehicles using laser scanner and stereo vision | |
CN107836017B (en) | Semaphore identification device and semaphore recognition methods | |
CN112132896B (en) | Method and system for detecting states of trackside equipment | |
CN110293965B (en) | Parking method and control device, vehicle-mounted device and computer readable medium | |
CN112740274A (en) | System and method for VSLAM scale estimation on robotic devices using optical flow sensors | |
JP5775965B2 (en) | Stereo camera system and moving body | |
CN109857112A (en) | Obstacle Avoidance and device | |
CN111947644B (en) | Outdoor mobile robot positioning method and system and electronic equipment thereof | |
JP2016080460A (en) | Moving body | |
CN112771591B (en) | Method for evaluating the influence of an object in the environment of a vehicle on the driving maneuver of the vehicle | |
JP2018063476A (en) | Apparatus, method and computer program for driving support | |
JP7233386B2 (en) | Map update device, map update system, and map update method | |
CN112447058B (en) | Parking method, parking device, computer equipment and storage medium | |
KR20210058640A (en) | Vehicle navigaton switching device for golf course self-driving cars | |
JP6834401B2 (en) | Self-position estimation method and self-position estimation device | |
EP3795952A1 (en) | Estimation device, estimation method, and computer program product | |
CN113158779A (en) | Walking method and device and computer storage medium | |
KR20160125803A (en) | Apparatus for defining an area in interest, apparatus for detecting object in an area in interest and method for defining an area in interest |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||