CN116321053A - Loader operation guiding system for non-line-of-sight remote control driving


Info

Publication number
CN116321053A
Authority
CN
China
Prior art keywords
loader
data
vehicle
module
guide
Prior art date
Legal status
Pending
Application number
CN202211609915.3A
Other languages
Chinese (zh)
Inventor
Liu Wei (刘伟)
Tang Lei (唐蕾)
Zhu Yayun (祝亚运)
Current Assignee
Guangdong Mangma Zhixing Technology Co., Ltd.
Original Assignee
Guangdong Mangma Zhixing Technology Co., Ltd.
Priority date
Filing date
Publication date
Application filed by Guangdong Mangma Zhixing Technology Co., Ltd.
Priority to CN202211609915.3A
Publication of CN116321053A


Classifications

    • G01C21/165 Navigation; dead reckoning by integrating acceleration or speed (inertial navigation) combined with non-inertial navigation instruments
    • G01C21/1652 Inertial navigation combined with non-inertial navigation instruments, with ranging devices, e.g. LIDAR or RADAR
    • G01C21/1656 Inertial navigation combined with non-inertial navigation instruments, with passive imaging devices, e.g. cameras
    • G01C9/00 Measuring inclination, e.g. by clinometers, by levels
    • G01D21/02 Measuring two or more variables by means not covered by a single other subclass
    • G01S17/93 Lidar systems specially adapted for anti-collision purposes
    • H04W4/38 Services specially adapted for collecting sensor information
    • H04W4/40 Services specially adapted for vehicles, e.g. vehicle-to-pedestrians [V2P]
    • Y02P90/02 Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Electromagnetism (AREA)
  • Operation Control Of Excavators (AREA)

Abstract

The application relates to a loader operation guidance system for non-line-of-sight remote control driving, belonging to the technical field of loader operation guidance, and comprising a vehicle-mounted terminal, a vehicle-mounted environment sensing module, a wireless communication module, a man-machine interaction module and a guide control module. In this scheme, vehicle data and vehicle-mounted sensing data are collected in real time by the vehicle-mounted terminal and the vehicle-mounted environment sensing module; the wireless communication module uploads the data to the man-machine interaction module, which constructs a guide model, sets the operation task and operation target of the loader, and issues them to the loader's guide control module to guide the loader operation, thereby reducing the driver's workload. The man-machine interaction module also collects the loader's guided operation data and performs big data analysis on it, constructs a digital model from the guided operation data, and simulates the real loader operation scene in combination with the big data analysis results, reducing the influence of limited visibility on the driver, easing driving fatigue, and improving the driving experience.

Description

Loader operation guiding system for non-line-of-sight remote control driving
Technical Field
The application belongs to the technical field of loader operation guidance, and particularly relates to a loader operation guidance system for non-line-of-sight remote control driving.
Background
Metallurgy refers to mining, beneficiating and sintering metal ores, then smelting and processing them into metal materials. Owing to the nature of the industry, most work at the working face is carried out by manually driven special-purpose engineering vehicles performing high-risk tasks under special working conditions, such as slag removal under a furnace, coal-yard piling, coking asphalt treatment, and cargo-hold cleaning. The working environment suffers from heavy dust, heat radiation, toxic and harmful gases, strong acid and corrosion, a limited field of view, risk of personal injury, and other operational problems.
Existing schemes address unmanned operation at the working face by remotely controlling the work vehicle over a network. However, current remote-control driving systems require the driver to actively perform remote driving operations, which is labor-intensive; the driver is easily affected by the limited field of view during operation, fatigues quickly, and has a poor driving experience, which in turn lowers work efficiency.
Disclosure of Invention
Therefore, the present application provides a loader operation guidance system for non-line-of-sight remote control driving, which helps solve the problems of current remote-controlled loading operations: high operating intensity, low work efficiency, susceptibility to limited field of view, driver fatigue, and poor driving experience.
In order to achieve the above purpose, the present application adopts the following technical scheme:
the application provides a loader work guidance system for non-line-of-sight remote control driving, comprising:
the vehicle-mounted terminal is used for collecting vehicle data of the loader;
the vehicle-mounted environment sensing module is used for collecting vehicle-mounted sensor data of the loader;
the wireless communication module is used for networked wireless communication with the man-machine interaction module of the cockpit; the wireless communication module sends the vehicle-mounted sensor data and the vehicle data to the man-machine interaction module and receives the operation task and operation target from the man-machine interaction module;
the man-machine interaction module is used for establishing a guide model based on the vehicle-mounted sensor data and vehicle data, setting the operation task and operation target, collecting the loader's guided operation data, performing big data analysis on the guided operation data, constructing a digital model from the guided operation data, simulating the real loader operation scene according to the big data analysis results, and evaluating the guided operation process against the digital model;
and the guide control module is used for receiving the operation task and operation target from the man-machine interaction module and guiding the loader operation accordingly.
Further, the man-machine interaction module specifically comprises a touch screen, a computing unit, a sensor calibration unit and a task interaction unit;
the touch screen is connected with the computing unit and is used for displaying the vehicle-mounted sensor data, the vehicle data and the loader's working scene, and for receiving operation instructions input by the driver;
the computing unit is connected with the touch screen, the sensor calibration unit and the task interaction unit respectively, and is used for establishing the guide model from the vehicle-mounted sensor data and vehicle data by compositing, superimposing and converting multiple data sources, and for defining the guide model functions;
the sensor calibration unit is used for pre-calibrating the vehicle-mounted environment sensing module; the pre-calibrated information comprises vehicle basic attribute information, attitude reference point information, obstacle classification information, and guide-line scale information;
the task interaction unit is used for pre-configuring the loader's operation task and operation target according to the guide model functions, including the task type, task angle range, task duration, task grade and evaluation rule;
the computing unit is also used for collecting and storing the loader's guided operation data, performing big data analysis on the guided operation data, constructing a digital model from the guided operation data, simulating the real loader operation scene according to the big data analysis results, and evaluating the guided operation process against the digital model.
Further, the vehicle-mounted environment sensing module comprises an inclinometer, an angle encoder, a laser radar, an inertial measurement unit, an AI camera, a weighing valve block and a positioning module;
two inclinometers are provided, mounted on the boom and the bucket of the loader respectively, for collecting boom and bucket attitude information;
the angle encoder is mounted at the articulation joint of the loader body, for collecting the relative attitude angle between the front and rear bodies;
three laser radars are provided, mounted on the left side, right side and rear of the loader cab respectively, for collecting laser point cloud data to the left, right and rear of the loader;
the inertial measurement unit is mounted at the headlight position on the loader's front body, for collecting the loader's vehicle attitude information;
the AI cameras are mounted on the left, front, right and rear sides above the loader cab respectively, for collecting 360-degree surround-view image data around the loader;
the weighing valve block is mounted in the hydraulic circuit of the loader implement, for measuring the weight of material loaded and unloaded by the implement;
the positioning module is mounted on top of the loader cab, for collecting the loader's positioning data in open scenes.
Further, the vehicle-mounted sensor data comprise the boom inclination angle, bucket inclination angle, articulation angle, laser point cloud data, inertial attitude data, surround-view image data, material weight, positioning data, and elevation and heading information; the vehicle data comprise engine speed, ignition status, turn-signal status, high/low-beam status, parking status, gear status, wiper status, horn status, cooling-fan status, and gearbox power status.
Further, the wireless communication module is mounted on top of the loader cab and establishes its wireless communication mechanism over the TCP/IP protocol; the wireless communication module is a broadband ad hoc network device or a 4G/5G wireless router.
Further, collecting the loader's guided operation data and performing big data analysis on it specifically comprises: during the loader's guided operation, the man-machine interaction module collects real-time vehicle-mounted sensor data through the vehicle-mounted environment sensing module, collects driver operation records through the touch screen, and establishes a time-series database to store the vehicle-mounted sensor data and the driver operation records;
a data mining algorithm is then applied to the vehicle-mounted sensor data and driver operation records stored in the time-series database to establish a correspondence sequence between the driver operation records and the loader's running state.
Further, constructing the digital model from the guided operation data specifically comprises:
constructing a 3D digital-twin visual digital model on a 3D map, using the boom inclination angle, bucket inclination angle, articulation angle, laser point cloud data, inertial attitude data, surround-view image data, material weight and positioning data from the vehicle-mounted sensor data, combined with a high-definition three-dimensional map of the loader working face for three-dimensional coordinate conversion.
Further, the guide model functions comprise one-key leveling, one-key lifting, stockpile identification, route planning, blind-zone detection, safety early warning, autonomous loading and unloading, and operation statistics.
By adopting the above technical scheme, the present application has at least the following beneficial effects:
The loader operation guidance system for non-line-of-sight remote control driving comprises a vehicle-mounted terminal for collecting the loader's vehicle data; a vehicle-mounted environment sensing module for collecting the loader's vehicle-mounted sensor data; a wireless communication module for networked wireless communication with the cockpit's man-machine interaction module, sending the vehicle-mounted sensor data and vehicle data to the man-machine interaction module and receiving the operation task and operation target from it; a man-machine interaction module for establishing a guide model based on the vehicle-mounted sensor data and vehicle data, setting the operation task and operation target, collecting the loader's guided operation data, performing big data analysis on it, constructing a digital model from it, simulating the real loader operation scene according to the big data analysis results, and evaluating the guided operation process against the digital model; and a guide control module for receiving the operation task and operation target from the man-machine interaction module and guiding the loader operation accordingly. Under this architecture, vehicle data and vehicle-mounted sensing data are collected in real time by the vehicle-mounted terminal and the vehicle-mounted environment sensing module, and the wireless communication module uploads them to the man-machine interaction module to construct the guide model; the driver sets the loader's operation task and operation target through the guide model and issues them to the loader's guide control module to control the guided operation, reducing the driver's workload. Meanwhile, the man-machine interaction module collects the loader's guided operation data in real time through the vehicle-mounted terminal and the environment sensing module, performs big data analysis on it, constructs a digital model from it, and simulates the real loader operation scene in combination with the big data analysis results, reducing the influence of limited visibility on the driver during guided operation, easing driving fatigue, and improving the driving experience.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings that are required in the embodiments or the description of the prior art will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a diagram illustrating the architecture of a loader operation guidance system for non-line-of-sight remote control driving, according to an exemplary embodiment;
FIG. 2 is a flowchart illustrating the workflow of a loader operation guidance system for non-line-of-sight remote control driving, according to an exemplary embodiment;
in fig. 1: the system comprises a 1-vehicle-mounted terminal, a 2-vehicle-mounted ring sensing module, a 3-wireless communication module, a 4-man-machine interaction module and a 5-guiding control module.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the present application clearer, the technical solutions of the present application are described in detail below. It will be apparent that the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art from the embodiments herein without inventive effort fall within the protection scope of the present application.
Referring to fig. 1, fig. 1 is a diagram illustrating the architecture of a loader operation guidance system for non-line-of-sight remote control driving according to an exemplary embodiment. As shown in fig. 1, the system comprises: a vehicle-mounted terminal 1, a vehicle-mounted environment sensing module 2, a wireless communication module 3, a man-machine interaction module 4 and a guide control module 5. Wherein:
the vehicle-mounted terminal 1 is used for collecting the loader's vehicle data; the vehicle-mounted environment sensing module 2 is used for collecting the loader's vehicle-mounted sensor data; the wireless communication module 3 is used for networked wireless communication with the cockpit's man-machine interaction module 4, sending the vehicle-mounted sensor data and vehicle data to the man-machine interaction module 4 and receiving the operation task and operation target from it; the man-machine interaction module 4 is used for establishing a guide model based on the vehicle-mounted sensor data and vehicle data, setting the operation task and operation target, collecting the loader's guided operation data, performing big data analysis on the guided operation data, constructing a digital model from it, simulating the real loader operation scene according to the big data analysis results, and evaluating the guided operation process against the digital model; the guide control module 5 is used for receiving the operation task and operation target from the man-machine interaction module 4 and guiding the loader operation accordingly.
Further, in one embodiment, the vehicle-mounted terminal 1 uses the front-end device of an existing vehicle monitoring and management system, which integrates positioning, communication, driving recorder, security alarm, wire-cut alarm, remote safety fuel cut-off, power-off safety protection and other functions. The guide control module 5 may implement guidance control of the loader using an existing PLC controller.
Further, in one embodiment, the vehicle-mounted environment sensing module 2 comprises an inclinometer, an angle encoder, a lidar, an inertial measurement unit (IMU), an AI camera, a weighing valve block and a positioning module (GPS/RTK). Wherein:
two inclinometers are provided, mounted on the boom and the bucket of the loader respectively, for collecting boom and bucket attitude information;
the angle encoder is mounted at the articulation joint of the loader body, for collecting the relative attitude angle between the front and rear bodies;
three lidars are provided, mounted on the left side, right side and rear of the loader cab respectively, for collecting laser point cloud data (a point cloud model) to the left, right and rear of the loader;
one inertial measurement unit is provided, mounted at the headlight position on the loader's front body, for collecting the loader's vehicle attitude information;
four AI cameras are provided, mounted on the left, front, right and rear sides above the loader cab respectively, for collecting 360-degree surround-view image data around the loader; pedestrian and object classification and recognition are realized through a deep-learning target detection algorithm, widening the driver's field of view and avoiding the influence of limited visibility on the driver's operation.
The weighing valve block is mounted in the hydraulic circuit of the loader implement, for measuring the weight of material loaded and unloaded by the implement;
the positioning module is mounted on top of the loader cab, for collecting the loader's positioning data in open scenes.
The detection algorithm used with the AI cameras can be implemented with an existing object recognition algorithm, such as YOLOv5 based on a deep learning framework: YOLOv5 uses the PyTorch framework, trains quickly, and offers good speed and accuracy when detecting pedestrians and vehicles. Alternatively, the Faster R-CNN, R-CNN or SSD algorithm can be used to realize the pedestrian and object classification and recognition function, which is not repeated here.
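As a concrete illustration, the snippet below is a minimal sketch of running such a detector on one surround-view frame with the public ultralytics/yolov5 hub model; the frame source, confidence threshold and class filter are illustrative assumptions, not the exact configuration of this scheme.
```python
# Minimal sketch: pedestrian/vehicle detection on one surround-view camera
# frame using the public ultralytics/yolov5 hub model. Threshold and class
# filter are assumptions.
import torch

model = torch.hub.load("ultralytics/yolov5", "yolov5s", pretrained=True)
model.conf = 0.4  # assumed confidence threshold for dusty scenes

def detect_people_and_vehicles(frame):
    """frame: HxWx3 RGB array from one AI camera."""
    results = model(frame)
    # results.pandas().xyxy[0]: xmin, ymin, xmax, ymax, confidence, class, name
    detections = results.pandas().xyxy[0]
    return detections[detections["name"].isin(["person", "car", "truck"])]
```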
Specifically, the vehicle-mounted sensor data collected by the vehicle-mounted environment sensing module 2 include the boom inclination angle, bucket inclination angle, articulation angle, laser point cloud data, inertial attitude data, surround-view image data, material weight, positioning data, and elevation and heading information.
Further, in one embodiment, the man-machine interaction module 4 is disposed in the remote-control cockpit and adopts an industrial GUI design to present the loader's data to the driver; its interactive information includes video display, vehicle attitude information, 3D digital visualization, vehicle status information, alarm information and material statistics. The man-machine interaction module 4 specifically comprises a touch screen, a computing unit, a sensor calibration unit and a task interaction unit. The touch screen is connected with the computing unit and is used for displaying the vehicle-mounted sensor data, the vehicle data and the loader's working scene, and for receiving operation instructions input by the driver;
the computing unit is connected with the touch screen, the sensor calibration unit and the task interaction unit respectively, and is used for establishing the guide model from the vehicle-mounted sensor data and vehicle data by compositing, superimposing and converting multiple data sources, and for defining the guide model functions;
the sensor calibration unit is used for pre-calibrating the vehicle-mounted environment sensing module 2; the pre-calibrated information comprises vehicle basic attribute information, attitude reference point information, obstacle classification information, and guide-line scale information;
the task interaction unit is used for pre-configuring the loader's operation task and operation target according to the guide model functions, including the task type, task angle range, task duration, task grade and evaluation rule;
the computing unit is also used for collecting and storing the loader's guided operation data, performing big data analysis on the guided operation data, constructing a digital model from the guided operation data, simulating the real loader operation scene according to the big data analysis results, and evaluating the guided operation process against the digital model. The computing unit may be an industrial computer.
Specifically, the guide model functions in this scheme include one-key leveling, one-key lifting, stockpile identification, route planning, blind-zone detection, safety early warning, autonomous loading and unloading, and operation statistics.
One-key leveling is obtained by a data-coupling operation on the boom inclination angle and the bucket inclination angle. The boom inclination angle is preset to the interval 0-100%, and the bucket inclination angle to the interval -50% to 50%. When both the boom inclination angle and the bucket inclination angle lie within 0-35%, the bucket inclination is first adjusted to 0%, and the boom inclination is then adjusted to 0%.
One-key lifting is likewise obtained by a data-coupling operation on the boom inclination angle and the bucket inclination angle, with the same preset intervals. As the boom inclination angle rises gradually from 0% to 50%, the bucket inclination angle is held at 0%; as the boom inclination angle rises from 50% to 100%, the bucket inclination angle is gradually adjusted toward 50%.
The data-coupling principle of one-key lifting and one-key leveling: the boom and bucket inclination ranges are preset in advance, a series of position points within those ranges are calibrated and stored, and the loader's bucket and boom are alternately controlled toward the corresponding position points, as sketched below.
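The following is a minimal sketch of this coupling logic under the percentage intervals stated above; the actuator callbacks (send_boom_target, send_bucket_target) are hypothetical placeholders for the real hydraulic control interface.
```python
# Sketch of the one-key leveling / one-key lifting coupling. Intervals follow
# the text (boom 0-100%, bucket -50%..50%); the send_* callbacks are
# hypothetical stand-ins for the loader's hydraulic control interface.

def one_key_level(boom_pct, bucket_pct, send_boom_target, send_bucket_target):
    # When both inclinations lie within 0-35%, first drive the bucket to 0%,
    # then drive the boom to 0%.
    if 0.0 <= boom_pct <= 35.0 and 0.0 <= bucket_pct <= 35.0:
        send_bucket_target(0.0)  # bucket first
        send_boom_target(0.0)    # then boom

def one_key_lift(boom_pct, send_bucket_target):
    # Bucket held at 0% while the boom rises through 0-50%; from 50% to 100%
    # the bucket target ramps linearly toward 50%.
    if boom_pct <= 50.0:
        send_bucket_target(0.0)
    else:
        send_bucket_target((boom_pct - 50.0) / 50.0 * 50.0)
```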
Stockpile identification uses a deep-learning neural network method to identify the shape, gradient and height of the stockpile. The identification process consists of image preprocessing, semantic segmentation, image classification, labeling and recognition; the pile's shape, gradient and height data are obtained by image calibration, double-checked against the laser point cloud data. The deep-learning methods cover two families of algorithms: two-stage algorithms (e.g., the R-CNN series) and one-stage algorithms (e.g., YOLO, SSD). The main difference is that a two-stage algorithm first generates proposals (pre-selected boxes possibly containing the object to be detected), followed by fine-grained object detection, whereas a one-stage algorithm directly extracts features in the network to predict object class and location. The core of the region-extraction step in the two-stage algorithm is a convolutional neural network (CNN): features are extracted by a CNN backbone, candidate regions are found, and a sliding window finally determines the target class and position. The one-stage algorithm performs feature extraction, target classification and position regression in a single fully convolutional network, obtaining target position and class in one forward pass; with recognition accuracy only slightly below the two-stage detectors, its speed is greatly improved. This scheme mainly adopts the one-stage YOLOv4 algorithm to identify the shape, gradient and height of the stockpile.
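To make the lidar double check concrete, here is a small sketch that estimates pile height and average slope from the segmented stockpile cluster; the flat-ground assumption and the apex-based slope estimate are simplifications, not the scheme's exact method.
```python
# Sketch: stockpile height and average slope from the segmented lidar
# cluster, cross-checking the image-based result. Flat ground and an
# apex-based slope estimate are simplifying assumptions.
import numpy as np

def pile_geometry(points, min_run=0.5):
    """points: Nx3 array (x, y, z) of the stockpile point cloud cluster."""
    height = points[:, 2].max() - points[:, 2].min()
    apex = points[points[:, 2].argmax()]           # highest point of the pile
    run = np.linalg.norm(points[:, :2] - apex[:2], axis=1)
    keep = run > min_run                           # skip points at the apex
    slope_deg = np.degrees(np.arctan2(apex[2] - points[keep, 2], run[keep])).mean()
    return height, slope_deg
```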
Route planning designs the autonomous movement route from the perception results; the route is displayed on the main man-machine interaction page of the touch screen and output as a green guide line. The path planning adopts the existing DQN algorithm to realize the autonomous motion planning route: DQN performs reinforcement learning with the vehicle-mounted environment sensing data to train a neural network, and the network's training parameters are obtained by training and optimizing the neural network model, yielding relatively accurate path output.
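As an illustration of the DQN training step, the sketch below performs one temporal-difference update on a single transition; the state encoding (a flattened local occupancy grid) and the four-action move set are assumptions, and the replay buffer and target network of a full DQN are omitted for brevity.
```python
# One TD update of a small Q-network (the core of DQN). State size, action
# set and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn

N_STATE, N_ACTIONS, GAMMA = 64, 4, 0.95   # grid features; 4 move directions
qnet = nn.Sequential(nn.Linear(N_STATE, 128), nn.ReLU(),
                     nn.Linear(128, N_ACTIONS))
optimizer = torch.optim.Adam(qnet.parameters(), lr=1e-3)

def td_step(state, action, reward, next_state, done):
    """state/next_state: float tensors of shape (N_STATE,)."""
    q = qnet(state)[action]
    with torch.no_grad():
        target = reward + (0.0 if done else GAMMA * qnet(next_state).max())
    loss = (q - target) ** 2
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```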
Blind-zone detection applies a clustering, extraction and recognition process to the lidar point cloud data, ensuring safety in the blind zones outside the forward direction. The point cloud clustering in this scheme is realized with an existing clustering algorithm, such as k-means, DBSCAN, or Euclidean clustering. In lidar point cloud data, the distance between two points within the cluster of the same object is smaller than a certain value, while the distance between clusters of different objects is larger than a certain value; following this principle, the Euclidean clustering algorithm merges points whose Euclidean distance is below a set threshold into one class, completing the clustering process.
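A minimal sketch of that Euclidean clustering step follows; the KD-tree neighbour search, the 0.5 m distance threshold and the minimum cluster size are assumptions chosen for illustration.
```python
# Euclidean clustering sketch for blind-zone detection: points closer than
# eps are merged into one cluster. eps and min_size are assumptions.
import numpy as np
from scipy.spatial import cKDTree

def euclidean_cluster(points, eps=0.5, min_size=10):
    """points: Nx3 lidar point array; returns a list of index lists."""
    tree = cKDTree(points)
    unvisited = set(range(len(points)))
    clusters = []
    while unvisited:
        seed = unvisited.pop()
        cluster, frontier = [seed], [seed]
        while frontier:
            idx = frontier.pop()
            for nb in tree.query_ball_point(points[idx], eps):
                if nb in unvisited:
                    unvisited.remove(nb)
                    cluster.append(nb)
                    frontier.append(nb)
        if len(cluster) >= min_size:
            clusters.append(cluster)
    return clusters
```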
Safety early warning performs a composite detection and recognition process on the AI camera images and the lidar data, outputting detection results in real time; this enables pedestrian and obstacle detection with autonomous early warning, and active safety intervention when necessary, ensuring construction safety. In essence, corresponding safety early-warning rules are set, and a warning is raised whenever the image and radar data contain a condition that triggers a rule.
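One possible form of such a rule is sketched below: a camera detection classified as a person is cross-checked against the lidar range in that direction, and a warning is raised inside a trigger distance. The 5 m threshold and the fused input interfaces are illustrative assumptions.
```python
# Sketch of one composite early-warning rule (camera class + lidar range).
# The 5 m trigger distance and the input interfaces are assumptions.
def safety_check(detections, lidar_range_m, warn=print):
    """detections: iterable of (class_name, bearing_deg);
    lidar_range_m(bearing_deg): nearest lidar return at that bearing."""
    for class_name, bearing in detections:
        if class_name == "person" and lidar_range_m(bearing) < 5.0:
            warn(f"pedestrian at bearing {bearing:.0f} deg within 5 m")
            return True   # caller may trigger active safety intervention
    return False
```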
Autonomous loading and unloading is realized by the combined processing of one-key leveling, one-key lifting, stockpile identification, route planning, blind-zone detection and safety early warning: based on the stockpile identification, the loader is guided to drive autonomously along the planned route to the loading and unloading points and to perform one-key leveling, autonomous loading and one-key lifting actions. Blind-zone detection guarantees the safety boundary of the moving vehicle, and safety early warning ensures that the vehicle adjusts and takes avoidance action in time when danger is present.
Operation statistics uses material-weighing count feedback, operation habits, operation duration and operation shifts, combined with back-end big data, to provide driving-habit and work-efficiency evaluation.
Further, in one embodiment, collecting the loader's guided operation data and performing big data analysis on it specifically comprises: during the loader's guided operation, the man-machine interaction module 4 collects real-time vehicle-mounted sensor data through the vehicle-mounted environment sensing module 2, collects driver operation records through the touch screen, and establishes a time-series database to store both; a data mining algorithm is then applied to the stored sensor data and operation records to establish a correspondence sequence between the driver operation records and the loader's running state. Accumulated over a long period, this effectively extracts the operating habits of the best drivers and the optimal vehicle running state, providing a reference for subsequently combining vehicle-body and sensor data to autonomously control the engine's work rhythm, reduce fuel consumption and improve work efficiency. A minimal sketch of the logging and mining step follows.
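The sketch joins each operator action with the vehicle state sampled closest in time and counts frequent (operation, state) pairs; the field names (ts, op, gear) are assumptions, not the scheme's actual schema.
```python
# Sketch: align driver operation records with the nearest-in-time vehicle
# state and count (operation, gear) co-occurrences. Column names (ts, op,
# gear) are illustrative assumptions.
import pandas as pd

def mine_operation_patterns(sensor_df, ops_df):
    """sensor_df: timestamped vehicle state; ops_df: driver operation log."""
    merged = pd.merge_asof(ops_df.sort_values("ts"),
                           sensor_df.sort_values("ts"), on="ts")
    # correspondence sequence: frequency of each operation per vehicle state
    return merged.groupby(["op", "gear"]).size().sort_values(ascending=False)
```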
Further, in one embodiment, virtual modeling is the process of creating the digital model. Through the model and the big data analysis results, the state of the work site can be displayed clearly, the vehicle's running state can be warned of in time, and maintenance information and the surrounding work environment can be predicted in advance. The digital model uses the boom inclination angle, bucket inclination angle, articulation angle, laser point cloud data, IMU attitude data, camera video detection, material weight, GPS/RTK information and a three-dimensional map of the working face to build a 3D digital-twin visualization platform that restores the construction site and simulates the real operation scene. Constructing the digital model from the guided operation data specifically comprises: constructing a 3D digital-twin visual digital model on a 3D map, using the boom inclination angle, bucket inclination angle, articulation angle, laser point cloud data, inertial attitude data, surround-view image data, material weight and positioning data from the vehicle-mounted sensor data, combined with a high-definition three-dimensional map of the loader working face for three-dimensional coordinate conversion.
The digital model in this scheme relies on laser point cloud registration, which comprises time registration and spatial registration. Time registration is realized by a combination of software and hardware, mainly using a unified GPS time system; spatial registration mainly uses the three-dimensional coordinates (X, Y, Z), laser reflection intensity, and color information (RGB) of points on the target surface, converted into the model by three-dimensional coordinate transformation using the position and attitude data provided by GPS/RTK and the IMU. The sensors collect data such as the boom inclination angle, bucket inclination angle, articulation angle and material weight, and the real-vehicle data are imported into the digital model for analysis. GIS technology is fused with the digital twin to produce a high-precision 3D map covering the whole site, realizing three-dimensional visual management of construction scenes, facilities and building-construction positioning. Meanwhile, using the high-precision positioning capability, a vehicle carrying high-precision positioning equipment presents its travel route and position on the 3D map. Based on the vehicle's real-time position and the material weight, operators can effectively match the transport demand of the loading and unloading points in real time as required, restore the construction site, simulate the real operation scene, and complete the operation task efficiently and safely.
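The coordinate-conversion core of that spatial registration can be sketched as a pose transform of lidar points into the map frame; the Z-Y-X rotation order and the ENU map frame are simplifying assumptions.
```python
# Sketch of the spatial-registration transform: rotate lidar points by the
# IMU attitude (roll, pitch, yaw; Z-Y-X order assumed) and translate by the
# GPS/RTK position in an assumed ENU map frame.
import numpy as np

def body_to_map(points, roll, pitch, yaw, pos_enu):
    """points: Nx3 array in the vehicle body frame; angles in radians."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    return points @ (Rz @ Ry @ Rx).T + np.asarray(pos_enu)
```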
Further, in one embodiment, the process of setting the operation task and operation target specifically comprises setting the task type, task angle range, task duration, task grade and evaluation rule. Task types can be divided into field leveling, pile collection, material turnover, and material loading and unloading; the task duration can be 10, 15, 30, 45 or 60 minutes; the task grade can be first, second or third (first is highest, with the strictest constraints); the evaluation rule can be manual or automatic. A set operation task may be a superposition of multiple tasks, constituting a job task set, as in the illustrative configuration below.
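An illustrative job-task set built from the fields above might look like the following; the exact schema is an assumption, not the scheme's file format.
```python
# Illustrative job-task set (schema assumed, values from the options above).
job_task_set = [
    {
        "type": "material_loading",   # field leveling / pile collection / ...
        "angle_range_deg": (0, 90),   # working sector for the task
        "duration_min": 30,           # one of 10 / 15 / 30 / 45 / 60
        "grade": 1,                   # first grade: strictest constraints
        "evaluation": "automatic",    # or "manual"
    },
    {
        "type": "field_leveling",
        "angle_range_deg": (90, 180),
        "duration_min": 15,
        "grade": 2,
        "evaluation": "manual",
    },
]
```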
Further, in one embodiment, operation evaluation refers to evaluating the compliance of the loader's operating process. Establishing an operation evaluation system, or setting corresponding operation evaluation rules, makes it convenient to screen high-quality drivers, monitor the loader's optimal working state in real time, and provide a basis for operation-quality feedback.
Further, in one embodiment, the wireless communication module 3 is mounted on top of the loader cab and establishes its wireless communication mechanism over the TCP/IP protocol; the wireless communication module 3 is a broadband ad hoc network device or a 4G/5G wireless router.
The man-machine interaction module 4 issues the operation task and operation target through the wireless communication module 3; that is, the task or task-set file edited at the cockpit end is transmitted as a byte sequence according to a fixed protocol. Task issuance follows TCP communication rules and includes an encryption mechanism, along the lines of the sketch below.
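A minimal sketch of such an issuance message follows: a small fixed header (magic bytes, version, payload length) framing an already-encrypted task payload over TCP. The header layout is an assumption used to illustrate the fixed-protocol byte sequence, not the scheme's actual wire format.
```python
# Sketch: issue an encrypted task file over TCP with a fixed binary header.
# The 'LDRT' magic, version byte and length field are assumed framing.
import socket
import struct

def send_task(host, port, encrypted_task_bytes):
    header = struct.pack("!4sBI", b"LDRT", 1, len(encrypted_task_bytes))
    with socket.create_connection((host, port)) as sock:
        sock.sendall(header + encrypted_task_bytes)
```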
Referring to fig. 2, the workflow of the loader operation guidance system of the present application specifically comprises: installing the corresponding hardware units (sensors, vehicle-mounted terminal 1, etc.) on the engineering vehicle; acquiring hardware unit data and establishing communication; establishing the guide model from the hardware unit data; setting the operation task and target; issuing the task and performing guided operation; collecting the operation data set and performing big data analysis; performing virtual modeling from the operation data; and performing operation evaluation against the digital model.
In this scheme, the remote cockpit serves as the operating station, and multiple sensors added to the loader achieve real-time monitoring of the loader's vehicle attitude, acquisition of the vehicle's surrounding environment, and safety early warning with control output. Introducing the loader operation guidance system optimizes operating efficiency, maintains the safety of the work vehicle, and enriches the operation.
It is to be understood that the same or similar parts in the above embodiments may be referred to each other, and that in some embodiments, the same or similar parts in other embodiments may be referred to.
It should be noted that in the description of the present application, the terms "first", "second", and the like are used for descriptive purposes only and are not to be construed as indicating or implying relative importance. Furthermore, in the description of the present application, unless otherwise indicated, "plurality" and "multiple" mean at least two.
It will be understood that when an element is referred to as being "mounted" or "disposed" on another element, it can be directly on the other element or intervening elements may also be present; when an element is referred to as being "connected" to another element, it can be directly connected to the other element or intervening elements may be present, and further, as used herein, connection may comprise a wireless connection; the use of the term "and/or" includes any and all combinations of one or more of the associated listed items.
Any process or method description in a flowchart or otherwise described herein may be understood as representing a module, segment, or portion of code comprising one or more executable instructions for implementing specific logical functions or steps of the process. The scope of the preferred embodiments of the present application includes additional implementations in which functions may be executed out of the order shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those skilled in the art.
It is to be understood that portions of the present application may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, the steps or methods may be implemented in software or firmware stored in a memory and executed by a suitable instruction execution system. If implemented in hardware, as in another embodiment, they may be implemented using any one or a combination of the following techniques known in the art: discrete logic circuits with logic gates for implementing logic functions on data signals, application-specific integrated circuits with suitable combinational logic gates, programmable gate arrays (PGAs), field-programmable gate arrays (FPGAs), and the like.
Those of ordinary skill in the art will appreciate that all or a portion of the steps carried out in the method of the above-described embodiments may be implemented by a program to instruct related hardware, where the program may be stored in a computer readable storage medium, and where the program, when executed, includes one or a combination of the steps of the method embodiments.
In addition, each functional unit in each embodiment of the present application may be integrated in one processing module, or each unit may exist alone physically, or two or more units may be integrated in one module. The integrated modules may be implemented in hardware or in software functional modules. The integrated modules may also be stored in a computer readable storage medium if implemented in the form of software functional modules and sold or used as a stand-alone product.
The above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, or the like.
In the description of the present specification, a description referring to terms "one embodiment," "some embodiments," "examples," "specific examples," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present application. In this specification, schematic representations of the above terms do not necessarily refer to the same embodiments or examples. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
Although embodiments of the present application have been shown and described above, it will be understood that the above embodiments are illustrative and not to be construed as limiting the application, and that variations, modifications, alternatives, and variations may be made to the above embodiments by one of ordinary skill in the art within the scope of the application.

Claims (8)

1. A loader operation guidance system for non-line-of-sight remote control driving, comprising:
the vehicle-mounted terminal is used for collecting vehicle data of the loader;
the vehicle-mounted environment sensing module is used for collecting vehicle-mounted sensor data of the loader;
the wireless communication module is used for networked wireless communication with the man-machine interaction module of the cockpit; the wireless communication module sends the vehicle-mounted sensor data and the vehicle data to the man-machine interaction module and receives the operation task and operation target from the man-machine interaction module;
the man-machine interaction module is used for establishing a guide model based on the vehicle-mounted sensor data and vehicle data, setting the operation task and operation target, collecting the loader's guided operation data, performing big data analysis on the guided operation data, constructing a digital model from the guided operation data, simulating the real loader operation scene according to the big data analysis results, and evaluating the guided operation process against the digital model;
and the guide control module is used for receiving the operation task and operation target from the man-machine interaction module and guiding the loader operation accordingly.
2. The loader operation guidance system for non-line-of-sight remote control driving according to claim 1, wherein the man-machine interaction module specifically comprises a touch screen, a computing unit, a sensor calibration unit and a task interaction unit;
the touch screen is connected with the computing unit and is used for displaying the vehicle-mounted sensor data, the vehicle data and the loader's working scene, and for receiving operation instructions input by the driver;
the computing unit is connected with the touch screen, the sensor calibration unit and the task interaction unit respectively, and is used for establishing the guide model from the vehicle-mounted sensor data and vehicle data by compositing, superimposing and converting multiple data sources, and for defining the guide model functions;
the sensor calibration unit is used for pre-calibrating the vehicle-mounted environment sensing module; the pre-calibrated information comprises vehicle basic attribute information, attitude reference point information, obstacle classification information, and guide-line scale information;
the task interaction unit is used for pre-configuring the loader's operation task and operation target according to the guide model functions, including the task type, task angle range, task duration, task grade and evaluation rule;
the computing unit is also used for collecting and storing the loader's guided operation data, performing big data analysis on the guided operation data, constructing a digital model from the guided operation data, simulating the real loader operation scene according to the big data analysis results, and evaluating the guided operation process against the digital model.
3. The loader operation guidance system for non-line-of-sight remote control driving according to claim 1, wherein the vehicle-mounted environment sensing module comprises an inclinometer, an angle encoder, a laser radar, an inertial measurement unit, an AI camera, a weighing valve block and a positioning module;
two inclinometers are provided, mounted on the boom and the bucket of the loader respectively, for collecting boom and bucket attitude information;
the angle encoder is mounted at the articulation joint of the loader body, for collecting the relative attitude angle between the front and rear bodies;
three laser radars are provided, mounted on the left side, right side and rear of the loader cab respectively, for collecting laser point cloud data to the left, right and rear of the loader;
the inertial measurement unit is mounted at the headlight position on the loader's front body, for collecting the loader's vehicle attitude information;
the AI cameras are mounted on the left, front, right and rear sides above the loader cab respectively, for collecting 360-degree surround-view image data around the loader;
the weighing valve block is mounted in the hydraulic circuit of the loader implement, for measuring the weight of material loaded and unloaded by the implement;
the positioning module is mounted on top of the loader cab, for collecting the loader's positioning data in open scenes.
4. The loader operation guidance system for non-line-of-sight remote control driving according to claim 1, wherein the vehicle-mounted sensor data comprise the boom inclination angle, bucket inclination angle, articulation angle, laser point cloud data, inertial attitude data, surround-view image data, material weight, positioning data, and elevation and heading information; the vehicle data comprise engine speed, ignition status, turn-signal status, high/low-beam status, parking status, gear status, wiper status, horn status, cooling-fan status, and gearbox power status.
5. The loader operation guidance system for non-line-of-sight remote control driving according to claim 1, wherein the wireless communication module is mounted on top of the loader cab and establishes its wireless communication mechanism over the TCP/IP protocol; the wireless communication module is a broadband ad hoc network device or a 4G/5G wireless router.
6. The loader operation guidance system for non-line-of-sight remote control driving according to claim 2, wherein collecting the loader's guided operation data and performing big data analysis on it specifically comprises: during the loader's guided operation, the man-machine interaction module collects real-time vehicle-mounted sensor data through the vehicle-mounted environment sensing module, collects driver operation records through the touch screen, and establishes a time-series database to store the vehicle-mounted sensor data and the driver operation records;
a data mining algorithm is applied to the vehicle-mounted sensor data and driver operation records stored in the time-series database to establish a correspondence sequence between the driver operation records and the loader's running state.
7. The loader operation guidance system for non-line-of-sight remote control driving according to claim 2, wherein constructing the digital model from the guided operation data specifically comprises:
constructing a 3D digital-twin visual digital model on a 3D map, using the boom inclination angle, bucket inclination angle, articulation angle, laser point cloud data, inertial attitude data, surround-view image data, material weight and positioning data from the vehicle-mounted sensor data, combined with a high-definition three-dimensional map of the loader working face for three-dimensional coordinate conversion.
8. The loader operation guidance system for non-line-of-sight remote control driving according to claim 2, wherein the guide model functions comprise one-key leveling, one-key lifting, stockpile identification, route planning, blind-zone detection, safety early warning, autonomous loading and unloading, and operation statistics.
CN202211609915.3A 2022-12-12 2022-12-12 Loader operation guiding system for non-line-of-sight remote control driving Pending CN116321053A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211609915.3A CN116321053A (en) 2022-12-12 2022-12-12 Loader operation guiding system for non-line-of-sight remote control driving

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211609915.3A CN116321053A (en) 2022-12-12 2022-12-12 Loader operation guiding system for non-line-of-sight remote control driving

Publications (1)

Publication Number Publication Date
CN116321053A (en) 2023-06-23

Family

ID=86791224

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211609915.3A Pending CN116321053A (en) 2022-12-12 2022-12-12 Loader operation guiding system for non-line-of-sight remote control driving

Country Status (1)

Country Link
CN (1) CN116321053A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination