CN112965504B - Remote confirmation method, device and equipment based on automatic driving and storage medium - Google Patents


Info

Publication number
CN112965504B
Authority
CN
China
Prior art keywords
remote
driving
monitoring
unmanned automobile
vehicle
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110487438.7A
Other languages
Chinese (zh)
Other versions
CN112965504A (en)
Inventor
熊禹
周君武
梁国全
罗文�
Current Assignee
Dongfeng Liuzhou Motor Co Ltd
Original Assignee
Dongfeng Liuzhou Motor Co Ltd
Priority date
Filing date
Publication date
Application filed by Dongfeng Liuzhou Motor Co Ltd filed Critical Dongfeng Liuzhou Motor Co Ltd
Publication of CN112965504A publication Critical patent/CN112965504A/en
Application granted granted Critical
Publication of CN112965504B publication Critical patent/CN112965504B/en


Classifications

    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02 Control of position or course in two dimensions
    • G05D1/021 Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0231 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • G05D1/0246 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means
    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02 Control of position or course in two dimensions
    • G05D1/021 Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0268 Control of position or course in two dimensions specially adapted to land vehicles using internal positioning means
    • G05D1/0274 Control of position or course in two dimensions specially adapted to land vehicles using internal positioning means using mapping information stored in a memory device
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00 Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/02 Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00 Road transport of goods or passengers
    • Y02T10/10 Internal combustion engine [ICE] based vehicles
    • Y02T10/40 Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Automation & Control Theory (AREA)
  • General Physics & Mathematics (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Electromagnetism (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Traffic Control Systems (AREA)
  • Selective Calling Equipment (AREA)
  • Studio Devices (AREA)
  • Combined Controls Of Internal Combustion Engines (AREA)
  • Steering Control In Accordance With Driving Conditions (AREA)
  • Control Of Driving Devices And Active Controlling Of Vehicle (AREA)

Abstract

The invention belongs to the technical field of automobiles and discloses a remote confirmation method, device, equipment and storage medium based on automatic driving. In the method, a remote control platform judges, in a remote monitoring platform client mode, whether the requirement for online safety monitoring by remote personnel is met. In the remote monitoring platform client mode, the unmanned automobile judges the driving action to be executed currently according to vehicle driving monitoring data; if a second driving action is to be executed currently, it receives a driving operation instruction sent by the remote control platform and enters the remote monitoring platform client mode. If the requirement is not met, the remote control platform sends a parking instruction to the unmanned automobile, so that the unmanned automobile executes a parking operation according to the parking instruction. In this way, the remote control platform ensures that remote personnel remain in an online safety-monitoring state, and remote personnel can monitor the vehicle through the remote monitoring platform client, which avoids harming the driving safety of the vehicle when the safety monitoring of remote personnel is lost.

Description

Remote confirmation method, device and equipment based on automatic driving and storage medium
The present invention claims priority to the Chinese patent application entitled "Method of Controlling an Unmanned Vehicle, and Storage Medium", application number 202010416571.9, filed with the Chinese Patent Office on 15 May 2020, the entire contents of which are incorporated herein by reference.
Technical Field
The invention relates to the technical field of automobiles, in particular to a remote confirmation method, a remote confirmation device, remote confirmation equipment and a storage medium based on automatic driving.
Background
At present, when an automatic driving vehicle is under remote control, an operator who is not physically in the vehicle controls it remotely according to road conditions and the driving state of the vehicle. This improves the safety of controlling the unmanned automobile, but the remote monitoring system itself also faces severe safety problems: if safety monitoring is lost, the driving safety of the vehicle may be affected and public safety may even be endangered.
The above is only for the purpose of assisting understanding of the technical aspects of the present invention, and does not represent an admission that the above is prior art.
Disclosure of Invention
The main objective of the invention is to provide a remote confirmation method, device, equipment and storage medium based on automatic driving, aiming to solve the technical problem that an existing remote monitoring system, once its safety monitoring is lost, can affect the driving safety of the vehicle and even endanger public safety.
In order to achieve the above object, the present invention provides an automatic driving-based remote confirmation method, including the steps of:
the remote control platform judges, in a remote monitoring platform client mode, whether the requirement for online safety monitoring by remote personnel is met. In the remote monitoring platform client mode, the unmanned automobile acquires vehicle driving monitoring data and judges the driving action to be executed currently according to the vehicle driving monitoring data; if a first driving action is to be executed currently, it receives a driving operation instruction of the unmanned automobile and executes an automatic driving operation of the unmanned automobile according to the driving operation instruction; if a second driving action is to be executed currently, it receives a driving operation instruction sent by the remote control platform and executes a remote driving control operation of the unmanned automobile according to the driving operation instruction, so as to enter the remote monitoring platform client mode through the remote driving control operation;
if the requirement is not met, the remote control platform sends a parking instruction to the unmanned automobile over 5G, so that the unmanned automobile executes a parking operation according to the parking instruction.
Optionally, the determining, by the remote control platform, whether the requirement of online security monitoring of the remote personnel is met in a client mode of the remote monitoring platform includes:
the remote control platform acquires attitude information of a video monitoring object within a cycle period in the remote monitoring platform client mode;
determining attitude change information of the video monitoring object through the attitude information based on a characterization model of a human body target;
judging whether the video monitoring object meets a preset monitoring posture or not according to the posture change information so as to obtain a posture judgment result;
and judging whether the requirements of remote personnel on-line safety monitoring are met or not according to the posture judgment result.
Optionally, before the determining, by the human body target-based characterization model, the posture change information of the video monitoring object according to the posture information, the method further includes:
constructing a human body target detection algorithm based on the head-shoulder convolution characteristics according to a preset region full convolution neural network;
and determining a human body head-shoulder model through the human body target detection algorithm based on the head-shoulder convolution characteristics, and taking the human body head-shoulder model as a representation model of a human body target in a video monitoring scene.
Optionally, the determining, by the human target-based representation model, pose change information of the video monitoring object according to the pose information includes:
determining a gradient histogram feature vector set and a color feature vector set through the posture information based on a human body target representation model;
generating target fusion characteristics according to the gradient histogram characteristic vector set and the color characteristic vector set through a preset principal component analysis algorithm;
classifying the target fusion characteristics according to a support vector machine classifier to obtain a classification result;
and generating the attitude change information of the video monitoring object according to the classification result.
Optionally, the determining, by the human target-based characterization model, a gradient histogram feature vector set and a color feature vector set according to the pose information includes:
determining a direction gradient histogram model and a hexagonal pyramid color model based on the characterization model of the human body target;
determining a gradient histogram feature vector set through the attitude information according to the direction gradient histogram model;
determining a posture image according to the hexagonal pyramid color model through the posture information, and dividing the posture image into a plurality of image blocks;
and determining the color mean value of the pixel points in the image block, and generating a color feature vector set according to the color mean value.
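The colour-feature steps above (divide the attitude image into blocks, take the mean colour of the pixels in each block, collect the means into a feature vector set) can be sketched in Python with NumPy. This is a minimal illustration under stated assumptions: the function name, the tile size, and the flat-vector layout are illustrative choices, not values fixed by the patent.

```python
import numpy as np

def block_color_means(image, block=8):
    """Split an HxWxC image into block x block tiles and return the
    per-tile mean colour as one flat feature vector (a sketch of the
    colour-feature step; the tile size is an assumed parameter)."""
    h, w, c = image.shape
    feats = []
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            tile = image[y:y + block, x:x + block]
            feats.append(tile.reshape(-1, c).mean(axis=0))  # mean per channel
    return np.concatenate(feats)
```

For a 16x16 RGB image with 8-pixel tiles this yields 4 tiles times 3 channels, i.e. a 12-element vector.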
Optionally, the determining a gradient histogram feature vector set according to the direction gradient histogram model through the attitude information includes:
determining an attitude image according to the direction gradient histogram model through the attitude information, and converting the attitude image into a gray image;
determining the gray value of a pixel point in the gray image, and determining the gradient amplitude and the gradient direction corresponding to the pixel point according to the gray value;
dividing the gray level image into a plurality of cells, and determining a histogram of gradient directions of the cells according to the gradient amplitude and the gradient directions;
selecting a preset number of histograms of the gradient directions, and generating a target image block according to the preset number of histograms of the gradient directions;
and normalizing the histogram of the gradient direction in the target image block to obtain a plurality of gradient histogram feature vectors, and generating a gradient histogram feature vector set according to the gradient histogram feature vectors.
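The histogram-of-oriented-gradients steps above (per-pixel gradient magnitude and direction from the gray image, per-cell orientation histograms, normalization over blocks of cells) can be sketched in Python with NumPy. This is a hedged sketch, not the patent's implementation: the cell size, bin count, unsigned-orientation convention and 2x2-cell blocks are assumed values.

```python
import numpy as np

def hog_features(gray, cell=8, bins=9):
    """Minimal HOG sketch: gradients per pixel, per-cell orientation
    histograms weighted by gradient magnitude, then L2 normalization
    over 2x2-cell blocks. Parameter values are assumptions."""
    gy, gx = np.gradient(gray.astype(float))
    mag = np.hypot(gx, gy)                          # gradient amplitude
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0    # unsigned direction

    h, w = gray.shape
    cells_y, cells_x = h // cell, w // cell
    hist = np.zeros((cells_y, cells_x, bins))
    bin_w = 180.0 / bins
    for cy in range(cells_y):
        for cx in range(cells_x):
            m = mag[cy*cell:(cy+1)*cell, cx*cell:(cx+1)*cell].ravel()
            a = ang[cy*cell:(cy+1)*cell, cx*cell:(cx+1)*cell].ravel()
            idx = np.minimum((a // bin_w).astype(int), bins - 1)
            np.add.at(hist[cy, cx], idx, m)         # accumulate into bins

    feats = []
    for cy in range(cells_y - 1):                   # overlapping 2x2 blocks
        for cx in range(cells_x - 1):
            blk = hist[cy:cy+2, cx:cx+2].ravel()
            feats.append(blk / (np.linalg.norm(blk) + 1e-6))
    return np.concatenate(feats)
```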
Optionally, the generating a target fusion feature according to the gradient histogram feature vector set and the color feature vector set by using a preset principal component analysis algorithm includes:
acquiring a gradient histogram feature vector according to the gradient histogram feature vector set, and acquiring a color feature vector according to the color feature vector set;
determining an original feature matrix according to the gradient histogram feature vector and the color feature vector;
determining a covariance matrix of the original feature matrix, and determining an eigenvalue and an eigenvector matrix of the covariance matrix;
determining a transformation feature matrix of the principal component according to the feature value and the feature vector matrix through a preset principal component analysis algorithm;
and fusing according to the original feature matrix and the transformed feature matrix to obtain a fused feature matrix, and determining target fusion features according to the fused feature matrix.
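The principal-component fusion steps above (original feature matrix, covariance matrix, eigenvalues and eigenvectors, transformed feature matrix, fusion) can be sketched in Python with NumPy. The function name, the choice of `k` principal components, and fusing by concatenation are assumptions for illustration only.

```python
import numpy as np

def pca_fuse(features, k=1):
    """Sketch of the fusion step: centre the original feature matrix,
    take the top-k eigenvectors of its covariance matrix, project to
    get the transformed feature matrix, and fuse by concatenating the
    original and transformed features."""
    x = features - features.mean(axis=0)           # centre each column
    cov = np.cov(x, rowvar=False)                  # covariance matrix
    vals, vecs = np.linalg.eigh(cov)               # eigen-decomposition
    top = vecs[:, np.argsort(vals)[::-1][:k]]      # top-k principal axes
    transformed = x @ top                          # transformed matrix
    return np.hstack([features, transformed])      # fused feature matrix
```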
In addition, in order to achieve the above object, the present invention also provides an automatic driving-based remote confirmation apparatus, including:
the remote monitoring platform client mode is used for acquiring vehicle driving monitoring data for the unmanned automobile and judging the driving action to be executed currently according to the vehicle driving monitoring data; if the first driving action is required to be executed at present, receiving a driving operation instruction of the unmanned automobile, and executing automatic driving operation of the unmanned automobile according to the driving operation instruction; if the second driving action is required to be executed currently, receiving a driving operation instruction sent by a remote control platform, and executing remote driving control operation of the unmanned automobile according to the driving operation instruction so as to enter a remote monitoring platform client mode through the remote driving control operation;
and the sending module is used for sending a parking instruction to the unmanned automobile over 5G if the requirement is not met, so that the unmanned automobile executes a parking operation according to the parking instruction.
Further, to achieve the above object, the present invention also proposes an automatic driving-based remote confirmation apparatus including: a memory, a processor, and an autonomous driving based remote confirmation program stored on the memory and executable on the processor, the autonomous driving based remote confirmation program configured to implement an autonomous driving based remote confirmation method as described above.
In addition, in order to achieve the above object, the present invention further proposes a storage medium having stored thereon an automatic driving-based remote confirmation program that, when executed by a processor, implements the automatic driving-based remote confirmation method as described above.
The remote control platform judges, in a remote monitoring platform client mode, whether the requirement for online safety monitoring by remote personnel is met. In the remote monitoring platform client mode, the unmanned automobile acquires vehicle driving monitoring data and judges the driving action to be executed currently; if a first driving action is to be executed, it receives a driving operation instruction of the unmanned automobile and executes an automatic driving operation accordingly; if a second driving action is to be executed, it receives a driving operation instruction sent by the remote control platform and executes a remote driving control operation accordingly, thereby entering the remote monitoring platform client mode. If the requirement is not met, the remote control platform sends a parking instruction to the unmanned automobile over 5G, so that the unmanned automobile executes a parking operation according to the parking instruction. In the invention, the remote control platform ensures through this mode that remote personnel are in an online safety-monitoring state, and remote personnel can visually monitor the driving environment and driving state of the vehicle through the remote monitoring platform client. This improves the accuracy of vehicle control and solves the technical problem that an existing remote monitoring system affects the driving safety of the vehicle, or even endangers public safety, when safety monitoring is lost.
Drawings
FIG. 1 is a schematic diagram of an autopilot-based remote validation device for a hardware operating environment according to an embodiment of the present invention;
FIG. 2 is a schematic flow chart illustrating a first embodiment of an autopilot-based remote confirmation method of the present invention;
FIG. 3 is a schematic flow chart illustrating a second embodiment of an autopilot-based remote confirmation method of the present invention;
fig. 4 is a block diagram illustrating a first embodiment of an automatic driving-based remote confirmation apparatus according to the present invention.
The implementation, functional features and advantages of the objects of the present invention will be further explained with reference to the accompanying drawings.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
Referring to fig. 1, fig. 1 is a schematic structural diagram of an automatic driving-based remote confirmation device for a hardware operating environment according to an embodiment of the present invention.
As shown in fig. 1, the automatic driving-based remote confirmation apparatus may include: a processor 1001, such as a Central Processing Unit (CPU), a communication bus 1002, a user interface 1003, a network interface 1004, and a memory 1005. The communication bus 1002 is used to enable connection and communication between these components. The user interface 1003 may include a display screen (Display) and an input unit such as a keyboard (Keyboard), and optionally may also include a standard wired interface and a wireless interface. The network interface 1004 may optionally include a standard wired interface and a wireless interface (e.g., a Wireless Fidelity (Wi-Fi) interface). The memory 1005 may be a Random Access Memory (RAM) or a Non-Volatile Memory (NVM), such as a disk memory. The memory 1005 may alternatively be a storage device separate from the processor 1001.
Those skilled in the art will appreciate that the configuration shown in fig. 1 does not constitute a limitation of an automatic driving-based remote confirmation apparatus, and may include more or fewer components than those shown, or some components in combination, or a different arrangement of components.
As shown in fig. 1, a memory 1005, which is a storage medium, may include therein an operating system, a network communication module, a user interface module, and an automatic driving-based remote confirmation program.
In the automatic driving-based remote confirmation apparatus shown in fig. 1, the network interface 1004 is mainly used for data communication with a network server; the user interface 1003 is mainly used for data interaction with a user; the processor 1001 and the memory 1005 of the remote confirmation device based on automatic driving according to the present invention may be provided in the remote confirmation device based on automatic driving, which calls the remote confirmation program based on automatic driving stored in the memory 1005 through the processor 1001 and performs the remote confirmation method based on automatic driving according to the embodiment of the present invention.
An embodiment of the present invention provides a remote confirmation method based on automatic driving, and referring to fig. 2, fig. 2 is a schematic flow diagram of a first embodiment of a remote confirmation method based on automatic driving according to the present invention.
In this embodiment, the remote confirmation method based on automatic driving includes the following steps:
step S10: the remote control platform judges whether the requirement of remote personnel on-line safety monitoring is met or not in a remote monitoring platform client mode, the remote monitoring platform client mode acquires vehicle driving monitoring data for the unmanned automobile, and driving actions to be executed at present are judged according to the vehicle driving monitoring data; if the first driving action is required to be executed at present, receiving a driving operation instruction of the unmanned automobile, and executing automatic driving operation of the unmanned automobile according to the driving operation instruction; and if the second driving action is required to be executed currently, receiving a driving operation instruction sent by the remote control platform, and executing the remote driving control operation of the unmanned automobile according to the driving operation instruction so as to enter a client mode of the remote monitoring platform through the remote driving control operation.
It should be noted that the execution subject of this embodiment is an automatic driving-based remote confirmation device. The device may be a remote control platform or another device that can implement the same or similar functions; this embodiment is not limited in this respect and is described taking a remote control platform as an example.
It is easy to understand that remote personnel can visually monitor the driving environment and state of the vehicle through a remote monitoring platform client or through a video monitoring module in a mobile phone APP. In this embodiment, a remote person visually monitors the driving environment and state of the vehicle through the remote control platform in the remote monitoring platform client mode. Based on 5G remote control driving technology, when the unmanned automobile is in remote control driving mode, control authority is fully taken over by the remote control platform. Signals and images acquired by the vehicle's sensors are transmitted to the remote platform, processed there, and displayed in real time on a large monitoring screen or in a simulated cab. A remote driver controls the unmanned automobile from the simulated cab according to these signals and images; the control signals the remote driver outputs, such as steering wheel, accelerator and pedal inputs, are transmitted back over 4G/5G or similar links to the vehicle-end controller of the unmanned automobile, which recognizes the driver's intention and distributes it to the execution systems for execution. However, this remote control system has the problem that, if the remote driver's sight leaves the large screen or the driver's fingers leave the controls, the driving state of the unmanned automobile is no longer monitored and the unmanned automobile is not controlled in a timely manner.
Specifically, in order to improve the accuracy of remote monitoring of the unmanned automobile, it needs to be detected whether the remote driver is watching the large monitoring screen. Multiple methods can be used to judge whether the requirement for online safety monitoring by remote personnel is met; this embodiment uses detection of the remote driver's attitude information as an example: the remote control platform acquires attitude information of a video monitoring object within a cycle period in the remote monitoring platform client mode; determines attitude change information of the video monitoring object from the attitude information based on a characterization model of a human body target; judges whether the video monitoring object meets a preset monitoring posture according to the attitude change information, so as to obtain a posture judgment result; and judges whether the requirement for online safety monitoring by remote personnel is met according to the posture judgment result. The video monitoring object is the remote driver; the cycle period may be set, for example, to 1 min according to the actual situation, and this embodiment does not limit the specific cycle period.
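The periodic check described above can be sketched as a short decision function: given the posture classification results collected over one monitoring cycle, decide whether the online safety-monitoring requirement is met. This is a hypothetical sketch; the function name, label values, and the 80% threshold are assumptions, not values from the patent.

```python
def monitoring_requirement_met(posture_results, min_ratio=0.8):
    """True if the preset monitoring posture was observed in at least
    min_ratio of the posture samples from one cycle period."""
    if not posture_results:
        return False  # no samples at all: treat monitoring as lost
    hits = sum(1 for r in posture_results if r == "monitoring")
    return hits / len(posture_results) >= min_ratio
```

When the function returns False, the platform would proceed to the parking-instruction branch described in the method.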
It should be understood that, before the attitude change information of the video monitoring object is determined from the attitude information, a characterization model of the human body target needs to be constructed. It may be built from a variety of human body features; this embodiment uses a model built from human head-shoulder features, specifically: constructing a human body target detection algorithm based on head-shoulder convolution features according to a preset region-based fully convolutional neural network; and determining a human head-shoulder model through this detection algorithm, taking the human head-shoulder model as the characterization model of a human body target in the video monitoring scene. Compared with a whole-body model, the head-shoulder model constructed in this embodiment is more robust to changes in the remote driver's posture, and using it for posture recognition reduces, to a certain extent, the probability that the remote driver target is occluded.
It should be noted that, in the remote monitoring platform client mode, vehicle driving monitoring data are acquired for the unmanned automobile and the driving action to be executed currently is judged from those data: if a first driving action is to be executed, a driving operation instruction of the unmanned automobile is received and the automatic driving operation is executed accordingly; if a second driving action is to be executed, a driving operation instruction sent by the remote control platform is received and the remote driving control operation is executed accordingly, so as to enter the remote monitoring platform client mode. When the automatic driving function of the unmanned automobile is started, the vehicle driving monitoring data of the unmanned automobile are obtained and the driving action to be executed currently is judged from them. The vehicle driving monitoring data may include communication conditions, road condition information, driving speed and the like. The communication conditions include 5G communication, GPS or Beidou satellite signals, etc.; the road condition information includes lane lines, traffic signs, traffic participants, obstacles and similar information; the driving speed refers to the driving speed set for the vehicle, for example no higher than 10 km/h during automatic driving and no higher than 5 km/h when turning.
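The example speed constraints quoted above (at most 10 km/h during automatic driving, at most 5 km/h when turning) can be expressed as a small configuration plus a clamping helper. The dictionary layout and function names are illustrative assumptions, not the patent's interfaces.

```python
# Assumed action names; only the two limits come from the text above.
SPEED_LIMITS_KMH = {"cruise": 10.0, "turn": 5.0}

def clamp_speed(requested_kmh, action):
    """Clamp a requested speed to the configured limit for the action;
    unknown actions fall back to the cruising limit."""
    limit = SPEED_LIMITS_KMH.get(action, SPEED_LIMITS_KMH["cruise"])
    return min(requested_kmh, limit)
```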
It should be understood that, after the unmanned driving function is started, the current driving environment state of the unmanned automobile is acquired through the vehicle-mounted sensing system, and the acquired data are sent to the vehicle-mounted positioning, planning and decision control system via vehicle-mounted Ethernet or another communication means; besides vehicle-mounted Ethernet, communication between the on-board units can use LVDS, USB, CAN bus, WiFi, 5G and the like. The decision unit in the vehicle-mounted positioning, planning and decision control system performs the decision logic judgment of automatic driving according to the received visual target signals, radar signals, positioning signals, route planning, control commands of the remote monitoring and control system, and so on, and judges the driving action the unmanned automobile should execute next, for example whether the action to be executed currently is moving forward, turning left, turning right, changing lanes or parking.
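The decision-unit dispatch described above can be sketched as a function that maps fused inputs to one of the listed driving actions. This is a deliberately simplified, hypothetical illustration: the input flags, the priority ordering, and the action names are all assumptions, not the patent's decision logic.

```python
def decide_action(obstacle_ahead, lane_change_planned, turn_direction):
    """Map fused perception/planning inputs to the next driving action
    (forward, turn left/right, change lane, or park). Priorities are
    an assumed example: safety stop first, then turns, then lane changes."""
    if obstacle_ahead:
        return "park"
    if turn_direction in ("left", "right"):
        return f"turn_{turn_direction}"
    if lane_change_planned:
        return "change_lane"
    return "forward"
```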
Specifically, the vehicle-mounted sensing system can consist of a vision perception processing system and an ultrasonic radar processing system. The vision perception processing system consists of a panoramic surround-view system made up of N high-definition fisheye wide-angle cameras, M high-definition forward-looking cameras, and a vision processing controller. The high-definition video images captured by the surround-view system and the forward-looking cameras are transmitted to the vision processing controller, where all images are processed to form a clear view of the area ahead of the moving vehicle (within a Q-degree viewing angle), the front range S, the lateral range W and the rear range L, which is transmitted to the remote background via 5G. The vision processor processes the video images and outputs target-level information to the vehicle-mounted positioning, planning and decision control system; it provides lane line recognition, traffic sign recognition, recognition of traffic participants and obstacles, and similar functions. The ultrasonic radar processing system consists of 12 ultrasonic radars and a radar controller; it acquires obstacle distance information around the moving vehicle and, after processing, outputs the distance and position information of target objects to the vehicle-mounted positioning, planning and decision control system.
If the first driving action is to be executed currently, a driving operation instruction of the unmanned vehicle is received, and the automatic driving operation of the unmanned vehicle is executed according to the driving operation instruction. In this embodiment, the first driving action refers to a precise driving action, such as operating the steering wheel, the accelerator or the brake. When the decision unit judges that a precise driving action is to be executed currently, the driving operation instruction of the unmanned vehicle is received automatically and the automatic driving operation is executed accordingly. For example, if a braking action is to be executed, a braking instruction automatically sent by the vehicle-mounted positioning, planning and decision control system is received, and the unmanned vehicle performs the braking operation according to that instruction. The vehicle-mounted positioning, planning and decision control system mainly comprises a positioning module and a planning and decision module; the positioning module receives a high-definition map positioning signal as the main positioning information, and combines the 5G base station positioning signal and the surrounding-environment signal from the vision processing system in parallel for comprehensive auxiliary positioning correction.
Further, if the first driving action is to be executed currently, receiving a driving operation instruction of the unmanned vehicle and executing the automatic driving operation according to the instruction further includes: generating a corresponding control command according to the first driving action, and executing the automatic driving operation of the unmanned vehicle in response to that control command. In this embodiment, the decision unit determines the precise driving action to be executed currently and generates a corresponding control command; the vehicle-mounted execution system responds to the command and carries out the automatic driving operation. Specifically, the execution system receives control commands such as target vehicle speed, target driving torque, target braking torque, target gear, target steering angle and steering angular speed sent by the vehicle-mounted positioning, planning and decision control system, responds to them in real time, and returns the related control results. For example, for a deceleration operation, the vehicle-mounted positioning, planning and decision control system sends a command to reduce the vehicle speed to 9 km/h, and the unmanned vehicle adjusts its current speed to 9 km/h. The execution system consists of the vehicle's power output and transmission control system, brake control system, steering control system and the like.
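The command/response loop described above can be illustrated with a minimal sketch. The `ControlCommand` fields mirror the targets listed in the text; the `execute` function, the 2 km/h step and the feedback dict are assumptions made purely for illustration:

```python
from dataclasses import dataclass

@dataclass
class ControlCommand:
    target_speed_kmh: float   # e.g. 9 km/h in the deceleration example above
    target_gear: str          # "D", "R", "N", "P"
    target_steer_deg: float   # target steering angle

def execute(cmd: ControlCommand, speed_kmh: float, step_kmh: float = 2.0) -> dict:
    """One control cycle: move the vehicle speed toward the target and
    return the related control result, as the execution system does."""
    if speed_kmh < cmd.target_speed_kmh:
        speed_kmh = min(speed_kmh + step_kmh, cmd.target_speed_kmh)
    else:
        speed_kmh = max(speed_kmh - step_kmh, cmd.target_speed_kmh)
    return {"speed_kmh": speed_kmh, "done": speed_kmh == cmd.target_speed_kmh}
```

Starting from 15 km/h with a target of 9 km/h, repeated cycles step the speed down until the target is reached and the returned result reports completion.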
If the second driving action is to be executed currently, a driving operation instruction sent by the remote control platform is received, and the remote driving control operation of the unmanned vehicle is executed according to the instruction. In this embodiment, the second driving action refers to a non-precise driving action, such as starting or stopping. Such actions can be determined by the unmanned vehicle end according to the driving monitoring data, or by the remote control platform, whose client or mobile phone APP monitors whether a dangerous situation has appeared around the current unmanned vehicle; when a dangerous situation appears, a corresponding non-precise driving action is generated. For example, when a user of the remote control platform observes through the mobile phone APP that a lane-changing vehicle suddenly appears directly in front of the unmanned vehicle, an emergency stop must be executed; likewise, when a vehicle on the left runs a red light while the unmanned vehicle is passing through an intersection, an emergency stop must be executed.
If non-precise driving actions were executed entirely by the unmanned vehicle end, there would be a certain operational difficulty and the safety would not be high; moreover, sensors of high specification and precision would have to be installed on the vehicle, increasing its cost. Therefore, when a non-precise driving action such as parking is to be executed, the driving or parking operation instruction sent by the remote control platform over 5G is received automatically; relying on the visual observation of the remote control platform's driver saves the use of high-specification, high-precision sensors such as lidar, and avoids the excessive vehicle cost that fully automatic driving would entail. The unmanned vehicle then executes the parking operation according to the parking instruction, and the remote control platform's stop command has the highest priority for the vehicle.
Further, if the second driving action is to be executed currently, receiving a driving operation instruction sent by the remote control platform and executing the remote driving control operation according to the instruction further includes: receiving over 5G the driving operation instruction for the second driving action sent by the remote control platform and executing the remote driving control operation of the unmanned vehicle accordingly; and sending execution-result feedback information for the second driving action to the remote control platform, so that the platform can determine from the feedback whether the unmanned vehicle has completed the instruction. In this embodiment, the remote control platform mainly includes a vehicle-mounted 5G communication module, a 5G base station, a 5G core network and area network, a remote monitoring and cloud computing platform, a mobile phone APP and the like. When the platform detects that the unmanned vehicle needs to execute a non-precise driving action, for example a parking operation, the platform's driver sends a parking instruction to the vehicle end over 5G; the unmanned vehicle executes the parking operation according to the instruction and returns execution-result feedback to the platform; the platform's driver then judges from the feedback whether the vehicle has completed the parking instruction and, if not, resends the parking instruction to the vehicle end.
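The send/feedback/resend cycle just described amounts to a confirm-or-retry loop. A minimal sketch follows; `send_stop` is a hypothetical callable standing in for the 5G channel to the vehicle, and the feedback dict shape is assumed:

```python
def send_until_confirmed(send_stop, max_attempts: int = 3) -> int:
    """Send the parking instruction and resend it until the vehicle's
    execution-result feedback confirms completion; returns the number
    of sends that were needed."""
    for attempt in range(1, max_attempts + 1):
        feedback = send_stop()            # transmit over 5G, receive feedback
        if feedback.get("completed"):
            return attempt
    raise RuntimeError("vehicle did not confirm the parking instruction")
```

In a real system the loop would also bound total latency, since the stop command is safety-critical, but the retry-on-unconfirmed-feedback structure is the behaviour the text describes.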
Step S20: if the requirement is not satisfied, the remote control platform sends a parking instruction to the unmanned vehicle over 5G, so that the unmanned vehicle executes its parking operation according to the parking instruction.
It should be noted that the parking instruction sent by the remote control platform over 5G is issued when the platform determines, in either the first mode or the second mode, that the unmanned vehicle does not satisfy the safe driving environment or safe driving state, or when the platform itself does not satisfy the requirement of safety monitoring. The first mode may refer to the remote monitoring platform client mode; the second mode may refer to the mobile phone APP mode. Specifically, this embodiment is explained with the remote control platform judging, in the remote monitoring platform client mode, whether the requirement of on-line safety monitoring by remote personnel is met; if not, the platform sends a parking instruction to the unmanned vehicle over 5G so that the vehicle executes its parking operation according to the instruction.
It is easy to understand that remote personnel can visually monitor the driving environment and driving state of the vehicle through the remote monitoring platform client or through the video monitoring module in the mobile phone APP. Remote control driving can be performed on the remote monitoring client, while release and parking instructions can be sent remotely through the mobile phone screen on the APP. When the mobile phone APP is used for remote monitoring, an eyeball tracking system must be developed in the APP: while remotely monitoring the running state of the vehicle, the operator's eyes must watch the driving state of the unmanned vehicle on the phone screen the whole time and a finger must remain in continuous contact with the screen; if either condition is broken, i.e. the line of sight leaves the screen or the finger leaves the screen, the APP sends a remote parking command to the unmanned vehicle. When remote control driving is performed through the remote monitoring client, a sight-tracking system and a remote parking/driving button are likewise developed on the remote monitoring platform: the vehicle is allowed to drive automatically while the button is pressed, and stops immediately when the button is released. In the remote monitoring and control system, the mobile phone APP mode is preferred, and when the APP mode does not work, the system switches to the remote monitoring client mode; one of the two modes must be in operation, otherwise the autonomous vehicle remains at a standstill. In addition, when the remote control platform does not satisfy the requirement of safety monitoring, a parking instruction is automatically sent to the unmanned vehicle end so that the vehicle executes the parking operation according to it.
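The dead-man-switch conditions above can be condensed into a single decision function. This is a simplified sketch of the logic as stated in the text (APP mode preferred, client mode as fallback, stop otherwise); the function and parameter names are illustrative, and the client-side sight-tracking condition is folded into `button_held` for brevity:

```python
def monitoring_decision(app_mode_ok: bool, gaze_on_screen: bool,
                        finger_on_screen: bool,
                        client_mode_ok: bool, button_held: bool) -> str:
    """Return "RUN" when the on-line safety-monitoring requirement is met,
    otherwise "STOP" (a remote parking command must be sent)."""
    if app_mode_ok:
        # Phone APP mode is preferred: eyes on screen AND finger on screen.
        allowed = gaze_on_screen and finger_on_screen
    elif client_mode_ok:
        # Fall back to the monitoring client: hold-to-drive button.
        allowed = button_held
    else:
        # Neither mode is operational: the vehicle stays at a standstill.
        allowed = False
    return "RUN" if allowed else "STOP"
```

Note that while the APP mode is operational it governs the decision, so a lost gaze triggers a stop even if the client mode would otherwise permit driving.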
In this embodiment, in the remote monitoring platform client mode, the remote control platform judges whether the requirement of on-line safety monitoring by remote personnel is met. In this mode, vehicle driving monitoring data are acquired for the unmanned vehicle and the driving action to be executed currently is judged from those data; if the first driving action is to be executed, a driving operation instruction of the unmanned vehicle is received and the automatic driving operation is executed accordingly; if the second driving action is to be executed, a driving operation instruction sent by the remote control platform is received and the remote driving control operation is executed accordingly, entering the remote monitoring platform client mode through the remote driving control operation. If the monitoring requirement is not satisfied, the remote control platform sends a parking instruction to the unmanned vehicle over 5G so that the vehicle executes its parking operation according to the instruction. In this way the remote control platform ensures that the remote personnel are in the on-line safety monitoring state, in which they can visually monitor the driving environment and driving state of the vehicle through the remote monitoring platform client. This improves the control accuracy of the vehicle and solves the technical problem that an existing remote monitoring system, once safety monitoring is lost, affects vehicle driving safety and may even endanger public safety.
Referring to fig. 3, fig. 3 is a flowchart illustrating a remote confirmation method based on automatic driving according to a second embodiment of the present invention. Based on the first embodiment, the remote confirmation method based on automatic driving of the present embodiment includes, at step S10:
step S101: in the remote monitoring platform client mode, the remote control platform acquires the attitude information of the video monitored object within a cycle time; vehicle driving monitoring data are acquired for the unmanned vehicle and the driving action to be executed currently is judged from those data; if the first driving action is to be executed, a driving operation instruction of the unmanned vehicle is received and the automatic driving operation is executed accordingly; and if the second driving action is to be executed, a driving operation instruction sent by the remote control platform is received and the remote driving control operation of the unmanned vehicle is executed accordingly, so as to enter the remote monitoring platform client mode through the remote driving control operation.
It is easy to understand that remote personnel, who may be remote drivers, can visually monitor the driving environment and state of the vehicle through the remote monitoring platform client or the video monitoring module in the mobile phone APP. In this embodiment, a remote person visually monitors the driving environment and state of the vehicle in the remote monitoring platform client mode through the remote control platform. Based on 5G remote control driving technology, when the unmanned vehicle is in remote control driving mode, control authority is fully taken over by the remote control platform: signals and images acquired by the vehicle sensors are transmitted to the remote platform and, after processing, displayed in real time on a large monitoring screen or in a simulated cab; the remote driver controls the unmanned vehicle from the simulated cab according to these signals and images, and the control signals output by the remote driver, such as steering wheel, accelerator and pedal inputs, are transmitted back over 4G/5G to the vehicle-end controller, which recognizes the driver's intention and distributes it to the execution systems for execution. However, such a remote control system has the problem that if the remote driver's line of sight leaves the large screen, or a finger leaves the controls, the driving state of the unmanned vehicle is no longer monitored and the vehicle cannot be controlled in time.
Specifically, in order to improve the accuracy of remote monitoring of the unmanned vehicle, it must be detected whether the remote driver is watching the large monitoring screen. Multiple methods may be used to judge whether the requirement of on-line safety monitoring by remote personnel is met; this embodiment is explained with a judgment method that detects the remote driver's attitude information: in the remote monitoring platform client mode, the remote control platform acquires the attitude information of the video monitored object within a cycle time for judgment, and the corresponding attitude information can be extracted from video images of the monitored object captured by a camera within that cycle. The video monitored object is the remote driver; the cycle time may be set to 1 min according to the actual situation, and the specific cycle time is not limited in this embodiment.
Step S102: and determining the posture change information of the video monitoring object through the posture information based on the characterization model of the human body target.
It should be understood that before determining the posture change information of the video monitored object, a human body target characterization model needs to be constructed. Such a model may be built from various human body features; this embodiment is described with a model built from human head-shoulder features, specifically: constructing a human body target detection algorithm based on head-shoulder convolution features from a preset region-based fully convolutional neural network; determining a human head-shoulder model through this detection algorithm; and using the head-shoulder model as the characterization model of the human target in the video monitoring scene. Compared with a whole-body model, the head-shoulder model constructed in this embodiment is more robust to posture changes of the remote driver, and adopting it for posture recognition reduces to a certain extent the probability that the remote driver target is occluded.
Specifically, because target human behavior detection is complex, a single feature can hardly achieve good discrimination. To improve the accuracy with which the human-target characterization model detects posture information, fused features are adopted to increase the discriminability of the features, ensuring the comprehensiveness, distinguishability and reliability of posture detection. This embodiment uses a fused feature combining the gradient histogram feature and the color feature, specifically: determining a gradient histogram feature vector set and a color feature vector set from the posture information based on the human-target characterization model; generating target fusion features from the two vector sets through a preset principal component analysis algorithm; classifying the target fusion features with a support vector machine classifier to obtain a classification result; and generating the posture change information of the video monitored object from the classification result.
It should be noted that, to obtain a fused feature combining the gradient histogram feature and the color feature, a gradient histogram feature vector set and a color feature vector set need to be determined from the posture information based on the human-target characterization model. The feature model for obtaining the gradient histogram feature vector set may be a histogram of oriented gradients model, which uses the directed gradient information of image pixels and therefore strongly characterizes the edge and contour information of the target remote personnel. The feature model for obtaining the color feature vector set may be the hexcone color model HSV. The HSV feature has strong perceptibility, allowing color to be understood more intuitively, so this embodiment adopts it as the color feature of the remote personnel. The hexcone color model HSV includes three color attributes: hue, saturation and value (brightness). Specifically, the process of obtaining the two vector sets may include: determining a histogram of oriented gradients model and a hexcone color model based on the human-target characterization model; determining the gradient histogram feature vector set from the posture information according to the oriented gradients model; determining a posture image from the posture information according to the hexcone color model and dividing the posture image into several image blocks; and determining the color mean of the pixels in each image block and generating the color feature vector set from these means.
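The block-mean color feature described at the end of the paragraph can be sketched in a few lines of pure Python. This is an illustrative stand-in, not the patent's implementation: the image is represented as a list of rows of (h, s, v) tuples, and one mean per channel is taken over each 8 × 8 block:

```python
def color_features(img, block: int = 8) -> list:
    """img: H x W grid of (h, s, v) tuples. Returns the per-block mean
    colors concatenated into one feature vector, i.e. the color feature
    vector described above."""
    height, width = len(img), len(img[0])
    feats = []
    for by in range(0, height, block):
        for bx in range(0, width, block):
            # collect the pixels of this block and average each channel
            px = [img[y][x]
                  for y in range(by, by + block)
                  for x in range(bx, bx + block)]
            for c in range(3):
                feats.append(sum(p[c] for p in px) / len(px))
    return feats
```

For a 32 × 32 image this yields 4 × 4 blocks × 3 channels = 48 values, matching the 48-dimensional color feature vector in the worked example later in the text.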
Specifically, determining a gradient histogram feature vector set according to the direction gradient histogram model through the attitude information may include: determining an attitude image according to the direction gradient histogram model through the attitude information, and converting the attitude image into a gray image; determining the gray value of a pixel point in the gray image, and determining the gradient amplitude and the gradient direction corresponding to the pixel point according to the gray value; dividing the gray level image into a plurality of cells, and determining a histogram of gradient directions of the cells according to the gradient amplitude and the gradient directions; selecting a preset number of histograms of the gradient directions, and generating a target image block according to the preset number of histograms of the gradient directions; and normalizing the histogram of the gradient direction in the target image block to obtain a plurality of gradient histogram feature vectors, and generating a gradient histogram feature vector set according to the gradient histogram feature vectors.
It is easy to understand that the idea of the directional gradient histogram model is to calculate the local gradient of the image, and then combine the feature vectors and perform normalization. By calculating the gradient direction histogram of the local area, the directional gradient histogram model can effectively extract the edge information of the remote personnel and the contour information describing the remote personnel, thereby realizing the classification of the target. And the directional gradient histogram model has better robustness to brightness change and small amount of offset. Therefore, the gradient histogram feature vector obtained by the direction gradient histogram model can better describe the features of the remote personnel.
For example, when calculating the gradient histogram feature, the size of the posture image may be unified to 64 × 128 pixels and divided evenly into 8 × 16 cells of 8 × 8 pixels. Four adjacent cells form one target image block, so each target image block contains 4 histograms of 9 bins, i.e. 4 × 9 = 36 dimensions; the local information of the image is thus described by a 36-dimensional vector within one target image block. A sliding window with a step of 8 pixels (one cell side) is used to scan the image, so 7 × 15 = 105 overlapping target image blocks can be acquired in total, and the dimension of the final gradient histogram feature vector is 7 × 15 × 36 = 3780. When calculating the color feature vector according to the hexcone color model, the human head image can be divided into 4 × 4 image blocks of 8 × 8 pixels, each block having 3 dimensions; the mean HSV color of all pixels in each block is calculated, and the means of the 4 × 4 blocks together form the color feature vector, so the total color feature dimension is 16 × 3 = 48.
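The dimension arithmetic of this worked example can be checked directly. The following sketch just encodes the counts stated above (2 × 2-cell blocks sliding with a one-cell stride); the function names are illustrative:

```python
def hog_feature_dim(img_w: int = 64, img_h: int = 128, cell: int = 8,
                    cells_per_block: int = 2, bins: int = 9,
                    stride: int = 8) -> int:
    """Dimension of the gradient histogram feature: blocks of 2x2 cells
    slide over the image with a one-cell (8-pixel) stride."""
    blocks_x = (img_w - cells_per_block * cell) // stride + 1   # 7 positions
    blocks_y = (img_h - cells_per_block * cell) // stride + 1   # 15 positions
    dims_per_block = cells_per_block ** 2 * bins                # 4 * 9 = 36
    return blocks_x * blocks_y * dims_per_block

def color_feature_dim(blocks: int = 4 * 4, channels: int = 3) -> int:
    """Dimension of the HSV color feature: one mean per channel per block."""
    return blocks * channels
```

With the default parameters these reproduce the 105 blocks × 36 dims = 3780-dimensional gradient histogram vector and the 48-dimensional color vector of the example.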
It should be understood that fused features are used to improve the accuracy with which the human-target characterization model detects posture information, since feature fusion combines multiple features. However, the extraction process then involves computing several features whose combined dimension is inevitably higher than that of any single feature, and features of too high a dimension seriously affect the real-time performance of detection by the human-target characterization model. Therefore, in the feature fusion process, besides selecting the combined features, the feature dimension must be reduced to avoid this impact on real-time performance. In this embodiment, a principal component analysis algorithm is adopted to determine the fused feature combining the gradient histogram feature and the color feature.
Specifically, generating the target fusion feature according to the gradient histogram feature vector set and the color feature vector set by using a preset principal component analysis algorithm may include: acquiring a gradient histogram feature vector according to the gradient histogram feature vector set, and acquiring a color feature vector according to the color feature vector set; determining an original feature matrix according to the gradient histogram feature vector and the color feature vector; determining a covariance matrix of the original feature matrix, and determining an eigenvalue and an eigenvector matrix of the covariance matrix; determining a transformation feature matrix of the principal component according to the feature value and the feature vector matrix through a preset principal component analysis algorithm; and fusing according to the original feature matrix and the transformed feature matrix to obtain a fused feature matrix, and determining target fusion features according to the fused feature matrix. The target fusion feature is obtained by respectively calculating a gradient histogram feature vector set and a color feature vector set of a partitioning structure, combining the gradient histogram feature vector set and the color feature vector set, and reducing the dimension by adopting a principal component analysis algorithm.
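As a minimal sketch of the dimension-reduction step just described, and under the assumption that standard PCA over the concatenated feature vectors is intended, the covariance/eigendecomposition/projection sequence can be written as follows (the `pca_fuse` name and toy dimensions are illustrative; a real system would learn the transformation matrix from training data and reuse it at detection time):

```python
import numpy as np

def pca_fuse(X: np.ndarray, k: int) -> np.ndarray:
    """Project row-vector features X (n_samples x d) onto the top-k
    principal components of their covariance matrix."""
    Xc = X - X.mean(axis=0)                   # center the original feature matrix
    cov = np.cov(Xc, rowvar=False)            # covariance matrix of the features
    vals, vecs = np.linalg.eigh(cov)          # eigh: covariance is symmetric
    order = np.argsort(vals)[::-1][:k]        # indices of the k largest eigenvalues
    W = vecs[:, order]                        # transformation feature matrix
    return Xc @ W                             # fused, reduced-dimension features

# Fuse a toy gradient-histogram set (dim 6) with a toy color set (dim 2).
rng = np.random.default_rng(0)
hog = rng.normal(size=(20, 6))
col = rng.normal(size=(20, 2))
fused = pca_fuse(np.hstack([hog, col]), k=3)  # 20 samples, reduced to 3 dims
```

Concatenating the two vector sets before projection is what combines them into a single fused feature, while selecting only the top-k components performs the dimension reduction the previous paragraph calls for.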
It should be understood that the preset principal component analysis algorithm may be standard principal component analysis. The target fusion features it generates retain the information of the original feature vectors, can synthesize multiple image features such as color, shape and texture, eliminate redundant feature information and reduce the dimension of the feature space. This greatly reduces the computation required for subsequent posture detection by the human-target characterization model and thus improves its real-time detection performance.
Step S103: and judging whether the video monitoring object meets a preset monitoring gesture according to the gesture change information so as to obtain a gesture judgment result.
It should be noted that, in the remote monitoring platform client mode, the remote control platform acquires the attitude information of the video monitored object within the cycle time, extracts the corresponding attitude information from the video images captured by the camera within that cycle, and judges from the attitude change information whether the monitored object satisfies a preset monitoring posture, obtaining a posture judgment result. The preset monitoring posture may be the state of a remote person watching the large monitoring screen; it may be set according to the actual situation and is not limited in this embodiment.
Step S104: and judging whether the requirements of remote personnel on-line safety monitoring are met or not according to the posture judgment result.
It should be understood that, to improve the accuracy of remote monitoring of the unmanned vehicle, whether the remote driver is watching the large monitoring screen must be detected. In the remote monitoring platform client mode, the remote control platform acquires the attitude information of the video monitored object within the cycle time, extracts the corresponding attitude information from the video images captured within that cycle, and judges from the attitude change information whether the monitored object satisfies the preset monitoring posture, obtaining the posture judgment result. The preset monitoring posture may be the state of a remote person watching the large monitoring screen.
Specifically, when the attitude change information shows that the video monitored object is in the state of watching the large monitoring screen, the remote person is determined to be in the on-line safety monitoring state, and the remote person can visually monitor the driving environment and driving state of the vehicle through the remote monitoring platform client. When the attitude change information shows that the monitored object is not in that state, the remote person is determined not to be in the on-line safety monitoring state; to prevent the remote monitoring system from affecting driving safety or even endangering public safety once safety monitoring is lost, the remote control platform sends a parking instruction to the unmanned vehicle over 5G so that the vehicle executes its parking operation according to the instruction.
In this embodiment, in the remote monitoring platform client mode, the remote control platform acquires the attitude information of the video monitored object within the cycle time; determines the attitude change information of the monitored object from the attitude information based on the human-target characterization model; judges from the attitude change information whether the monitored object satisfies the preset monitoring posture, obtaining a posture judgment result; and judges from the posture judgment result whether the requirement of on-line safety monitoring by remote personnel is met. In this way the remote control platform ensures that the remote personnel are in the on-line safety monitoring state, in which they can visually monitor the driving environment and driving state of the vehicle through the remote monitoring platform client. This improves the control accuracy of the vehicle and solves the technical problem that an existing remote monitoring system, once safety monitoring is lost, affects vehicle driving safety and may even endanger public safety.
Furthermore, an embodiment of the present invention further provides a storage medium having an automatic driving-based remote confirmation program stored thereon, where the automatic driving-based remote confirmation program is executed by a processor to perform the steps of the automatic driving-based remote confirmation method as described above.
Since the storage medium adopts all technical solutions of all the embodiments, at least all the beneficial effects brought by the technical solutions of the embodiments are achieved, and no further description is given here.
Referring to fig. 4, fig. 4 is a block diagram illustrating a first embodiment of an automatic driving-based remote confirmation apparatus according to the present invention.
As shown in fig. 4, the remote confirmation apparatus based on automatic driving according to the embodiment of the present invention includes:
a judging module 10, configured to judge, in a remote monitoring platform client mode, whether the requirement of online safety monitoring by remote personnel is met, wherein in the remote monitoring platform client mode vehicle driving monitoring data are acquired for the unmanned vehicle and the driving action currently to be executed is judged from the data; if a first driving action is to be executed, a driving operation instruction of the unmanned vehicle is received and an automatic driving operation is executed according to the instruction; and if a second driving action is to be executed, a driving operation instruction sent by the remote control platform is received and a remote driving control operation of the unmanned vehicle is executed according to the instruction, so that the remote monitoring platform client mode is entered through the remote driving control operation.
It should be noted that remote personnel can visually monitor the driving environment and state of the vehicle through the remote monitoring platform client or through a video monitoring module in a mobile phone APP. In this embodiment, the remote person performs this monitoring in the remote monitoring platform client mode through the remote control platform. With 5G remote-control driving, when the unmanned vehicle is in the remote-control driving mode, control authority is fully taken over by the remote control platform: signals and images acquired by the vehicle sensors are transmitted to the remote platform, processed, and displayed in real time on a large monitoring screen or in a simulated cab; a remote driver controls the unmanned vehicle from the simulated cab according to these signals and images; the steering-wheel, accelerator, and pedal control signals output by the remote driver are transmitted back over 4G/5G or the like to the vehicle-end controller, which recognizes the driver's intention and dispatches it to the execution systems for execution. However, such a remote-control system cannot guarantee timely control of the unmanned vehicle when the remote driver looks away from the large screen or removes a finger from the controls, leaving the vehicle's driving state unmonitored.
Specifically, to improve the accuracy of remote monitoring of the unmanned vehicle, it is necessary to detect whether the remote driver is watching the large monitoring screen. Several methods can be used to judge whether the requirement of online safety monitoring is met; this embodiment is described using a method that detects the remote driver's posture information: the remote control platform acquires posture information of the video monitoring object over a cycle period in the remote monitoring platform client mode; determines posture change information of the video monitoring object from the posture information based on a characterization model of a human target; judges, according to the posture change information, whether the video monitoring object satisfies a preset monitoring posture to obtain a posture judgment result; and judges, according to the result, whether the requirement of online safety monitoring by remote personnel is met. The video monitoring object is the remote driver; the cycle period may be set, for example, to 1 min according to the actual situation, and is not limited in this embodiment.
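One plausible way to accumulate per-frame posture judgments over a cycle period and reduce them to a single pass/fail result is a sliding window, sketched below. The window size and compliance threshold are illustrative assumptions; the patent only specifies that a cycle period (e.g. 1 min) is used.

```python
from collections import deque

class PostureMonitor:
    """Sliding-window posture check over one monitoring cycle.

    Each frame contributes a boolean judgment (True = operator in the
    preset monitoring posture). The cycle passes only if the fraction of
    compliant frames meets a threshold. The window length and the 0.8
    threshold are assumed values for illustration.
    """
    def __init__(self, window: int = 60, threshold: float = 0.8):
        self.frames = deque(maxlen=window)
        self.threshold = threshold

    def add_frame(self, in_posture: bool) -> None:
        self.frames.append(in_posture)

    def monitoring_ok(self) -> bool:
        # No frames yet means monitoring cannot be confirmed.
        if not self.frames:
            return False
        return sum(self.frames) / len(self.frames) >= self.threshold
```

An empty window is treated conservatively as "not monitored", matching the fail-safe bias of the described system.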
It should be understood that, before the posture change information of the video monitoring object is determined from the posture information, a characterization model of the human target must be constructed. It may be built from various human features; this embodiment uses a model built from human head-shoulder features, specifically: constructing a human-target detection algorithm based on head-shoulder convolution features from a preset region-based fully convolutional neural network; determining a human head-shoulder model through this detection algorithm; and taking the head-shoulder model as the characterization model of the human target in the video monitoring scene. Compared with a whole-body model, the head-shoulder model constructed in this embodiment is more robust to posture changes of the remote driver, and using it for posture recognition reduces, to a certain extent, the probability that the remote driver target is occluded.
It should be noted that in the remote monitoring platform client mode, vehicle driving monitoring data are acquired for the unmanned vehicle and the driving action currently to be executed is judged from these data; if a first driving action is to be executed, a driving operation instruction of the unmanned vehicle is received and an automatic driving operation is executed according to the instruction; if a second driving action is to be executed, a driving operation instruction sent by the remote control platform is received and a remote driving control operation is executed according to the instruction, so that the remote monitoring platform client mode is entered through this operation. When the automatic driving function of the unmanned vehicle is started, its vehicle driving monitoring data are obtained and the action currently to be executed is judged from them. The vehicle driving monitoring data may include communication conditions, road condition information, driving speed, and the like. The communication conditions include 5G communication and GPS or Beidou satellite signals; the road condition information includes lane lines, traffic signs, traffic participants, and obstacles; the driving speed is the speed set for the vehicle, for example not higher than 10 km/h during automatic driving and not higher than 5 km/h when turning.
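The routing of actions between on-board autonomy and the remote platform, together with the speed caps named in the text, can be sketched as below. The action names and the split into two sets are illustrative readings of "precise" versus "non-precise" actions; the 10 km/h and 5 km/h limits come from the text.

```python
PRECISE = {"steer", "throttle", "brake"}   # first driving action: executed on board
NON_PRECISE = {"start", "stop"}            # second driving action: issued remotely

def action_executor(action: str) -> str:
    """Route a pending driving action to its executor.

    Precise actions are handled by the on-board system; non-precise
    actions are issued by the remote control platform.
    """
    if action in PRECISE:
        return "onboard"
    if action in NON_PRECISE:
        return "remote"
    raise ValueError(f"unknown driving action: {action}")

def max_speed_kmh(turning: bool) -> float:
    """Speed caps from the vehicle driving monitoring data:
    10 km/h in automatic driving, 5 km/h while turning."""
    return 5.0 if turning else 10.0
```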
It should be understood that after the unmanned driving function is started, the current driving environment state of the unmanned vehicle is acquired by the on-board perception system, and the acquired data are sent to the on-board positioning, planning, and decision control system over vehicle Ethernet or another communication link; besides vehicle Ethernet, communication between on-board units may use LVDS, USB, CAN bus, WIFI, 5G, and the like. A decision unit in that system performs the decision-logic judgment for automatic driving according to the received visual target signals, radar signals, positioning signals, route planning, control commands from the remote monitoring and control system, and so on, and judges the driving action the unmanned vehicle should currently execute — for example, whether the action is moving forward, turning left, turning right, changing lanes, or parking.
Specifically, the on-board perception system may consist of a visual perception processing system and an ultrasonic radar processing system. The visual perception processing system comprises a panoramic surround-view system of N high-definition fisheye wide-angle cameras, M high-definition front-view cameras, and a vision processing controller. The high-definition video captured by the surround-view system and the front-view cameras is transmitted to the vision processing controller, which processes all images into clear views of the area ahead of the moving vehicle (within a Q-degree viewing angle), the front S range, the lateral W range, and the rear L range, and transmits them to the remote background over 5G. The vision processor also processes the video into target-level information output to the on-board positioning, planning, and decision control system, providing lane-line recognition, traffic-sign recognition, and recognition of traffic participants and obstacles. The ultrasonic radar processing system consists of 12 ultrasonic radars and a radar controller; it acquires obstacle distance information for the moving vehicle and, after processing, outputs the distance and position of target objects to the on-board positioning, planning, and decision control system.
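A minimal sketch of the radar controller's output stage — fusing the 12 ultrasonic range readings into the closest obstacle distance reported to the decision system — might look like this. The unit (metres) and the treatment of invalid readings as `None` are assumptions.

```python
from typing import Optional, Sequence

def nearest_obstacle(readings: Sequence[Optional[float]]) -> Optional[float]:
    """Reduce the ultrasonic radar range readings to the distance of the
    closest obstacle. Invalid readings are None and are skipped; returns
    None when no radar reports a valid echo."""
    valid = [r for r in readings if r is not None and r >= 0.0]
    return min(valid) if valid else None
```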
If the first driving action is to be executed, a driving operation instruction of the unmanned vehicle is received and the automatic driving operation is executed according to the instruction. In this embodiment, the first driving action is a precise driving action such as steering, accelerating, or braking. When the decision unit judges that a precise driving action is to be executed, the vehicle automatically receives the corresponding driving operation instruction and executes the automatic driving operation accordingly. For example, when a braking action is to be executed, a braking instruction automatically sent by the on-board positioning, planning, and decision control system is received, and the unmanned vehicle brakes according to it. That system mainly comprises a positioning module and a planning-decision module; the positioning module receives a high-definition map positioning signal as the main positioning information and combines it with the 5G base-station positioning signal and the surrounding-environment signal from the vision processing system for comprehensive auxiliary positioning correction.
Further, if the first driving action is to be executed, receiving the driving operation instruction and executing the automatic driving operation further includes: generating a corresponding control command according to the first driving action, and executing the automatic driving operation of the unmanned vehicle in response to the control command. In this embodiment, the decision unit judges the precise driving action to be executed and generates the corresponding control command; the on-board execution system responds to the command and executes the automatic driving operation. Specifically, the execution system receives control commands such as target vehicle speed, target driving torque, target braking torque, target gear, target steering angle, and steering angular speed sent by the on-board positioning, planning, and decision control system, responds to them in real time, and returns the related control results. For example, for a deceleration operation, the control system issues a command to reduce the vehicle speed to 9 km/h, and the unmanned vehicle adjusts its current speed to 9 km/h. The execution system consists of the vehicle's power output and transmission control system, brake control system, steering control system, and the like.
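The speed-tracking part of the execution loop — moving the current speed toward a commanded target each control cycle — can be sketched as a simple rate-limited step. The per-cycle limit `max_delta` is an assumed parameter; the 9 km/h target echoes the deceleration example above.

```python
def step_speed(current: float, target: float, max_delta: float = 2.0) -> float:
    """Move the vehicle speed (km/h) toward the commanded target by at
    most max_delta per control cycle, so the execution system approaches
    the target smoothly rather than jumping to it."""
    if abs(target - current) <= max_delta:
        return target
    return current + max_delta if target > current else current - max_delta
```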
If the second driving action is to be executed, a driving operation instruction sent by the remote control platform is received, and the remote driving control operation of the unmanned vehicle is executed according to the instruction. In this embodiment, the second driving action is a non-precise driving action such as starting or stopping, where such actions can be determined by the unmanned-vehicle end according to the driving monitoring data. A user of the remote control platform monitors, through the platform client, the mobile phone APP, or the like, whether the unmanned vehicle is currently in a dangerous situation; when one arises, a corresponding non-precise driving action is generated. For example, when the user observes through the mobile phone APP that a lane-changing vehicle suddenly appears directly in front of the unmanned vehicle, or that a vehicle on the left runs a red light while the unmanned vehicle passes through an intersection, an emergency stop must be executed.
If non-precise driving actions were executed entirely by the unmanned-vehicle end, operation would be difficult and safety would be low; moreover, high-specification, high-precision sensors would have to be installed on the vehicle, increasing its cost. Therefore, when a non-precise driving action such as parking is to be executed, the vehicle automatically receives the parking operation instruction sent by the remote control platform over 5G. Relying on visual observation by the remote-platform driver avoids the use of high-specification, high-precision sensors such as lidar, and thus the excessive cost of fully automatic driving; the unmanned vehicle then executes the parking operation according to the instruction. The remote control platform's stop command has the highest priority for the vehicle.
Further, if the second driving action is to be executed, receiving the driving operation instruction sent by the remote control platform and executing the remote driving control operation further includes: receiving over 5G the driving operation instruction for the second driving action, and executing the remote driving control operation according to it; and sending execution-result feedback for the second driving action to the remote control platform, so that the platform can determine from the feedback whether the unmanned vehicle has completed the instruction. In this embodiment, the remote control platform mainly includes an on-board 5G communication module, a 5G base station, a 5G core and area network, a remote monitoring and cloud-computing platform, a mobile phone APP, and the like. When the platform observes that the unmanned vehicle needs to execute a non-precise driving action such as parking, the platform driver sends a parking instruction to the vehicle end over 5G; the unmanned vehicle executes the parking operation according to the instruction and returns execution-result feedback to the platform; the platform driver judges from the feedback whether the vehicle has completed the parking instruction and, if not, resends it to the vehicle end.
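The send-and-resend behaviour described above — reissue the parking instruction until the execution-result feedback confirms completion — can be sketched as a retry loop. The retry count is an assumption; `send` stands in for the 5G link and returns the vehicle's feedback flag.

```python
from typing import Callable

def send_until_confirmed(send: Callable[[str], bool],
                         command: str = "STOP",
                         max_retries: int = 3) -> bool:
    """Send a command to the vehicle end and resend on negative
    execution feedback, up to max_retries attempts. Returns True once
    the vehicle reports the command as completed."""
    for _ in range(max_retries):
        if send(command):
            return True
    return False
```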
And the sending module 20, configured to send, if the requirement is not met, a parking instruction to the unmanned vehicle over 5G, so that the unmanned vehicle executes a parking operation according to the instruction.
It should be noted that the remote control platform sends the parking instruction to the unmanned vehicle over 5G. The instruction is sent when the platform determines, in either the first mode or the second mode, that the unmanned vehicle does not satisfy a safe driving environment or safe driving state, or when the platform itself does not meet the safety-monitoring requirement. The first mode may be the remote monitoring platform client mode; the second mode may be the mobile phone APP mode. This embodiment is described with the case in which the remote control platform judges, in the remote monitoring platform client mode, whether the requirement of online safety monitoring by remote personnel is met and, if not, sends a parking instruction to the unmanned vehicle over 5G so that the vehicle executes a parking operation.
It is easy to understand that remote personnel can visually monitor the driving environment and driving state of the vehicle through the remote monitoring platform client or through the video monitoring module in the mobile phone APP. Remote-control driving can be performed from the remote monitoring client, while release and parking instructions can also be sent remotely from the mobile phone APP via the phone screen. For remote monitoring through the APP, an eye-tracking system is developed in the APP: while remotely monitoring the running vehicle, the operator must watch the vehicle's driving state on the phone screen throughout and keep a finger continuously in contact with the screen; if either the gaze or the finger leaves the screen, the APP sends a remote parking command to the unmanned vehicle. For remote-control driving through the monitoring client, a gaze-tracking system and a remote parking/driving button are likewise developed on the platform: the vehicle is allowed to drive automatically while the button is pressed and stops immediately when it is released. In the remote monitoring and control system, the mobile phone APP mode is preferred; when it does not work, the system switches to the remote monitoring client mode. One of the two modes must be operating, otherwise the autonomous vehicle remains at a standstill. In addition, when the remote control platform does not meet the safety-monitoring requirement, a parking instruction is automatically sent to the unmanned-vehicle end, so that the vehicle executes a parking operation according to the instruction.
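The mobile-APP dead-man logic described above — gaze on screen and finger on screen, or else stop — reduces to a small predicate. The function name and boolean inputs are illustrative; the real APP would feed it from the eye-tracking system and the touch sensor.

```python
def app_should_send_stop(eyes_on_screen: bool, finger_on_screen: bool) -> bool:
    """Dead-man switch for the mobile phone APP mode: the operator must
    both watch the vehicle feed and keep a finger on the screen. If
    either condition fails, the APP sends a remote stop command."""
    return not (eyes_on_screen and finger_on_screen)
```

The press-to-drive button on the monitoring client follows the same pattern with a single input: driving is permitted only while the button is held.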
The remote confirmation apparatus based on automatic driving in this embodiment includes: a judging module 10, configured to judge, in a remote monitoring platform client mode, whether the requirement of online safety monitoring by remote personnel is met, wherein in that mode vehicle driving monitoring data are acquired for the unmanned vehicle and the driving action currently to be executed is judged from the data; if a first driving action is to be executed, a driving operation instruction of the unmanned vehicle is received and an automatic driving operation is executed according to the instruction; if a second driving action is to be executed, a driving operation instruction sent by the remote control platform is received and a remote driving control operation is executed according to the instruction, so that the remote monitoring platform client mode is entered through this operation; and a sending module 20, configured to send, if the requirement is not met, a parking instruction to the unmanned vehicle over 5G, so that the unmanned vehicle executes a parking operation according to the instruction.
In this embodiment, the remote control platform ensures in the above manner that the remote person is in the online safety monitoring state, in which the person can visually monitor the driving environment and driving state of the vehicle through the remote monitoring platform client. This improves the monitoring accuracy of the vehicle and solves the technical problem that an existing remote monitoring system, when safety monitoring is lost, can affect driving safety and even endanger public safety.
Other embodiments or specific implementation manners of the remote confirmation device based on automatic driving according to the present invention may refer to the above-mentioned embodiments of the remote confirmation method based on automatic driving, and are not described herein again.
It should be understood that the above is only an example, and the technical solution of the present invention is not limited in any way, and in a specific application, a person skilled in the art may set the technical solution as needed, and the present invention is not limited thereto.
It should be noted that the above-described work flows are only exemplary, and do not limit the scope of the present invention, and in practical applications, a person skilled in the art may select some or all of them to achieve the purpose of the solution of the embodiment according to actual needs, and the present invention is not limited herein.
In addition, the technical details that are not elaborated in this embodiment may be referred to the automatic driving-based remote confirmation method provided in any embodiment of the present invention, and are not described herein again.
Further, it is to be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or system that comprises a list of elements includes not only those elements but may include other elements not expressly listed or inherent to such process, method, article, or system. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in the process, method, article, or system that comprises the element.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solution of the present invention or portions thereof that contribute to the prior art may be embodied in the form of a software product, where the computer software product is stored in a storage medium (e.g. Read Only Memory (ROM)/RAM, magnetic disk, optical disk), and includes several instructions for enabling a terminal device (e.g. a mobile phone, a computer, a server, or a network device) to execute the method according to the embodiments of the present invention.
The above description is only a preferred embodiment of the present invention, and not intended to limit the scope of the present invention, and all modifications of equivalent structures and equivalent processes, which are made by using the contents of the present specification and the accompanying drawings, or directly or indirectly applied to other related technical fields, are included in the scope of the present invention.

Claims (10)

1. An automatic driving-based remote confirmation method, characterized by comprising the steps of:
the remote control platform judges whether the requirement of remote personnel on-line safety monitoring is met or not in a remote monitoring platform client mode, the remote monitoring platform client mode acquires vehicle driving monitoring data for the unmanned automobile, and driving actions to be executed at present are judged according to the vehicle driving monitoring data; if the first driving action is required to be executed at present, receiving a driving operation instruction of the unmanned automobile, and executing automatic driving operation of the unmanned automobile according to the driving operation instruction; if the second driving action is required to be executed currently, receiving a driving operation instruction sent by a remote control platform, and executing remote driving control operation of the unmanned automobile according to the driving operation instruction so as to enter a remote monitoring platform client mode through the remote driving control operation;
if the requirement is not satisfied, the remote control platform sends a parking instruction to the unmanned automobile by using 5G, so that the unmanned automobile executes the parking operation of the unmanned automobile according to the parking instruction.
2. The remote confirmation method based on automatic driving of claim 1, wherein the remote control platform determines whether the requirement of remote personnel on-line safety monitoring is met in a client mode of the remote monitoring platform, and the method comprises the following steps:
the remote control platform acquires attitude information of a video monitoring object within a period time in a remote monitoring platform client mode;
determining attitude change information of the video monitoring object through the attitude information based on a characterization model of a human body target;
judging whether the video monitoring object meets a preset monitoring posture or not according to the posture change information so as to obtain a posture judgment result;
and judging whether the requirements of remote personnel on-line safety monitoring are met or not according to the posture judgment result.
3. The automated driving-based remote confirmation method of claim 2, wherein before the human target-based characterization model determines the pose change information of the video surveillance object from the pose information, the method further comprises:
constructing a human body target detection algorithm based on the head-shoulder convolution characteristics according to a preset region full convolution neural network;
and determining a human body head-shoulder model through the human body target detection algorithm based on the head-shoulder convolution characteristics, and taking the human body head-shoulder model as a representation model of a human body target in a video monitoring scene.
4. The automated driving-based remote confirmation method of claim 2, wherein the human target-based characterization model determines pose change information of the video surveillance object from the pose information, comprising:
determining a gradient histogram feature vector set and a color feature vector set through the posture information based on a human body target representation model;
generating target fusion characteristics according to the gradient histogram characteristic vector set and the color characteristic vector set through a preset principal component analysis algorithm;
classifying the target fusion characteristics according to a support vector machine classifier to obtain a classification result;
and generating the attitude change information of the video monitoring object according to the classification result.
5. The automated driving-based remote validation method of claim 4, wherein the human target-based characterization model determines a gradient histogram feature vector set and a color feature vector set from the pose information, comprising:
determining a direction gradient histogram model and a hexagonal pyramid color model based on the characterization model of the human body target;
determining a gradient histogram feature vector set through the attitude information according to the direction gradient histogram model;
determining a posture image according to the hexagonal pyramid color model through the posture information, and dividing the posture image into a plurality of image blocks;
and determining the color mean value of the pixel points in the image block, and generating a color feature vector set according to the color mean value.
6. The automated driving-based remote validation method of claim 5, wherein determining a set of gradient histogram feature vectors from the pose information according to the direction gradient histogram model comprises:
determining an attitude image according to the direction gradient histogram model through the attitude information, and converting the attitude image into a gray image;
determining the gray value of a pixel point in the gray image, and determining the gradient amplitude and the gradient direction corresponding to the pixel point according to the gray value;
dividing the gray level image into a plurality of cells, and determining a histogram of gradient directions of the cells according to the gradient amplitude and the gradient directions;
selecting a preset number of histograms of the gradient directions, and generating a target image block according to the preset number of histograms of the gradient directions;
and normalizing the histogram of the gradient direction in the target image block to obtain a plurality of gradient histogram feature vectors, and generating a gradient histogram feature vector set according to the gradient histogram feature vectors.
7. The automatic driving-based remote confirmation method of claim 4, wherein generating target fusion features from the gradient histogram feature vector set and the color feature vector set by a preset principal component analysis algorithm comprises:
acquiring gradient histogram feature vectors from the gradient histogram feature vector set, and acquiring color feature vectors from the color feature vector set;
determining an original feature matrix from the gradient histogram feature vectors and the color feature vectors;
determining a covariance matrix of the original feature matrix, and determining the eigenvalues and the eigenvector matrix of the covariance matrix;
determining a transformation feature matrix of the principal components from the eigenvalues and the eigenvector matrix by the preset principal component analysis algorithm;
and fusing the original feature matrix with the transformed feature matrix to obtain a fused feature matrix, and determining the target fusion features from the fused feature matrix.
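The PCA fusion of claim 7 might look like the sketch below. The patent does not specify the fusion operator, so fusion-by-concatenation and the component count are assumptions.

```python
import numpy as np

def pca_fuse(hog_vecs, color_vecs, n_components=2):
    """Build the original feature matrix, project it onto the leading
    principal components, and fuse original + projected features."""
    X = np.hstack([hog_vecs, color_vecs])           # original feature matrix
    Xc = X - X.mean(axis=0)                         # center the features
    cov = np.cov(Xc, rowvar=False)                  # covariance matrix
    vals, vecs = np.linalg.eigh(cov)                # eigenvalues, ascending order
    order = np.argsort(vals)[::-1][:n_components]   # top components first
    W = vecs[:, order]                              # transformation feature matrix
    Z = Xc @ W                                      # transformed (PCA) features
    return np.hstack([X, Z])                        # fused feature matrix
```

The returned matrix keeps every original dimension and appends the principal-component projections, one fused row per sample.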
8. An automatic driving-based remote confirmation apparatus, comprising:
a remote monitoring platform client module, configured to acquire vehicle driving monitoring data of the unmanned automobile and determine, from the vehicle driving monitoring data, the driving action currently to be executed; when a first driving action is to be executed, to receive a driving operation instruction of the unmanned automobile and execute an automatic driving operation of the unmanned automobile according to the instruction; and when a second driving action is to be executed, to receive a driving operation instruction sent by a remote control platform and execute a remote driving control operation of the unmanned automobile according to the instruction, so as to enter the remote monitoring platform client mode through the remote driving control operation;
and a sending module, configured to send, over 5G, a parking instruction to the unmanned automobile when the condition is not met, so that the unmanned automobile executes a parking operation according to the parking instruction.
9. An automatic driving-based remote confirmation apparatus, comprising: a memory, a processor, and an automatic driving-based remote confirmation program stored on the memory and executable on the processor, the program being configured to implement the automatic driving-based remote confirmation method of any one of claims 1 to 7.
10. A storage medium having stored thereon an automatic driving-based remote confirmation program which, when executed by a processor, implements the automatic driving-based remote confirmation method of any one of claims 1 to 7.
CN202110487438.7A 2020-05-15 2021-04-30 Remote confirmation method, device and equipment based on automatic driving and storage medium Active CN112965504B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN2020104165719 2020-05-15
CN202010416571.9A CN111580522A (en) 2020-05-15 2020-05-15 Control method for unmanned vehicle, and storage medium

Publications (2)

Publication Number Publication Date
CN112965504A CN112965504A (en) 2021-06-15
CN112965504B true CN112965504B (en) 2022-08-09

Family

ID=72112845

Family Applications (7)

Application Number Title Priority Date Filing Date
CN202010416571.9A Pending CN111580522A (en) 2020-05-15 2020-05-15 Control method for unmanned vehicle, and storage medium
CN202110487556.8A Active CN112987759B (en) 2020-05-15 2021-04-30 Image processing method, device, equipment and storage medium based on automatic driving
CN202110487434.9A Active CN112965502B (en) 2020-05-15 2021-04-30 Visual tracking confirmation method, device, equipment and storage medium
CN202210628630.8A Pending CN114911242A (en) 2020-05-15 2021-04-30 Control method for unmanned vehicle, and storage medium
CN202110487466.9A Active CN113031626B (en) 2020-05-15 2021-04-30 Safety authentication method, device, equipment and storage medium based on automatic driving
CN202110487435.3A Active CN112965503B (en) 2020-05-15 2021-04-30 Multi-path camera fusion splicing method, device, equipment and storage medium
CN202110487438.7A Active CN112965504B (en) 2020-05-15 2021-04-30 Remote confirmation method, device and equipment based on automatic driving and storage medium


Country Status (1)

Country Link
CN (7) CN111580522A (en)

Families Citing this family (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112037553B (en) * 2020-08-31 2023-09-08 腾讯科技(深圳)有限公司 Remote driving method, device, system, equipment and medium
CN112305573A (en) * 2020-10-23 2021-02-02 上海伯镭智能科技有限公司 Unmanned vehicle route selection system based on big dipper
CN112613482A (en) * 2021-01-04 2021-04-06 深圳裹动智驾科技有限公司 Method and system for monitoring unmanned vehicle and computer equipment
CN112896193B (en) * 2021-03-16 2022-06-24 四川骏驰智行科技有限公司 Automobile remote auxiliary driving system and method
CN113189989B (en) * 2021-04-21 2022-07-01 东风柳州汽车有限公司 Vehicle intention prediction method, device, equipment and storage medium
CN113538311B (en) * 2021-07-22 2024-01-23 浙江赫千电子科技有限公司 Image fusion method of vehicle-mounted redundant camera based on subjective visual effect of human eyes
CN113467431A (en) * 2021-08-03 2021-10-01 上海智能新能源汽车科创功能平台有限公司 Remote monitoring and emergency intervention management system based on 5G communication
CN113320548A (en) * 2021-08-04 2021-08-31 新石器慧通(北京)科技有限公司 Vehicle control method, device, electronic equipment and storage medium
CN113865603B (en) * 2021-08-30 2024-06-07 东风柳州汽车有限公司 Shared unmanned vehicle path planning method, device, equipment and storage medium
CN113938617A (en) * 2021-09-06 2022-01-14 杭州联吉技术有限公司 Multi-channel video display method and equipment, network camera and storage medium
CN113867360A (en) * 2021-10-19 2021-12-31 北京三快在线科技有限公司 Method and device for controlling unmanned equipment based on remote accelerator
CN114115206A (en) * 2021-10-22 2022-03-01 湖南大学无锡智能控制研究院 Safe remote driving system
CN114162130B (en) * 2021-10-26 2023-06-20 东风柳州汽车有限公司 Driving assistance mode switching method, device, equipment and storage medium
CN114115207A (en) * 2021-11-23 2022-03-01 广州小鹏自动驾驶科技有限公司 Remote driving control method, equipment and system
CN114153227B (en) * 2021-11-30 2024-02-20 重庆大学 Unmanned aerial vehicle group key extraction and security authentication method based on GPS signals
CN113928283B (en) * 2021-11-30 2023-08-25 广州文远知行科技有限公司 Vehicle collision control method, device, equipment and medium
CN114545812A (en) * 2021-12-15 2022-05-27 株式会社Iat Remote vehicle driving method and system
CN114750806A (en) * 2022-05-11 2022-07-15 南京北路智控科技股份有限公司 Monorail crane remote driving method and system
CN115390484A (en) * 2022-06-27 2022-11-25 武汉路特斯科技有限公司 Vehicle remote control method and device, electronic equipment and storage medium
CN115277788B (en) * 2022-08-23 2024-04-26 石家庄开发区天远科技有限公司 Engineering vehicle remote control system and method
CN115497318A (en) * 2022-09-28 2022-12-20 东风悦享科技有限公司 Auxiliary driving platform suitable for public road remote driving
CN116916172B (en) * 2023-09-11 2024-01-09 腾讯科技(深圳)有限公司 Remote control method and related device
CN116931498B (en) * 2023-09-15 2023-11-21 北京易控智驾科技有限公司 Man-machine co-driving system, method and device

Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2338029A1 (en) * 2008-10-24 2011-06-29 Gray & Company, Inc. Control and systems for autonomously driven vehicles
CN107589745A (en) * 2017-09-22 2018-01-16 京东方科技集团股份有限公司 Drive manner, vehicle carried driving end, remotely drive end, equipment and storage medium
CN108292134A (en) * 2015-11-04 2018-07-17 祖克斯有限公司 Machine learning system and technology for optimizing remote operation and/or planner decision
EP3441838A1 (en) * 2017-08-08 2019-02-13 The Boeing Company Safety controls for network connected autonomous vehicle
CN109345836A (en) * 2018-10-26 2019-02-15 北理慧动(常熟)车辆科技有限公司 A kind of multi-mode unmanned vehicle remote control system and method
CN109409172A (en) * 2017-08-18 2019-03-01 安徽三联交通应用技术股份有限公司 Pilot's line of vision detection method, system, medium and equipment
KR20190041172A (en) * 2017-10-12 2019-04-22 엘지전자 주식회사 Autonomous vehicle and method of controlling the same
US10268191B1 (en) * 2017-07-07 2019-04-23 Zoox, Inc. Predictive teleoperator situational awareness
CN110103221A (en) * 2019-05-21 2019-08-09 深圳市超时空机器人有限公司 A kind of long-range drive manner, equipment and its system
CN110303884A (en) * 2019-07-10 2019-10-08 初速度(苏州)科技有限公司 A kind of anti-fatigue-driving method, system and device
CN110443111A (en) * 2019-06-13 2019-11-12 东风柳州汽车有限公司 Automatic Pilot target identification method
US10564638B1 (en) * 2017-07-07 2020-02-18 Zoox, Inc. Teleoperator situational awareness
CN111098863A (en) * 2019-12-12 2020-05-05 长城汽车股份有限公司 Remote driving request method and device for automatic driving vehicle and user terminal

Family Cites Families (58)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPWO2003088055A1 (en) * 2002-04-02 2005-08-25 ソニー株式会社 Data processing method for content data, recording apparatus and reproducing apparatus
JP2004081264A (en) * 2002-08-23 2004-03-18 Hitachi Medical Corp Remotely-controlled medical system, and control modality for remotely controlling device
US7200868B2 (en) * 2002-09-12 2007-04-03 Scientific-Atlanta, Inc. Apparatus for encryption key management
US8146145B2 (en) * 2004-09-30 2012-03-27 Rockstar Bidco Lp Method and apparatus for enabling enhanced control of traffic propagation through a network firewall
JP4971917B2 (en) * 2007-09-11 2012-07-11 日本放送協会 Signature generation device, signature verification device, group management device, and program thereof
CN101520832A (en) * 2008-12-22 2009-09-02 康佳集团股份有限公司 System and method for verifying file code signature
KR20120072020A (en) * 2010-12-23 2012-07-03 한국전자통신연구원 Method and apparatus for detecting run and road information of autonomous driving system
US9452528B1 (en) * 2012-03-05 2016-09-27 Vecna Technologies, Inc. Controller device and method
CN102867395A (en) * 2012-10-11 2013-01-09 南京艾酷派物联网有限公司 Remote real-time monitoring device for fatigued drivers
CA2925733A1 (en) * 2013-09-30 2015-04-02 Huawei Technologies Co., Ltd. Encryption and decryption processing method, apparatus, and device
CN103594003B (en) * 2013-11-13 2015-11-25 安徽三联交通应用技术股份有限公司 A kind of for the remote monitoring of driver and the method for abnormity early warning
US9988047B2 (en) * 2013-12-12 2018-06-05 Magna Electronics Inc. Vehicle control system with traffic driving control
US10613627B2 (en) * 2014-05-12 2020-04-07 Immersion Corporation Systems and methods for providing haptic feedback for remote interactions
CN104202541A (en) * 2014-09-26 2014-12-10 北京华建纵横科技有限公司 Image synthesizer
CN105791258A (en) * 2014-12-26 2016-07-20 ***通信集团上海有限公司 Data transmission method, terminal and open platform
CN105049213A (en) * 2015-07-27 2015-11-11 小米科技有限责任公司 File signature method and device
CN105979517B (en) * 2015-11-10 2020-01-03 法法汽车(中国)有限公司 Network data transmission method and device based on vehicle
HRP20231656T1 (en) * 2016-04-14 2024-03-15 Rhombus Systems Group, Inc. System for verification of integrity of unmanned aerial vehicles
CN105704164B (en) * 2016-04-26 2018-12-07 威马汽车科技集团有限公司 Automotive safety monitoring method
CN105812129B (en) * 2016-05-10 2018-12-18 威马汽车科技集团有限公司 Travel condition of vehicle monitoring method
CN106056100B (en) * 2016-06-28 2019-03-08 重庆邮电大学 A kind of vehicle assisted location method based on lane detection and target following
CN106157304A (en) * 2016-07-01 2016-11-23 成都通甲优博科技有限责任公司 A kind of Panoramagram montage method based on multiple cameras and system
US10268844B2 (en) * 2016-08-08 2019-04-23 Data I/O Corporation Embedding foundational root of trust using security algorithms
JP6260067B1 (en) * 2016-08-09 2018-01-17 Kddi株式会社 Management system, key generation device, in-vehicle computer, management method, and computer program
WO2018067867A1 (en) * 2016-10-07 2018-04-12 Cyber Physical Systems, Inc. System and method for driving condition detection and notification
CN106394545A (en) * 2016-10-09 2017-02-15 北京汽车集团有限公司 Driving system, unmanned vehicle and vehicle remote control terminal
JP7160251B2 (en) * 2017-01-12 2022-10-25 モービルアイ ビジョン テクノロジーズ リミテッド Navigation system, method and program
US20180205729A1 (en) * 2017-01-13 2018-07-19 GM Global Technology Operations LLC Method and apparatus for encryption, decryption and authentication
JP6938177B2 (en) * 2017-03-14 2021-09-22 パイオニア株式会社 Control device, control method, and program
CN107465673A (en) * 2017-07-27 2017-12-12 深圳市易成自动驾驶技术有限公司 Identity identifying method, device and the computer-readable recording medium of vehicle
TW201911255A (en) * 2017-08-08 2019-03-16 洪奇麟 Method, application, and apparatus capable of increasing driving safety or driving convenience with a water removal unit to provide a viewing area with less water content
US10437247B2 (en) * 2017-08-10 2019-10-08 Udelv Inc. Multi-stage operation of autonomous vehicles
CN107360413A (en) * 2017-08-25 2017-11-17 秦山 A kind of multi-view stereo image method for transmitting signals and system
KR101842009B1 (en) * 2017-08-31 2018-05-14 영남대학교 산학협력단 System and authentication method for vehicle remote key entry
JP2019043496A (en) * 2017-09-07 2019-03-22 株式会社デンソー Device, system and method for adjusting automatic operation
KR101852791B1 (en) * 2017-10-16 2018-04-27 (주)케이스마텍 Certification service system and method using user mobile terminal based secure world
CN108111604B (en) * 2017-12-21 2020-08-14 广州广电运通金融电子股份有限公司 Block chain consensus method, device and system, and identification information processing method and device
CN110070641A (en) * 2018-01-22 2019-07-30 江苏迪纳数字科技股份有限公司 A kind of intelligent travelling crane recorder of no screen
DE102018202738A1 (en) * 2018-02-23 2019-08-29 Bayerische Motoren Werke Aktiengesellschaft Remote-controlled parking assistance system with autonomous decision on the presence of a parking or Ausparkituation and corresponding parking method
CN108428357B (en) * 2018-03-22 2020-08-18 青岛慧拓智能机器有限公司 Parallel remote control driving system for intelligent internet vehicle
CN110549990A (en) * 2018-05-30 2019-12-10 郑州宇通客车股份有限公司 remote starting control method and system for unmanned vehicle
CN110780665B (en) * 2018-07-26 2022-02-08 比亚迪股份有限公司 Vehicle unmanned control method and device
CN109188932A (en) * 2018-08-22 2019-01-11 吉林大学 A kind of multi-cam assemblage on-orbit test method and system towards intelligent driving
CN110874945A (en) * 2018-08-31 2020-03-10 百度在线网络技术(北京)有限公司 Roadside sensing system based on vehicle-road cooperation and vehicle control method thereof
CN110969592B (en) * 2018-09-29 2024-03-29 北京嘀嘀无限科技发展有限公司 Image fusion method, automatic driving control method, device and equipment
US20200041995A1 (en) * 2018-10-10 2020-02-06 Waymo Llc Method for realtime remote-operation of self-driving cars by forward scene prediction.
CN109598244B (en) * 2018-12-07 2023-08-22 吉林大学 Traffic signal lamp identification system and identification method thereof
CN109711349B (en) * 2018-12-28 2022-06-28 百度在线网络技术(北京)有限公司 Method and device for generating control instruction
CN109808709B (en) * 2019-01-15 2021-08-03 北京百度网讯科技有限公司 Vehicle driving guarantee method, device and equipment and readable storage medium
CN109992953A (en) * 2019-02-18 2019-07-09 深圳壹账通智能科技有限公司 Digital certificate on block chain signs and issues, verification method, equipment, system and medium
CN110032176A (en) * 2019-05-16 2019-07-19 广州文远知行科技有限公司 Long-range adapting method, device, equipment and the storage medium of pilotless automobile
CN210405369U (en) * 2019-05-28 2020-04-24 长安大学 Automatic driving vehicle controller
CN110557738B (en) * 2019-07-12 2022-06-07 安徽中科美络信息技术有限公司 Vehicle monitoring information safe transmission method and system
CN110300285B (en) * 2019-07-17 2021-09-10 北京智行者科技有限公司 Panoramic video acquisition method and system based on unmanned platform
CN110796763A (en) * 2019-09-24 2020-02-14 北京汽车集团有限公司 Vehicle state data processing method, device and system
CN110912690A (en) * 2019-11-01 2020-03-24 中国第一汽车股份有限公司 Data encryption and decryption method, vehicle and storage medium
CN110884428B (en) * 2019-11-11 2022-10-11 长春理工大学 Vehicle-mounted panoramic driving auxiliary device and method based on catadioptric panoramic camera
CN110850855A (en) * 2019-11-26 2020-02-28 奇瑞汽车股份有限公司 Wireless network remote vehicle control system and method


Also Published As

Publication number Publication date
CN112965503A (en) 2021-06-15
CN111580522A (en) 2020-08-25
CN112987759B (en) 2023-06-30
CN113031626B (en) 2022-09-06
CN112965502A (en) 2021-06-15
CN112965504A (en) 2021-06-15
CN112987759A (en) 2021-06-18
CN114911242A (en) 2022-08-16
CN112965503B (en) 2022-09-16
CN113031626A (en) 2021-06-25
CN112965502B (en) 2023-03-10

Similar Documents

Publication Publication Date Title
CN112965504B (en) Remote confirmation method, device and equipment based on automatic driving and storage medium
US11747809B1 (en) System and method for evaluating the perception system of an autonomous vehicle
JP7332726B2 (en) Detecting Driver Attention Using Heatmaps
CN106485233B (en) Method and device for detecting travelable area and electronic equipment
KR102043060B1 (en) Autonomous drive apparatus and vehicle including the same
CN108638999B (en) Anti-collision early warning system and method based on 360-degree look-around input
EP3343438A1 (en) Automatic parking system and automatic parking method
US11112791B2 (en) Selective compression of image data during teleoperation of a vehicle
CN110678872A (en) Direct vehicle detection as 3D bounding box by using neural network image processing
US9096233B2 (en) Visual confirmation evaluating apparatus and method
WO2021096629A1 (en) Geometry-aware instance segmentation in stereo image capture processes
CN111736604A (en) Remote driving control method, device, equipment and storage medium
US10522041B2 (en) Display device control method and display device
DE112018004507T5 (en) INFORMATION PROCESSING DEVICE, MOTION DEVICE AND METHOD AND PROGRAM
US20190135169A1 (en) Vehicle communication system using projected light
CN112084232A (en) Vehicle driving risk assessment method and device based on visual field information of other target vehicles
CN114764782A (en) Image synthesis in multi-view automotive and robotic systems
WO2021217575A1 (en) Identification method and identification device for object of interest of user
CN114119955A (en) Method and device for detecting potential dangerous target
CN115402322A (en) Intersection driving assistance method and system, electronic device and storage medium
CN114298908A (en) Obstacle display method and device, electronic equipment and storage medium
EP4102323B1 (en) Vehicle remote control device, vehicle remote control system, vehicle remote control method, and vehicle remote control program
US20220324387A1 (en) Display control system, display control method, and non-transitory storage medium
EP3896604A1 (en) Vehicle driving and monitoring system; method for maintaining a sufficient level of situational awareness; computer program and computer readable medium for implementing the method
JP7487178B2 (en) Information processing method, program, and information processing device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant