CN111243335A - Scene description method in autonomous unmanned system - Google Patents

Scene description method in autonomous unmanned system

Info

Publication number
CN111243335A
Authority
CN
China
Prior art keywords
scene
autonomous unmanned
unmanned system
library
elements
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010061799.0A
Other languages
Chinese (zh)
Other versions
CN111243335B (en)
Inventor
李玉峰
曹晨红
朱泓艺
陆肖元
王鹏
李江涛
姜超
张瑰琦
岳玲
马启皓
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Transpacific Technology Development Ltd
Shanghai Broadband Technology and Application Engineering Research Center
Original Assignee
Beijing Transpacific Technology Development Ltd
Shanghai Broadband Technology and Application Engineering Research Center
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Transpacific Technology Development Ltd, Shanghai Broadband Technology and Application Engineering Research Center filed Critical Beijing Transpacific Technology Development Ltd
Priority to CN202010061799.0A priority Critical patent/CN111243335B/en
Publication of CN111243335A publication Critical patent/CN111243335A/en
Application granted granted Critical
Publication of CN111243335B publication Critical patent/CN111243335B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G08 SIGNALLING
    • G08G TRAFFIC CONTROL SYSTEMS
    • G08G1/00 Traffic control systems for road vehicles
    • G08G1/16 Anti-collision systems
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/25 Fusion techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G PHYSICS
    • G08 SIGNALLING
    • G08G TRAFFIC CONTROL SYSTEMS
    • G08G5/00 Traffic control systems for aircraft, e.g. air-traffic control [ATC]
    • G08G5/04 Anti-collision systems
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00 Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/02 Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Multimedia (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Image Analysis (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

The invention discloses a scene description method in an autonomous unmanned system, which comprises the following steps: an external scene library and an internal scene library of the autonomous unmanned system are constructed offline. The external scene library comprises a static and dynamic scene element library, an operation scene library, and an abnormal event library. The internal scene library comprises behavior decision information, execution information of devices such as the motion controller, and internal state information of the control and execution systems. While the autonomous unmanned system runs, its environment perception data are acquired, the key static and dynamic scene elements in the external scene are extracted, defined through predefined classes, and described in a structured language, and the corresponding internal scene state information is recorded. At the same time, the occurrence of abnormal events is monitored; when an abnormal event occurs, the full perception data within a fixed time window before and after the event are stored. The invention can effectively improve the robustness and safety guarantees of the autonomous unmanned system.

Description

Scene description method in autonomous unmanned system
Technical Field
The invention relates to a scene description method in an autonomous unmanned system.
Background
The autonomous unmanned system is one of the important applications of artificial intelligence and a landmark achievement of its development. Autonomous unmanned systems can replace humans in executing predetermined tasks in various environments and are widely applied to unmanned aerial vehicles, unmanned vehicles, automatic driving of rail transit, and service robots such as household cleaning robots and nursing robots. For an autonomous unmanned system, scene understanding is the core of autonomous motion. On the basis of existing scene understanding technology, rapidly capturing and recording the effective external scene and the internal decisions and operations of the autonomous unmanned system in real time makes it possible to evaluate the safety performance of the system more effectively; to analyze, capture, and give early warning of potential safety accidents or abnormalities, find their causes, and make targeted improvements; and to analyze the causes of safety accidents effectively, clarify safety responsibility, and promote the development and improvement of unmanned system technology.
The traditional scene recording approach directly stores the recorded video or other sensing data, which is not suitable for scene recording in an autonomous unmanned system. First, the limited storage capacity of the autonomous unmanned system cannot support long-duration recording of raw sensor data; for example, a 16 GB memory card can only hold about 120 minutes of 1080p video. The prior art can filter out still video frames or frames whose picture does not change to reduce the storage volume of video image data, which effectively reduces the data stored by a fixed camera. However, an autonomous unmanned system runs for long periods in a complex and changing environment, with diverse sensing data and large data volumes, so recording raw data such as video is inefficient and unsuitable. Second, conventional video recording cannot rapidly capture and record the internal decision and operation information of the autonomous unmanned system in real time, nor can it record the internal decision information and the external scene in a normalized way.
In general, concise, efficient, and accurate scene recording is one of the key technologies supporting the safe evolution of autonomous unmanned systems.
At present, external scene understanding in autonomous unmanned systems is a relatively mature technology: by fusing and analyzing data from multiple sensors, scene measurements, symbols, and conceptual information closely related to the executed task are obtained, providing a basis for the autonomous planning and decision-making of the system's behavior. However, a method for effectively recording the understood scene is still lacking. Building on scene understanding, the method disclosed by the invention can markedly reduce the overhead of external scene recording and can accurately restore the original scene from the record, reproducing the key scene without loss.
Disclosure of Invention
In view of the defects in the prior art, the invention aims to provide a scene description method for autonomous unmanned systems. For the specific scenes of the autonomous unmanned system field, a simple and accurate scene description language is provided to quickly capture and record, in real time, the effective external scene and the internal decisions and operations of the autonomous unmanned system, thereby realizing low-cost scene description and recording for scene reproduction, sharing, and analysis, and for rapid scene generation in a simulation environment. The method is expected to become an auxiliary means for safety tracking, feedback, and improvement of autonomous unmanned systems, effectively supporting their continuous evolution.
To achieve this aim, the invention adopts the following technical scheme:
a method of scene description in an autonomous unmanned system, comprising the steps of:
(1) constructing an external scene library and an internal scene library of the autonomous unmanned system in an offline manner;
(2) establishing a topological structure semantic library of a scene, and describing the geometric position relation of scene elements relative to an autonomous unmanned system;
(3) establishing an abnormal event library of the scene and setting abnormal event trigger conditions, for example defining dangerous collision events and multi-sensor data contradiction events; monitoring the occurrence of abnormal events and, when an abnormal event occurs, storing the full perception data within a fixed time window before and after the event;
(4) obtaining environment perception data of the autonomous unmanned system, extracting the key static and dynamic scene elements in the external scene, defining them through predefined classes, describing them in a structured language, and recording the corresponding internal scene information.
The step (1) specifically comprises the following steps:
(1.1) establishing an external scene element class library of the autonomous unmanned system, comprising static scene elements and dynamic scene elements. Static scene elements are the static objects in the environment and their attribute values, such as the roads, their lane counts and boundaries, and the traffic lights and their color states in an unmanned vehicle driving scene; dynamic scene elements are the moving objects in the environment and their attribute values, such as vehicles, pedestrians and other moving objects together with their three-dimensional and motion information.
(1.2) establishing an internal scene element class library of the autonomous unmanned system, comprising behavior decision information, execution information of devices such as the motion controller, and internal state information of the control system and the execution system.
(1.3) establishing operation scene libraries for the autonomous unmanned system, wherein each scene library comprises the characteristic attributes of that scene; for example, the operation scenes of an unmanned vehicle can be divided into expressways, urban roads, rural roads and so on, and the weather attribute of the operation scene is defined, such as sunny, rainy, or snowy weather. In addition, different operation scene libraries contain different scene libraries according to the characteristics of the scene.
The scene element class library follows an object-oriented design and is defined in terms of classes: some classes and their default attributes are predefined in the library, classes can be inherited, and their attributes can be dynamically extended as required.
The step (4) specifically comprises the following steps:
and (4.1) acquiring various environment perception data of the autonomous unmanned system, including data acquired by various local sensors of the autonomous unmanned system, and data transmitted by other unmanned system equipment or Internet of things equipment through a wireless network (such as Internet of vehicles).
And (4.2) removing scene redundant information according to the environment perception data obtained in the step (4.1) and combining result information of scene understanding, and extracting key static scene elements and dynamic scene elements related to autonomous unmanned system action decision.
And (4.3) scene description is carried out in a structured language mode, the visual reproduction characteristic is achieved, and key scene information related to unmanned system decision can be reproduced according to the scene description language.
The step (4.3) specifically comprises the following steps:
(4.3.1) an object ego referring to the autonomous unmanned system ontology is created and used as the reference base point of the scene; the reference base point is the starting point for scene description and scene restoration, and the basic attributes of the ontology object include its global position (e.g., GPS position information), sight distance, traveling direction, speed, and three-dimensional size (height, width).
(4.3.2) a body coordinate system is established with the autonomous unmanned system body as the origin and the body orientation as the coordinate axis direction, so that the relative position of a scene element in the body coordinate system can be described simply and accurately through various position specifiers.
(4.3.3) static scene elements are extracted from the environment perception data of the autonomous unmanned system, combined with the prior information provided by a map to achieve robust and accurate extraction, and the element instances in the scene are instantiated with the predefined scene library.
(4.3.4) dynamic scene elements are extracted from the environment perception data of the autonomous unmanned system, describing the position and motion state of moving objects such as vehicles and pedestrians within the visual or perception range, together with the context constraints of the corresponding static scene elements.
In step (4), the description is in a structured language: the structured scene description is composed of statements that declare or extend scene element instances, statements that declare the relative spatial positions and offsets of scene elements, or imports of other files composed of these statements; each statement comprises an optional member list indented by one unit relative to the statement itself, and the indentation strictly represents the hierarchy of the structured scene description language and the position of each member within it.
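As a concrete illustration of how such an indentation-structured description could be emitted, the following Python sketch renders scene element statements with one indentation unit per hierarchy level. The statement names, member keys, and rendering format are assumptions for illustration only; the patent does not specify this implementation.

```python
# Hypothetical sketch (not the patent's actual grammar): render scene-element
# statements as an indentation-structured description, one indent unit per level.
from typing import Any, Dict, List

INDENT = "    "  # one indentation unit

def render_statement(name: str, members: Dict[str, Any], level: int = 0) -> List[str]:
    """Render one statement plus its optional member list, indented one unit deeper."""
    lines = [INDENT * level + name]
    for key, value in members.items():
        if isinstance(value, dict):                      # nested member -> deeper level
            lines += render_statement(key, value, level + 1)
        else:
            lines.append(INDENT * (level + 1) + f"{key}: {value}")
    return lines

# Example: declare the ego object and one dynamic element with a relative offset.
scene = [
    render_statement("EgoCar ego", {"speed": "8.3 m/s", "heading": "30 deg"}),
    render_statement("Vehicle car1", {"position": "-2 @ 3", "relative_to": "ego"}),
]
print("\n".join(line for statement in scene for line in statement))
```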
Compared with the prior art, the invention has the beneficial effects that:
the method of the invention realizes the rapid capture and recording of the actual scene by providing a simple and accurate high-level language, thereby effectively reducing the overhead of scene recording; furthermore, the provided scene description language supports reuse and sharing of scenes, and is used for rapidly generating scenes in simulation so as to identify unknown dangerous cases in the autonomous unmanned system, and robustness and safety guarantee of the autonomous unmanned system are effectively improved.
Drawings
FIG. 1 is a schematic flow diagram of a process for carrying out the present invention.
FIG. 2 is a schematic diagram of an external scene library and an internal scene library constructed by the method of the present invention.
Fig. 3 is a composition example of an autonomous unmanned system scenario description method.
Fig. 4 is a schematic representation of each position in the body coordinate system.
Detailed Description
Specific embodiments of the present invention will be further described with reference to the accompanying drawings.
As shown in fig. 1, a scene description method in an autonomous unmanned system includes the following steps:
(1) an external scene library and an internal scene library of the autonomous unmanned system are constructed in an offline manner, as shown in fig. 2:
(1.1) establishing an external scene element class library of the autonomous unmanned system, comprising static scene elements and dynamic scene elements. Static scene elements are the static objects in the environment and their attribute values, such as the roads, their lane counts and boundaries, and the traffic lights and their color states in an unmanned vehicle driving scene; dynamic scene elements are the moving objects in the environment and their attribute values, such as vehicles, pedestrians and other moving objects together with their three-dimensional and motion information.
Specifically, scene elements follow an object-oriented design and are defined as classes: some classes and their default attributes are predefined in the scene element class library, classes can be inherited, and their attributes can be dynamically extended as required. The classes are defined as follows:
[Embedded table DEST_PATH_IMAGE001: predefined scene element class definitions]
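As a rough illustration of what such a class library could look like in code, the Python sketch below models predefined classes with default attributes, inheritance, and dynamically extensible attributes. All class and attribute names here are assumptions; the patent's actual class definitions are those in the embedded table above.

```python
# Illustrative sketch only: predefined scene-element classes with default attributes,
# inheritance, and attributes that can be extended dynamically, as in step (1.1).
class SceneElement:
    defaults = {"id": None, "position": None}
    def __init__(self, **attrs):
        # Merge defaults from the whole inheritance chain, then apply overrides.
        merged = {}
        for cls in reversed(type(self).__mro__):
            merged.update(getattr(cls, "defaults", {}))
        merged.update(attrs)          # dynamic extension / override as required
        self.attrs = merged

class StaticElement(SceneElement):
    """Static object and its attribute values, e.g. a road or a traffic light."""

class Road(StaticElement):
    defaults = {"lane_count": 1, "boundary": None}

class TrafficLight(StaticElement):
    defaults = {"color_state": "unknown"}

class DynamicElement(SceneElement):
    """Moving object with 3D size and motion information."""
    defaults = {"size": None, "speed": 0.0, "heading": 0.0}

class Vehicle(DynamicElement):
    defaults = {"vehicle_type": "car"}

# An inherited class instance with an extra attribute added on demand.
road = Road(lane_count=2, crosswalk=True)
print(road.attrs)
```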
(1.2) establishing an internal scene library of the autonomous unmanned system, comprising behavior decision information, execution information of devices such as the motion controller, and internal state information of the control system and the execution system. Taking an unmanned vehicle system as an example, its internal scene includes driving, transmission, steering, and braking control commands, as well as the engine working mode, lighting system state, wiper system state, and brake system state; the brake system state further includes the brake pedal position signal, ESC working state, ABS working state, TRC working state, and brake boosting system state.
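A minimal sketch of how one internal-scene record might be structured is shown below; every field name is an illustrative assumption rather than the patent's schema.

```python
# Hypothetical sketch of one internal-scene record for an unmanned vehicle, covering
# behavior decisions, actuator commands, and subsystem states named in step (1.2).
from dataclasses import dataclass, field

@dataclass
class BrakeSystemState:
    pedal_position: float = 0.0      # brake pedal position signal
    esc_active: bool = False         # ESC working state
    abs_active: bool = False         # ABS working state
    trc_active: bool = False         # TRC working state
    booster_ok: bool = True          # brake boosting system state

@dataclass
class InternalSceneRecord:
    timestamp: float = 0.0
    decision: str = ""                        # behavior decision, e.g. "lane_change_left"
    drive_cmd: float = 0.0                    # driving command
    steer_cmd: float = 0.0                    # steering command
    brake_cmd: float = 0.0                    # braking command
    engine_mode: str = "normal"               # engine working mode
    lighting_on: bool = False                 # lighting system state
    wiper_on: bool = False                    # wiper system state
    brake_state: BrakeSystemState = field(default_factory=BrakeSystemState)
```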
(1.3) establishing operation scene libraries for the autonomous unmanned system, wherein each scene library comprises the characteristic attributes of that scene; for example, the operation scenes of an unmanned vehicle can be divided into expressways, urban roads, rural roads and so on, and the weather attribute of the operation scene is defined, such as sunny, rainy, or snowy weather. In addition, different operation scene libraries contain different scene libraries according to the characteristics of the scene.
(2) Establishing a topological structure semantic library of the scene, defining position specifiers and orientation specifiers, and describing the geometric position relationship of scene elements relative to the autonomous unmanned system;
Specifically, the vector X @ Y represents a spatial position and offset in meters; for example, -2 @ 3 represents 2 meters to the left and 3 meters forward in the local coordinate system. The position specifiers are as follows:
[Embedded table DEST_PATH_IMAGE002: position specifier definitions]
A heading represents a direction in space and can conveniently be represented on the 2D plane by a single counterclockwise angle from true north. The orientation specifiers are as follows:
[Embedded table DEST_PATH_IMAGE003: heading (orientation) specifier definitions]
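The following Python sketch illustrates the assumed semantics of the X @ Y offset and the heading angle described above; the helper names (parse_offset, left_of, ahead_of) are placeholders, since the actual specifier tables are in the embedded figures.

```python
# Minimal sketch (assumed semantics): body-frame offset "X @ Y" in meters and a
# heading as a counterclockwise angle from true north, per step (2).
import math
from dataclasses import dataclass

@dataclass
class Offset:
    x: float   # meters to the right of the body (negative = left)
    y: float   # meters ahead of the body (negative = behind)

def parse_offset(text: str) -> Offset:
    """Parse a vector written as 'X @ Y', e.g. '-2 @ 3' = 2 m left, 3 m forward."""
    x, y = (float(part) for part in text.split("@"))
    return Offset(x, y)

def left_of(meters: float) -> Offset:
    return Offset(-meters, 0.0)

def ahead_of(meters: float) -> Offset:
    return Offset(0.0, meters)

def heading_from_north(degrees_ccw: float) -> float:
    """A heading as a single counterclockwise angle from true north, in radians."""
    return math.radians(degrees_ccw)

print(parse_offset("-2 @ 3"))   # Offset(x=-2.0, y=3.0)
```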
(3) establishing an abnormal event library of the scene, setting abnormal event trigger conditions, monitoring the occurrence of abnormal events, and storing the full perception data within a fixed time window before and after an abnormal event occurs;
and (3.1) defining abnormal events, taking an unmanned vehicle driving scene as an example, wherein the abnormal events comprise collisions among scene elements, unrecognizable close-distance key elements, various sensor data collisions and the like. The sensor conflict refers to that information detected by different sensors is inconsistent, for example, no dynamic object passes through a picture shot by a camera, and an ultrasonic sensor detects that an object passes through the picture.
(3.2) when an abnormal event occurs, triggering full-information recording of the perception data related to the event, i.e. storing the raw perception data within a fixed time window before and after the event.
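One possible realization of this trigger-and-dump behavior is sketched below with a ring buffer; the window lengths, frame rate, and the conflict check used as the trigger are illustrative assumptions only.

```python
# Hedged sketch of step (3.2): keep recent raw perception frames in a ring buffer and,
# when an abnormal event triggers, dump everything within a fixed window before and
# after the event.
from collections import deque

class AnomalyRecorder:
    def __init__(self, pre_seconds=10.0, post_seconds=10.0, fps=10):
        self.pre = deque(maxlen=int(pre_seconds * fps))   # frames before the event
        self.post_frames_left = 0
        self.post_total = int(post_seconds * fps)
        self.dump = []                                    # full-information record

    def on_frame(self, t, camera_objects, ultrasonic_hit):
        frame = {"t": t, "camera": camera_objects, "ultrasonic": ultrasonic_hit}
        # Example trigger: multi-sensor conflict (camera sees nothing, ultrasonic does).
        if ultrasonic_hit and not camera_objects:
            self.dump.extend(self.pre)                    # raw data before the event
            self.post_frames_left = self.post_total
        if self.post_frames_left > 0:
            self.dump.append(frame)                       # raw data after the event
            self.post_frames_left -= 1
        else:
            self.pre.append(frame)

recorder = AnomalyRecorder()
recorder.on_frame(0.0, camera_objects=[], ultrasonic_hit=False)
recorder.on_frame(0.1, camera_objects=[], ultrasonic_hit=True)   # conflict -> record
```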
(4) Obtaining environment perception data of the autonomous unmanned system, extracting the key static and dynamic scene elements in the external scene, defining them through predefined classes, describing them in a structured language, and recording the corresponding internal scene information, comprising the following steps:
(4.1) acquiring the various environment perception data of the autonomous unmanned system, including data acquired by the system's local sensors and data transmitted by other unmanned systems or Internet-of-things devices over a wireless network (such as the Internet of Vehicles). The scene description system accesses sensor data through a sensor software module, and accesses perception data transmitted by other autonomous unmanned systems or the Internet-of-things infrastructure through an Internet-of-things module.
(4.2) based on the environment perception data obtained in step (4.1), combined with the results of scene understanding, removing redundant scene information and extracting the key static and dynamic scene elements related to the autonomous unmanned system's action decisions;
Optionally, scene understanding of the acquired environment perception data may be implemented by a system module of the autonomous unmanned system or by the scene description system itself. The scene description system provides a basic computer-vision target recognition function and can fuse the speed, range, and bearing measurements of the ultrasonic sensor and the microwave radar onto a target. Furthermore, the scene description system removes, through a scene redundancy elimination module, the scene information irrelevant to the autonomous unmanned system's action decisions; for example, in an unmanned vehicle driving scene, the buildings on both sides of the road and distant scene information can be eliminated.
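The redundancy-elimination idea can be illustrated with a simple relevance filter; the categories and the distance threshold below are assumptions for illustration, not values taken from the patent.

```python
# Illustrative sketch of step (4.2): keep only the scene elements plausibly relevant
# to the action decision (here, by category and by distance from the ego body).
DECISION_RELEVANT = {"vehicle", "pedestrian", "traffic_light", "lane", "obstacle"}
MAX_RANGE_M = 80.0

def is_key_element(element: dict) -> bool:
    """element: {'category': str, 'distance': float (meters from ego), ...}"""
    return element["category"] in DECISION_RELEVANT and element["distance"] <= MAX_RANGE_M

perceived = [
    {"category": "building", "distance": 15.0},     # roadside building -> redundant
    {"category": "vehicle", "distance": 12.5},      # nearby vehicle -> kept
    {"category": "pedestrian", "distance": 150.0},  # too far away -> redundant
]
key_elements = [e for e in perceived if is_key_element(e)]
```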
(4.3) generating corresponding instances according to the object types (dynamic or static scene elements) of the key scene elements identified in step (4.2) and describing them; the pose and semantic relationships between the scene elements and the ontology are described in a structured language that is visually reproducible, so that the key scene information related to the unmanned system's decisions can be reproduced from the scene description language. This specifically comprises the following steps:
(4.3.1) an object instance ego referring to the autonomous unmanned system ontology is created and used as the reference base point of the scene; the reference base point is the starting point for scene description and scene restoration, and the basic attributes of the ontology object include its global position (e.g., GPS position information), sight distance, traveling direction, speed, and three-dimensional size (height, width). For example, a simple ontology object EgoCar instance is created as follows:
[Embedded figure DEST_PATH_IMAGE004: example EgoCar instance statement]
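The embedded figure contains the patent's actual EgoCar statement; as a hedged stand-in, the following Python sketch shows an ego object carrying the attributes listed above (global position, sight distance, traveling direction, speed, and 3D size). The field names and values are illustrative only.

```python
# Hedged sketch of step (4.3.1): an ego/ontology object used as the scene's
# reference base point, with assumed attribute names and example values.
from dataclasses import dataclass

@dataclass
class EgoCar:
    gps: tuple              # global position (latitude, longitude)
    sight_distance: float   # perception / line-of-sight range in meters
    heading: float          # traveling direction, CCW angle from true north (deg)
    speed: float            # m/s
    height: float           # three-dimensional size information
    width: float

ego = EgoCar(gps=(31.2304, 121.4737), sight_distance=80.0,
             heading=30.0, speed=8.3, height=1.5, width=1.8)
```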
and (4.3.2) establishing a body coordinate system, taking the autonomous unmanned system body as an origin, taking the body orientation as a coordinate axis direction, and simply and accurately describing the relative position of the scene element in the body coordinate system through various position descriptors. The representation of each position in the body coordinate system is shown in fig. 4.
And (4.3.3) extracting static scene elements from the environment perception data of the autonomous unmanned system, combining prior information provided by a map, realizing robust and accurate extraction of the static scene elements, and instantiating element instances in the scene by using a predefined scene library.
(4.3.4) extracting dynamic scene elements from the environment perception data of the autonomous unmanned system, describing the position and motion state of moving objects such as vehicles, pedestrians, etc. within the visual range or perception range, and the context constraint of the corresponding static scene elements.
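A minimal sketch of such a dynamic-element description, with assumed field names, could look like this:

```python
# Sketch of step (4.3.4): describe each moving object by its body-frame position,
# motion state, and the static element it is contextually bound to (e.g. its lane).
from dataclasses import dataclass

@dataclass
class DynamicDescription:
    category: str        # "vehicle", "pedestrian", ...
    offset: str          # body-frame position, e.g. "-2 @ 3"
    speed: float         # m/s
    heading: float       # deg, CCW from true north
    bound_to: str        # context constraint: id of the related static element

car1 = DynamicDescription(category="vehicle", offset="3.5 @ 20",
                          speed=10.0, heading=28.0, bound_to="lane_right")
```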
On the basis of the above steps, the invention provides an example of the composition of an autonomous unmanned system scene description system, as shown in fig. 3, comprising: external scene understanding, abnormal event detection and full recording, internal scene recording, and a structured language description system. It has the following characteristics: 1) one-way writing and one-way reading; 2) normalized recording of the external and internal scenes with time as the common axis; 3) support for both language-based recording and raw video recording to distinguish scenes.

Claims (6)

1. A method of scene description in an autonomous unmanned system, comprising the steps of:
(1) constructing an external scene library and an internal scene library of the autonomous unmanned system in an offline manner;
(2) establishing a topological structure semantic library of a scene, and describing the geometric position relation of scene elements relative to an autonomous unmanned system;
(3) establishing an abnormal event library of a scene, setting an abnormal event triggering condition, monitoring the occurrence of an abnormal event, and storing all information of perception data in a fixed time period before and after the occurrence of the abnormal event;
(4) the method comprises the steps of obtaining environment perception data of the autonomous unmanned system, extracting key static scene elements and dynamic scene elements in an external scene, defining through predefined classes, describing in a structured language mode, and recording corresponding internal scene information.
2. The method for describing scenes in an autonomous unmanned system according to claim 1, wherein the step (1) comprises the following steps:
(1.1) establishing an external scene element category library of the autonomous unmanned system, wherein the external scene element category library comprises static scene elements and dynamic scene elements, and the static scene elements refer to static objects and attribute characteristic values thereof in the environment; the dynamic scene element refers to an object moving in the environment and an attribute characteristic value thereof;
(1.2) establishing an internal scene element class library of the autonomous unmanned system, wherein the internal scene element class library comprises execution information of equipment and internal state information of a control system and the execution system;
(1.3) establishing operation scene libraries for the autonomous unmanned system, wherein each scene library comprises the characteristic attributes of the scene and defines the weather attribute of the operation scene; different operation scene libraries comprise different scene libraries according to the characteristics of the scene.
3. The method for describing scenes in an autonomous unmanned system as claimed in claim 2, wherein the scene element class library follows an object-oriented design and is defined in terms of classes; some classes and their default attributes are predefined in the scene element class library, classes can be inherited, and their attributes can be dynamically extended as required.
4. The method for describing scenes in an autonomous unmanned system according to claim 1, wherein the step (4) comprises the steps of:
(4.1) acquiring various environment perception data of the autonomous unmanned system, including data acquired by various local sensors of the autonomous unmanned system and data transmitted by other unmanned system equipment or Internet of things equipment through a wireless network;
(4.2) according to the environmental perception data obtained in the step (4.1), scene redundant information is removed in combination with result information of scene understanding, and key static scene elements and dynamic scene elements related to autonomous unmanned system action decision are extracted;
(4.3) describing the scene in a structured language that is visually reproducible, so that the key scene information related to the unmanned system's decisions can be reproduced from the scene description language.
5. Method for scene description in an autonomous unmanned system according to claim 1, characterized in that said step (4.3) comprises in particular the steps of:
(4.3.1) creating an object ego referring to the autonomous unmanned system ontology and as a reference base point of the scene; the reference base point refers to a starting point of scene description and scene restoration, and the basic attributes of the body object comprise global position, sight distance, advancing direction, speed and three-dimensional information;
(4.3.2) establishing a body coordinate system, taking the autonomous unmanned system body as an origin, taking the body orientation as a coordinate axis direction, and describing the relative position of the scene element in the body coordinate system through various position descriptors;
(4.3.3) extracting static scene elements from the environment perception data of the autonomous unmanned system, combining prior information provided by a map, realizing robust and accurate extraction of the static scene elements, and instantiating element examples in a scene by using a predefined scene library;
(4.3.4) extracting dynamic scene elements from the environment perception data of the autonomous unmanned system, and describing the position and motion state of moving objects within the visual or perception range, together with the context constraints of the corresponding static scene elements.
6. The method for describing scenes in an autonomous unmanned system according to claim 1, wherein in step (4) the description is in a structured language: the structured scene description is composed of statements that declare or extend scene element instances, statements that declare the relative spatial positions and offsets of scene elements, or imports of other files composed of these statements; each statement comprises an optional member list indented by one unit relative to the statement itself, and the indentation strictly represents the hierarchy of the structured scene description language and the position of each member within it.
CN202010061799.0A 2020-01-20 2020-01-20 Scene description method in autonomous unmanned system Active CN111243335B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010061799.0A CN111243335B (en) 2020-01-20 2020-01-20 Scene description method in autonomous unmanned system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010061799.0A CN111243335B (en) 2020-01-20 2020-01-20 Scene description method in autonomous unmanned system

Publications (2)

Publication Number Publication Date
CN111243335A (en) 2020-06-05
CN111243335B CN111243335B (en) 2023-03-24

Family

ID=70874727

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010061799.0A Active CN111243335B (en) 2020-01-20 2020-01-20 Scene description method in autonomous unmanned system

Country Status (1)

Country Link
CN (1) CN111243335B (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111783225A (en) * 2020-06-28 2020-10-16 北京百度网讯科技有限公司 Method and device for processing scenes in simulation system
CN111967124A (en) * 2020-06-30 2020-11-20 中汽数据有限公司 Generation method for universal amplification of intelligent automobile recombination scene
CN113947893A (en) * 2021-09-03 2022-01-18 网络通信与安全紫金山实验室 Method and system for restoring driving scene of automatic driving vehicle
CN115393980A (en) * 2022-08-25 2022-11-25 长城汽车股份有限公司 Recording method and device for automobile data recorder, vehicle and storage medium
CN115587501A (en) * 2022-11-09 2023-01-10 工业和信息化部装备工业发展中心 Method and device for constructing scene library for testing intelligent networked automobile

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106802954A (en) * 2017-01-18 2017-06-06 中国科学院合肥物质科学研究院 Unmanned vehicle semanteme cartographic model construction method and its application process on unmanned vehicle
CN109034120A (en) * 2018-08-27 2018-12-18 合肥工业大学 Scene understanding method towards smart machine independent behaviour
CN109446371A (en) * 2018-11-09 2019-03-08 苏州清研精准汽车科技有限公司 A kind of intelligent automobile emulation testing scene library generating method and test macro and method
CN109993849A (en) * 2019-03-22 2019-07-09 山东省科学院自动化研究所 A kind of automatic Pilot test scene render analog method, apparatus and system
CN110210280A (en) * 2019-03-01 2019-09-06 北京纵目安驰智能科技有限公司 A kind of over the horizon cognitive method, system, terminal and storage medium
CN110675476A (en) * 2019-09-25 2020-01-10 武汉光庭信息技术股份有限公司 Method and device for visually conveying definition of automatic driving scene
CN110688943A (en) * 2019-09-25 2020-01-14 武汉光庭信息技术股份有限公司 Method and device for automatically acquiring image sample based on actual driving data

Also Published As

Publication number Publication date
CN111243335B (en) 2023-03-24

Similar Documents

Publication Publication Date Title
CN111243335B (en) Scene description method in autonomous unmanned system
CN111919225B (en) Training, testing, and validating autonomous machines using a simulated environment
US10755007B2 (en) Mixed reality simulation system for testing vehicle control system designs
CN111582189B (en) Traffic signal lamp identification method and device, vehicle-mounted control terminal and motor vehicle
Dueholm et al. Trajectories and maneuvers of surrounding vehicles with panoramic camera arrays
KR102266996B1 (en) Method and apparatus for limiting object detection area in a mobile system equipped with a rotation sensor or a position sensor with an image sensor
US20210389133A1 (en) Systems and methods for deriving path-prior data using collected trajectories
CN115104138A (en) Multi-modal, multi-technology vehicle signal detection
CN113032261B (en) Simulation test method and device
Beck et al. Automated vehicle data pipeline for accident reconstruction: New insights from LiDAR, camera, and radar data
WO2021202784A1 (en) Systems and methods for augmenting perception data with supplemental information
WO2022086739A2 (en) Systems and methods for camera-lidar fused object detection
CN117056153A (en) Methods, systems, and computer program products for calibrating and verifying driver assistance systems and/or autopilot systems
JP2021082286A (en) System and method for improving lane change detection, and non-temporary computer-readable medium
Matsuda et al. A system for real-time on-street parking detection and visualization on an edge device
Kristoffersen et al. Towards semantic understanding of surrounding vehicular maneuvers: A panoramic vision-based framework for real-world highway studies
CN113189610A (en) Map-enhanced autonomous driving multi-target tracking method and related equipment
US20220172606A1 (en) Systems and Methods for Extracting Data From Autonomous Vehicles
Abdelhalim et al. Vt-lane: An exploratory study of an ad-hoc framework for real-time intersection turn count and trajectory reconstruction using nema phases-based virtual traffic lanes
WO2020073272A1 (en) Snapshot image to train an event detector
CN117274941B (en) Occupancy grid prediction method and device, intelligent equipment and storage medium
WO2020073271A1 (en) Snapshot image of traffic scenario
WO2020073270A1 (en) Snapshot image of traffic scenario
US20230084623A1 (en) Attentional sampling for long range detection in autonomous vehicles
US20230024799A1 (en) Method, system and computer program product for the automated locating of a vehicle

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant