CN112734923B - Automatic driving three-dimensional virtual scene construction method, device, equipment and storage medium

Automatic driving three-dimensional virtual scene construction method, device, equipment and storage medium

Info

Publication number
CN112734923B
Authority
CN
China
Prior art keywords
constructing
language information
target scene
dimensional
description language
Prior art date
2021-01-18
Legal status
Active
Application number
CN202110061838.1A
Other languages
Chinese (zh)
Other versions
CN112734923A (en)
Inventor
肖猛
Current Assignee
Guoqi Intelligent Control Beijing Technology Co Ltd
Original Assignee
Guoqi Intelligent Control Beijing Technology Co Ltd
Priority date
2021-01-18
Filing date
2021-01-18
Publication date
2024-05-24
Application filed by Guoqi Intelligent Control Beijing Technology Co Ltd
Priority to CN202110061838.1A
Publication of CN112734923A
Application granted
Publication of CN112734923B


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00: Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/05: Geographic models
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00: Manipulating 3D models or images for computer graphics
    • G06T19/003: Navigation within 3D models or images

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Geometry (AREA)
  • Remote Sensing (AREA)
  • Computer Graphics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The application provides a method, apparatus, and device for constructing a three-dimensional virtual scene for automatic driving, and a storage medium. The method comprises the following steps: acquiring description language information of a target scene; and constructing and rendering the target scene according to the description language information, based on a preset automatic driving three-dimensional human-machine interaction engine. According to the embodiments of the application, the construction efficiency of the three-dimensional virtual scene can be improved.

Description

Automatic driving three-dimensional virtual scene construction method, device, equipment and storage medium
Technical Field
The application belongs to the technical field of automatic driving, and particularly relates to a method and apparatus for constructing a three-dimensional virtual scene for automatic driving, an electronic device, and a computer storage medium.
Background
With the rapid development of automatic driving technology, human-machine interaction (HMI) has become a widely discussed topic in the automatic driving field.
Currently, constructing an automatic driving three-dimensional virtual scene requires a great deal of software development work, which makes three-dimensional virtual scene construction inefficient.
Therefore, how to improve the efficiency of three-dimensional virtual scene construction is a technical problem that needs to be solved by those skilled in the art.
Disclosure of Invention
The embodiments of the application provide a method and apparatus for constructing a three-dimensional virtual scene for automatic driving, an electronic device, and a computer storage medium, which can improve the efficiency of three-dimensional virtual scene construction.
In a first aspect, an embodiment of the present application provides a method for constructing a three-dimensional virtual scene for automatic driving, including:
acquiring description language information of a target scene;
constructing and rendering the target scene according to the description language information, based on a preset automatic driving three-dimensional human-machine interaction engine.
Optionally, the target scene includes a plurality of environmental elements, each environmental element being presented as a three-dimensional model.
Optionally, constructing and rendering the target scene according to the description language information, based on the preset automatic driving three-dimensional human-machine interaction engine, includes:
determining, based on the automatic driving three-dimensional human-machine interaction engine, the environmental elements corresponding to the description language information;
and constructing and rendering the target scene based on the environmental elements.
Optionally, when the description language information is path planning display information, constructing and rendering the target scene according to the description language information, based on the preset automatic driving three-dimensional human-machine interaction engine, includes:
constructing and rendering, based on the automatic driving three-dimensional human-machine interaction engine, a target planned path corresponding to the path planning display information.
Optionally, after constructing and rendering the target scene, the method further comprises:
Receiving a display mode adjustment instruction;
and adjusting the display mode of the target scene according to the display mode adjustment instruction.
Optionally, the method further comprises:
acquiring at least two real-time video streams;
and dynamically specifying the size and position of the corresponding display areas based on the at least two real-time video streams.
Optionally, acquiring the description language information of the target scene includes:
collecting voice information of a user;
and recognizing the user voice information to obtain the description language information.
In a second aspect, an embodiment of the present application provides a three-dimensional virtual scene construction apparatus for autopilot, including:
an acquisition module, configured to acquire the description language information of a target scene;
and a construction and rendering module, configured to construct and render the target scene according to the description language information, based on a preset automatic driving three-dimensional human-machine interaction engine.
Optionally, the target scene includes a plurality of environmental elements, each environmental element being presented as a three-dimensional model.
Optionally, the construction and rendering module is configured to determine, based on the automatic driving three-dimensional human-machine interaction engine, the environmental elements corresponding to the description language information, and to construct and render the target scene based on those environmental elements.
Optionally, when the description language information is path planning display information, the construction and rendering module is configured to construct and render, based on the automatic driving three-dimensional human-machine interaction engine, a target planned path corresponding to the path planning display information.
Optionally, the construction and rendering module is further configured to receive a display mode adjustment instruction and adjust the display mode of the target scene according to it.
Optionally, the acquisition module is further configured to acquire at least two real-time video streams and dynamically specify the size and position of the corresponding display areas.
Optionally, the acquisition module is configured to collect voice information of a user and recognize the user voice information to obtain the description language information.
In a third aspect, an embodiment of the present application provides an electronic device, including: a processor and a memory storing computer program instructions;
When executing the computer program instructions, the processor implements the method for constructing a three-dimensional virtual scene for automatic driving described in the first aspect.
In a fourth aspect, an embodiment of the present application provides a computer storage medium having stored thereon computer program instructions which, when executed by a processor, implement the method for constructing a three-dimensional virtual scene for automatic driving described in the first aspect.
The method and apparatus for constructing an automatic driving three-dimensional virtual scene, the electronic device, and the computer storage medium of the embodiments of the application can improve the efficiency of three-dimensional virtual scene construction. In the method, after the description language information of the target scene is acquired, the target scene is constructed and rendered according to that information on the basis of a preset automatic driving three-dimensional human-machine interaction engine. The engine thus constructs and renders the target scene automatically and in real time from the description language information, which improves the construction efficiency of the three-dimensional virtual scene.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings needed in the embodiments are briefly described below; those skilled in the art can obtain other drawings from these drawings without inventive effort.
FIG. 1 is a schematic flow chart of an automatic driving three-dimensional virtual scene construction method according to one embodiment of the present application;
FIG. 2 is a schematic structural diagram of an automatic driving three-dimensional virtual scene construction apparatus according to one embodiment of the present application;
Fig. 3 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
Features and exemplary embodiments of various aspects of the present application are described in detail below. In order to make the objects, technical solutions, and advantages of the present application more apparent, the present application is described in further detail with reference to the accompanying drawings and the detailed embodiments. It should be understood that the particular embodiments described herein are meant only to illustrate the application, not to limit it. It will be apparent to one skilled in the art that the present application may be practiced without some of these specific details. The following description of the embodiments is merely intended to provide a better understanding of the application by showing examples of it.
It is noted that relational terms such as first and second are used solely to distinguish one entity or action from another, without necessarily requiring or implying any actual such relationship or order between those entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to it. Without further limitation, an element introduced by the phrase "comprising a …" does not exclude the presence of other like elements in the process, method, article, or apparatus that comprises it.
In order to solve the problems in the prior art, the embodiment of the application provides a three-dimensional virtual scene construction method and device for automatic driving, electronic equipment and a computer storage medium. The method for constructing the three-dimensional virtual scene of the autopilot provided by the embodiment of the application is first described below.
Fig. 1 is a schematic flow chart of a method for constructing a three-dimensional virtual scene for automatic driving according to one embodiment of the present application. As shown in fig. 1, the method comprises the following steps:
S101, acquiring description language information of a target scene.
In one embodiment, the target scene includes a plurality of environmental elements, each of which is presented as a three-dimensional model. The environmental elements may include lane lines, surrounding vehicles (moving or stationary), parking spaces, pedestrians, cyclists, drivable areas, and the like.
In one embodiment, acquiring the description language information of the target scene includes: collecting voice information of a user, and recognizing the user voice information to obtain the description language information. It can be seen that this embodiment supports secondary development of user interface (UI) interaction and multimodal interaction through voice.
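As an illustration of this voice path, a minimal sketch is given below. It assumes the recognized utterance already arrives as plain text (this text does not name a speech recognition engine), and parse_scene_description together with all its field names is hypothetical:

    # A minimal sketch, assuming an off-the-shelf speech recognizer (not named
    # in this text) has already turned the user's voice into plain text; every
    # field name below is a hypothetical illustration.

    def parse_scene_description(utterance: str) -> dict:
        """Map a recognized voice command onto description-language fields."""
        description = {"scene": "highway", "mode": "day", "elements": []}
        if "night" in utterance:
            description["mode"] = "night"
        for element in ("pedestrian", "cyclist", "parking space"):
            if element in utterance:
                description["elements"].append(element)
        return description

    # parse_scene_description("highway at night with a pedestrian ahead")
    # -> {"scene": "highway", "mode": "night", "elements": ["pedestrian"]}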
The above-mentioned scene can be defined in a description language: the layout of the whole scene, and the type, state, and predicted behavior of every element in it, can all be described with a concise language mechanism.
For example, an HMI scenario for highway driving may be described as:
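The concrete listing is not reproduced in this text. Purely as an illustrative stand-in, such a highway description might look like the Python structure below, where every key name and value is an assumption rather than the actual description-language syntax:

    # Hypothetical description-language payload for a highway HMI scene;
    # the key names are illustrative assumptions, not the patented grammar.
    highway_scene = {
        "layout": {"road": "highway", "lanes": 3, "lane_width_m": 3.75},
        "mode": "day",
        "elements": [
            {"type": "ego_vehicle", "lane": 2, "speed_kmh": 100},
            {"type": "vehicle", "lane": 1, "speed_kmh": 90,
             "predicted_behavior": "lane_change_right"},  # predicted behavior
            {"type": "pedestrian", "position_m": [120.0, 8.5], "state": "walking"},
        ],
        "planned_path": [[0.0, 0.0], [50.0, 0.3], [100.0, 1.2]],  # waypoints
    }

A structure of this kind covers the three aspects named above: the overall layout, each element's type and state, and its predicted behavior.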
S102, constructing and rendering the target scene according to the description language information, based on a preset automatic driving three-dimensional human-machine interaction engine.
The preset automatic driving three-dimensional human-machine interaction engine can be implemented with Unity and Android technologies, and supports feeding in real-time data and keeping the scene updated in synchrony with it.
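The data-input interface is not specified here; one way to picture the real-time-data-in, scene-update-out behavior is a per-frame drain of a message queue, sketched below with the engine-side call scene.update_element assumed:

    import queue

    class SceneSynchronizer:
        """Keeps the rendered scene in step with real-time input data."""

        def __init__(self, scene):
            self.scene = scene
            self.inbox = queue.Queue()  # filled by the real-time data source

        def push(self, message: dict):
            """Called from the data-input side, e.g. a vehicle-data listener."""
            self.inbox.put(message)

        def on_frame(self):
            """Called once per rendered frame to apply all pending updates."""
            while not self.inbox.empty():
                message = self.inbox.get_nowait()
                # scene.update_element is an assumed engine-side API.
                self.scene.update_element(message["element_id"], message["state"])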
In one embodiment, constructing and rendering the target scene according to the description language information, based on the preset automatic driving three-dimensional human-machine interaction engine, includes: determining, based on the engine, the environmental elements corresponding to the description language information; and constructing and rendering the target scene based on those environmental elements. It can be seen that the environmental elements change synchronously with the description language information input in real time.
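Read this way, the determination step amounts to a lookup from description-language element types to three-dimensional model assets. The registry below sketches that idea; the asset paths and the engine.spawn call are assumptions:

    # Assumed mapping from element types to 3D model assets.
    MODEL_REGISTRY = {
        "ego_vehicle": "models/ego_car.fbx",
        "vehicle": "models/sedan.fbx",
        "pedestrian": "models/pedestrian.fbx",
        "cyclist": "models/cyclist.fbx",
        "lane_line": "models/lane_line.fbx",
    }

    def build_scene(description: dict, engine) -> list:
        """Instantiate one 3D model per environmental element described."""
        instances = []
        for spec in description["elements"]:
            asset = MODEL_REGISTRY.get(spec["type"])
            if asset is not None:
                instances.append(engine.spawn(asset, spec))  # engine API assumed
        return instances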
For example, the ego vehicle and other vehicles can travel smoothly according to the real-time execution data. Moreover, in one embodiment, special effects for roads, vehicles, and pedestrians in different states may be supported.
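Smooth travel from discrete execution-data updates is conventionally obtained by interpolating between the last two reported poses; that technique is not spelled out here, so the following is only a sketch of the standard approach:

    def lerp_pose(prev: dict, curr: dict, alpha: float) -> dict:
        """Linearly interpolate a vehicle pose between two real-time updates.

        prev and curr hold 'x' and 'y' in metres; alpha in [0, 1] is how far
        the render clock has advanced from prev's timestamp toward curr's.
        """
        return {
            "x": prev["x"] + alpha * (curr["x"] - prev["x"]),
            "y": prev["y"] + alpha * (curr["y"] - prev["y"]),
        }

    # Rendering at alpha = 0.5 places the vehicle halfway between the two
    # updates, so it glides instead of jumping when data arrives more slowly
    # than the frame rate.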
In one embodiment, when the description language information is path planning display information, constructing and rendering the target scene according to the description language information, based on the preset automatic driving three-dimensional human-machine interaction engine, includes: constructing and rendering, based on the engine, a target planned path corresponding to the path planning display information. It can be seen that this embodiment can support the path planning display of a typical automatic driving function.
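One plausible realization, assumed here rather than taken from the text above, is to turn the planned path's waypoints into a polyline that the engine draws over the road surface:

    def path_to_segments(waypoints):
        """Convert planned-path waypoints [[x, y], ...] into drawable segments."""
        return [(tuple(a), tuple(b)) for a, b in zip(waypoints, waypoints[1:])]

    # path_to_segments([[0, 0], [50, 0.3], [100, 1.2]])
    # -> [((0, 0), (50, 0.3)), ((50, 0.3), (100, 1.2))]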
In one embodiment, after the target scene is constructed and rendered, the method further comprises: receiving a display mode adjustment instruction, and adjusting the display mode of the target scene according to it. It can be seen that this embodiment can support both day and night display modes, and the parameters of the associated ambient atmosphere can be adjusted.
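A day/night switch with adjustable ambient-atmosphere parameters can be pictured as named presets; the values and the scene.set_lighting call below are assumptions for illustration:

    # Assumed ambient-atmosphere presets for the two display modes.
    DISPLAY_MODES = {
        "day":   {"sun_intensity": 1.00, "ambient_rgb": (0.90, 0.90, 0.95)},
        "night": {"sun_intensity": 0.05, "ambient_rgb": (0.10, 0.10, 0.20)},
    }

    def adjust_display_mode(scene, instruction: str):
        """Apply the preset named by a display mode adjustment instruction."""
        preset = DISPLAY_MODES[instruction]  # e.g. instruction == "night"
        scene.set_lighting(**preset)         # engine call assumed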
In one embodiment, the method further comprises: acquiring at least two real-time video streams, and dynamically specifying the size and position of their display areas. It can be seen that this embodiment can bring in at least two real-time video sources and resize and reposition their display areas on the fly.
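Dynamically sizing and placing the video display areas reduces to bookkeeping over normalized screen rectangles, as in this sketch (source names and fields assumed):

    # Display areas as normalized screen rectangles (0-1); all names assumed.
    video_layout = {
        "front_camera": {"x": 0.00, "y": 0.00, "width": 0.50, "height": 0.50},
        "rear_camera":  {"x": 0.50, "y": 0.00, "width": 0.50, "height": 0.50},
    }

    def set_viewport(layout: dict, source: str, x: float, y: float,
                     width: float, height: float):
        """Dynamically specify one video source's display area size and position."""
        layout[source] = {"x": x, "y": y, "width": width, "height": height}

    # e.g. shrink the rear camera into a corner thumbnail:
    # set_viewport(video_layout, "rear_camera", 0.75, 0.75, 0.25, 0.25)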
In this way, the method uses the preset automatic driving three-dimensional human-machine interaction engine to automatically construct and render the target scene in real time from the description language information, which improves the construction efficiency of the three-dimensional virtual scene. Developers only need the description mechanism provided by the engine to describe the algorithm results, which greatly reduces development difficulty and accelerates development. Moreover, scene description and engine rendering are thereby decoupled from each other and can be improved independently.
Fig. 2 is a schematic structural diagram of an automatic driving three-dimensional virtual scene construction apparatus according to an embodiment of the present application. As shown in fig. 2, the apparatus includes:
An acquisition module 201, configured to acquire description language information of a target scene;
The construction and rendering module 202 is configured to construct and render the target scene according to the description language information, based on a preset automatic driving three-dimensional human-machine interaction engine.
In one embodiment, the target scene includes a plurality of environmental elements, each of which is presented as a three-dimensional model.
In one embodiment, the construction and rendering module 202 is configured to determine, based on the automatic driving three-dimensional human-machine interaction engine, the environmental elements corresponding to the description language information, and to construct and render the target scene based on those elements.
In one embodiment, when the description language information is path planning display information, the construction and rendering module 202 is configured to construct and render, based on the automatic driving three-dimensional human-machine interaction engine, a target planned path corresponding to the path planning display information.
In one embodiment, the construction and rendering module 202 is further configured to receive a display mode adjustment instruction and adjust the display mode of the target scene according to it.
In one embodiment, the acquisition module 201 is further configured to acquire at least two real-time video streams and dynamically specify the size and position of the corresponding display areas.
In one embodiment, the acquisition module 201 is configured to collect voice information of a user and recognize the user voice information to obtain the description language information.
Each module/unit of the apparatus shown in fig. 2 implements a corresponding step in fig. 1 and can achieve the corresponding technical effect; for brevity, the details are not repeated here.
Fig. 3 shows a schematic structural diagram of an electronic device according to an embodiment of the present application.
The electronic device may comprise a processor 301 and a memory 302 storing computer program instructions.
In particular, the processor 301 may include a central processing unit (CPU) or an application-specific integrated circuit (ASIC), or may be configured as one or more integrated circuits implementing the embodiments of the present application.
Memory 302 may include mass storage for data or instructions. By way of example, and not limitation, memory 302 may comprise a hard disk drive (HDD), a floppy disk drive, flash memory, an optical disk, a magneto-optical disk, magnetic tape, or a universal serial bus (USB) drive, or a combination of two or more of these. Memory 302 may include removable or non-removable (or fixed) media, where appropriate. The memory 302 may be internal or external to the electronic device, where appropriate. In particular embodiments, memory 302 may be a non-volatile solid-state memory.
In one example, memory 302 may be read-only memory (ROM). In one example, the ROM may be mask-programmed ROM, programmable ROM (PROM), erasable PROM (EPROM), electrically erasable PROM (EEPROM), electrically alterable ROM (EAROM), or flash memory, or a combination of two or more of these.
The processor 301 reads and executes the computer program instructions stored in the memory 302 to implement any one of the automatic driving three-dimensional virtual scene construction methods of the above embodiments.
In one example, the electronic device may also include a communication interface 303 and a bus 310. As shown in fig. 3, the processor 301, the memory 302, and the communication interface 303 are connected to each other by a bus 310 and perform communication with each other.
The communication interface 303 is mainly used to implement communication between each module, device, unit and/or apparatus in the embodiment of the present application.
Bus 310 includes hardware, software, or both that couple the components of the electronic device to each other. By way of example, and not limitation, the bus may include an Accelerated Graphics Port (AGP) or other graphics bus, an Enhanced Industry Standard Architecture (EISA) bus, a Front Side Bus (FSB), a HyperTransport (HT) interconnect, an Industry Standard Architecture (ISA) bus, an InfiniBand interconnect, a Low Pin Count (LPC) bus, a memory bus, a Micro Channel Architecture (MCA) bus, a Peripheral Component Interconnect (PCI) bus, a PCI-Express (PCIe) bus, a Serial Advanced Technology Attachment (SATA) bus, a Video Electronics Standards Association local bus (VLB), or another suitable bus, or a combination of two or more of these. Bus 310 may include one or more buses, where appropriate. Although embodiments of the application have been described and illustrated with respect to a particular bus, the application contemplates any suitable bus or interconnect.
In addition, an embodiment of the present application provides a computer storage medium. The computer storage medium has computer program instructions stored thereon; when executed by a processor, the computer program instructions implement any one of the automatic driving three-dimensional virtual scene construction methods of the above embodiments.
It should be understood that the application is not limited to the particular arrangements and instrumentalities described above and shown in the drawings. For the sake of brevity, a detailed description of known methods is omitted here. In the above embodiments, several specific steps are described and shown as examples. The method processes of the present application are not limited to the specific steps described and shown; those skilled in the art may make various changes, modifications, and additions, or change the order between steps, after appreciating the spirit of the present application.
The functional blocks shown in the above-described structural block diagrams may be implemented in hardware, software, firmware, or a combination thereof. When implemented in hardware, they may be, for example, an electronic circuit, an application-specific integrated circuit (ASIC), suitable firmware, a plug-in, a function card, or the like. When implemented in software, the elements of the application are the programs or code segments used to perform the required tasks. The program or code segments may be stored in a machine-readable medium or transmitted over a transmission medium or communication link by a data signal carried in a carrier wave. A "machine-readable medium" may include any medium that can store or transfer information. Examples of machine-readable media include electronic circuits, semiconductor memory devices, ROM, flash memory, erasable ROM (EROM), floppy disks, CD-ROMs, optical disks, hard disks, fiber-optic media, radio-frequency (RF) links, and the like. The code segments may be downloaded via computer networks such as the Internet or an intranet.
It should also be noted that the exemplary embodiments mentioned in this disclosure describe some methods or systems based on a series of steps or devices. The present application is not limited to the order of the above-described steps, that is, the steps may be performed in the order mentioned in the embodiments, or may be performed in a different order from the order in the embodiments, or several steps may be performed simultaneously.
Aspects of the present application are described above with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the application. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, enable the implementation of the functions/acts specified in the flowchart and/or block diagram block or blocks. Such a processor may be, but is not limited to being, a general purpose processor, a special purpose processor, an application specific processor, or a field programmable logic circuit. It will also be understood that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware which performs the specified functions or acts, or combinations of special purpose hardware and computer instructions.
In the foregoing, only the specific embodiments of the present application are described, and it will be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes of the systems, modules and units described above may refer to the corresponding processes in the foregoing method embodiments, which are not repeated herein. It should be understood that the scope of the present application is not limited thereto, and any equivalent modifications or substitutions can be easily made by those skilled in the art within the technical scope of the present application, and they should be included in the scope of the present application.

Claims (9)

1. A method for constructing an automatic driving three-dimensional virtual scene, characterized by comprising the following steps:
acquiring description language information of a target scene;
constructing and rendering the target scene according to the description language information, based on a preset automatic driving three-dimensional human-machine interaction engine;
wherein the target scene comprises a plurality of environmental elements, each environmental element is presented as a three-dimensional model, the environmental elements change synchronously with the description language information input in real time, and the target scene can support special effects of roads, vehicles, and pedestrians in different states.
2. The method for constructing an automatic driving three-dimensional virtual scene according to claim 1, wherein the constructing and rendering of the target scene according to the description language information, based on the preset automatic driving three-dimensional human-machine interaction engine, comprises:
determining, based on the automatic driving three-dimensional human-machine interaction engine, the environmental element corresponding to the description language information according to the description language information;
and constructing and rendering the target scene based on the environmental element.
3. The method for constructing an automatic driving three-dimensional virtual scene according to claim 1, wherein, when the description language information is path planning display information, the constructing and rendering of the target scene according to the description language information, based on the preset automatic driving three-dimensional human-machine interaction engine, comprises:
constructing and rendering, based on the automatic driving three-dimensional human-machine interaction engine, a target planned path corresponding to the path planning display information according to the path planning display information.
4. The method for constructing an automatic driving three-dimensional virtual scene according to claim 1, wherein, after the constructing and rendering of the target scene, the method further comprises:
Receiving a display mode adjustment instruction;
and adjusting the display mode of the target scene according to the display mode adjustment instruction.
5. The method for constructing an automatic driving three-dimensional virtual scene according to claim 1, wherein the method further comprises:
acquiring at least two real-time video streams;
and dynamically specifying the size and position of the corresponding display areas based on the at least two real-time video streams.
6. The method for constructing an automatic driving three-dimensional virtual scene according to any one of claims 1 to 5, wherein the acquiring of the description language information of the target scene comprises:
collecting voice information of a user;
and recognizing the user voice information to obtain the description language information.
7. An apparatus for constructing an automatic driving three-dimensional virtual scene, characterized by comprising:
an acquisition module, configured to acquire description language information of a target scene;
and a construction and rendering module, configured to construct and render the target scene according to the description language information, based on a preset automatic driving three-dimensional human-machine interaction engine; wherein the target scene comprises a plurality of environmental elements, each environmental element is presented as a three-dimensional model, the environmental elements change synchronously with the description language information input in real time, and the target scene can support special effects of roads, vehicles, and pedestrians in different states.
8. An electronic device, the electronic device comprising: a processor and a memory storing computer program instructions;
wherein the processor, when executing the computer program instructions, implements the method for constructing an automatic driving three-dimensional virtual scene according to any one of claims 1 to 6.
9. A computer storage medium having stored thereon computer program instructions which, when executed by a processor, implement the method for constructing an automatic driving three-dimensional virtual scene according to any one of claims 1 to 6.
CN202110061838.1A 2021-01-18 2021-01-18 Automatic driving three-dimensional virtual scene construction method, device, equipment and storage medium Active CN112734923B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110061838.1A CN112734923B (en) 2021-01-18 2021-01-18 Automatic driving three-dimensional virtual scene construction method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110061838.1A CN112734923B (en) 2021-01-18 2021-01-18 Automatic driving three-dimensional virtual scene construction method, device, equipment and storage medium

Publications (2)

Publication Number Publication Date
CN112734923A (en) 2021-04-30
CN112734923B (en) 2024-05-24

Family

Family ID: 75592068

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110061838.1A Active CN112734923B (en) 2021-01-18 2021-01-18 Automatic driving three-dimensional virtual scene construction method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112734923B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115690286B (en) * 2022-10-19 2023-08-29 珠海云洲智能科技股份有限公司 Three-dimensional terrain generation method, terminal device and computer readable storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107622524A (en) * 2017-09-29 2018-01-23 百度在线网络技术(北京)有限公司 Display methods and display device for mobile terminal
CN109685904A (en) * 2017-10-18 2019-04-26 深圳市掌网科技股份有限公司 Virtual driving modeling method and system based on virtual reality
CN110779730A (en) * 2019-08-29 2020-02-11 浙江零跑科技有限公司 L3-level automatic driving system testing method based on virtual driving scene vehicle on-ring
CN110795818A (en) * 2019-09-12 2020-02-14 腾讯科技(深圳)有限公司 Method and device for determining virtual test scene, electronic equipment and storage medium
US10665030B1 (en) * 2019-01-14 2020-05-26 Adobe Inc. Visualizing natural language through 3D scenes in augmented reality

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8072470B2 (en) * 2003-05-29 2011-12-06 Sony Computer Entertainment Inc. System and method for providing a real-time three-dimensional interactive environment
US11397462B2 (en) * 2012-09-28 2022-07-26 Sri International Real-time human-machine collaboration using big data driven augmented reality technologies
US20170154468A1 (en) * 2015-12-01 2017-06-01 Le Holdings (Beijing) Co., Ltd. Method and electronic apparatus for constructing virtual reality scene model
US20200074230A1 (en) * 2018-09-04 2020-03-05 Luminar Technologies, Inc. Automatically generating training data for a lidar using simulated vehicles in virtual space

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107622524A (en) * 2017-09-29 2018-01-23 百度在线网络技术(北京)有限公司 Display methods and display device for mobile terminal
CN109685904A (en) * 2017-10-18 2019-04-26 深圳市掌网科技股份有限公司 Virtual driving modeling method and system based on virtual reality
US10665030B1 (en) * 2019-01-14 2020-05-26 Adobe Inc. Visualizing natural language through 3D scenes in augmented reality
CN110779730A (en) * 2019-08-29 2020-02-11 浙江零跑科技有限公司 L3-level automatic driving system testing method based on virtual driving scene vehicle on-ring
CN110795818A (en) * 2019-09-12 2020-02-14 腾讯科技(深圳)有限公司 Method and device for determining virtual test scene, electronic equipment and storage medium

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
"A new framework for automatic 3D scene construction from text description";Jiajie Lu等;《2010 IEEE International Conference on Progress in Informatics and Computing》;第964-968页 *
"Efficient Multi-Person Hierarchical 3D Pose Estimation for Autonomous Driving";Renshu Gu等;《2019 IEEE Conference on Multimedia Information Processing and Retrieval》;20191231;第163-168页 *
"交通场景的自动三维建模技术研究";王贤隆;《中国优秀硕士学位论文全文数据库 信息科技辑》;I138-1444 *
"基于多模态融合的自动驾驶感知及计算";张燕咏等;《计算机研究与发展》;20200901;第1781-1799页 *

Also Published As

Publication number Publication date
CN112734923A (en) 2021-04-30


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant