CN116092039B - Display control method and device of automatic driving simulation system


Info

Publication number
CN116092039B
Authority
CN
China
Prior art keywords
vehicle
information
determining
detected
tested
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202310369708.3A
Other languages
Chinese (zh)
Other versions
CN116092039A (en)
Inventor
刘金
贾双成
朱磊
万如
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhidao Network Technology Beijing Co Ltd
Original Assignee
Zhidao Network Technology Beijing Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhidao Network Technology Beijing Co Ltd
Priority to CN202310369708.3A
Publication of CN116092039A
Application granted
Publication of CN116092039B
Legal status: Active (current)

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/50: Context or environment of the image
    • G06V20/56: Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/10: Segmentation; Edge detection
    • G06T7/13: Edge detection
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/20: Image preprocessing
    • G06V10/26: Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/30: Subject of image; Context of image processing
    • G06T2207/30248: Vehicle exterior or interior
    • G06T2207/30252: Vehicle exterior; Vicinity of vehicle
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T: CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00: Road transport of goods or passengers
    • Y02T10/10: Internal combustion engine [ICE] based vehicles
    • Y02T10/40: Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Traffic Control Systems (AREA)

Abstract

The present application relates to a display control method and device for an automatic driving simulation system. The method comprises: obtaining image information containing a vehicle under test, collected by a vehicle-mounted camera installed on a target vehicle; inputting the image information into an image semantic segmentation model, performing image recognition and edge detection on it, and determining the model type and wheel edge information of the vehicle under test; then determining the heading of the vehicle under test based on a preset rule; and finally determining the display model category of the vehicle under test from the model type and controlling the display of the vehicle under test according to the heading. By determining the displayed model categories of surrounding vehicles through image semantic segmentation and edge detection, and determining their headings from the road driving rule information provided by a high-precision map, the method avoids the displayed models switching back and forth between transverse and longitudinal orientations and improves the display effect of the automatic driving simulation system.

Description

Display control method and device of automatic driving simulation system
Technical Field
The present application relates to the technical field of automatic driving, and in particular to a display control method and device for an automatic driving simulation system.
Background
With the development of automatic driving technology and the advance of digital cities, government attention to intelligent transportation continues to rise. To better demonstrate technologies such as automatic driving and vehicle-road coordination to government researchers, major automatic driving companies are paying increasing attention to developing automatic driving simulation systems. A simulation platform for real-time traffic display is an important part of this work: how to intuitively display the road network and the running states of surrounding vehicles (such as their model types, headings, and speeds) on a simulation platform has become a problem to be solved.
In the related art, an automatic driving simulation system can automatically update the model type and model heading according to the vehicle type, but current algorithms cannot accurately identify the headings of vehicles around the target vehicle: a surrounding vehicle is sometimes judged to be parallel to the target vehicle and sometimes perpendicular to it, so the displayed model occasionally switches back and forth between the transverse and longitudinal directions, which degrades the display effect of the automatic driving simulation system.
Disclosure of Invention
In order to overcome the problems in the related art, the present application provides a display control method and device for an automatic driving simulation system, which can improve the display effect of the automatic driving simulation system.
A first aspect of the present application provides a display control method for an automatic driving simulation system, comprising:
obtaining image information containing a vehicle under test, the image information being collected by a vehicle-mounted camera installed on a target vehicle;
inputting the image information into an image semantic segmentation model, performing image recognition and edge detection on the image information, and determining the model type and wheel edge information of the vehicle under test;
determining the heading of the vehicle under test based on a preset rule; and
determining the display model category of the vehicle under test according to the model type, and controlling the display of the vehicle under test according to the heading.
A second aspect of the present application provides a display control device for an automatic driving simulation system, comprising:
a first processing module, configured to obtain image information containing a vehicle under test, the image information being collected by a vehicle-mounted camera installed on a target vehicle;
a second processing module, configured to input the image information into an image semantic segmentation model, perform image recognition and edge detection on the image information, and determine the model type and wheel edge information of the vehicle under test;
a third processing module, configured to determine the heading of the vehicle under test based on a preset rule; and
a fourth processing module, configured to determine the display model category of the vehicle under test according to the model type and control the display of the vehicle under test according to the heading.
A third aspect of the present application provides an electronic device, comprising:
a processor; and
a memory having executable code stored thereon which, when executed by the processor, causes the processor to perform the display control method of the automatic driving simulation system described above.
A fourth aspect of the present application provides a non-transitory machine-readable storage medium having executable code stored thereon which, when executed by a processor of an electronic device, causes the processor to perform the display control method of the automatic driving simulation system described above.
The technical solution provided by the present application can have the following beneficial effects: image information containing a vehicle under test, collected by a vehicle-mounted camera installed on a target vehicle, is obtained; the image information is input into an image semantic segmentation model, image recognition and edge detection are performed on it, and the model type and wheel edge information of the vehicle under test are determined; the heading of the vehicle under test is then determined based on a preset rule; finally, the display model category of the vehicle under test is determined from the model type, and the display of the vehicle under test is controlled according to the heading. In this technical solution, the displayed model categories of surrounding vehicles are determined through image semantic segmentation and edge detection, and their headings are determined from the road driving rule information provided by the high-precision map, which avoids back-and-forth switching between transverse and longitudinal orientations and improves the display effect of the automatic driving simulation system.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application.
Drawings
The foregoing and other objects, features, and advantages of the present application will become more apparent from the following more detailed description of exemplary embodiments of the present application with reference to the accompanying drawings, in which like reference numbers generally represent like parts throughout the exemplary embodiments.
FIG. 1 is a flowchart of a display control method of an automatic driving simulation system according to an embodiment of the present application;
FIG. 2 is another flowchart of a display control method of an automatic driving simulation system according to an embodiment of the present application;
FIG. 3 is a schematic illustration of the display of vehicles under test around a target vehicle in an automatic driving simulation system according to an embodiment of the present application;
FIG. 4 is another schematic illustration of the display of vehicles under test around a target vehicle in an automatic driving simulation system according to an embodiment of the present application;
FIG. 5 is another schematic illustration of the display of vehicles under test around a target vehicle in an automatic driving simulation system according to an embodiment of the present application;
FIG. 6 is another schematic illustration of the display of vehicles under test around a target vehicle in an automatic driving simulation system according to an embodiment of the present application;
FIG. 7 is another schematic illustration of the display of vehicles under test around a target vehicle in an automatic driving simulation system according to an embodiment of the present application;
FIG. 8 is another schematic illustration of the display of vehicles under test around a target vehicle in an automatic driving simulation system according to an embodiment of the present application;
FIG. 9 is a schematic diagram of a display control interface of an automatic driving simulation system according to an embodiment of the present application;
FIG. 10 is a schematic structural diagram of a display control device of an automatic driving simulation system according to an embodiment of the present application;
FIG. 11 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
Preferred embodiments of the present application will be described in more detail below with reference to the accompanying drawings. While the preferred embodiments of the present application are shown in the drawings, it should be understood that the present application may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
The terminology used in the present application is for the purpose of describing particular embodiments only and is not intended to be limiting of the present application. As used in this application and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any or all possible combinations of one or more of the associated listed items.
It should be understood that although the terms "first," "second," "third," etc. may be used herein to describe various information, such information should not be limited by these terms. These terms are only used to distinguish information of the same type from one another. For example, without departing from the scope of the present application, first information may also be referred to as second information, and similarly, second information may also be referred to as first information. Thus, a feature defined by "first" or "second" may explicitly or implicitly include one or more of that feature. In the description of the present application, "a plurality" means two or more, unless explicitly defined otherwise.
In view of the problems in the related art, embodiments of the present application provide a display control method and device for an automatic driving simulation system, in which information about surrounding vehicles and their driving state is identified through vehicle-mounted cameras, and the displayed model and heading of each surrounding vehicle are determined through semantic segmentation, edge detection, and the road driving rule information provided by a high-precision map, thereby improving the display effect of the automatic driving simulation system.
It should be emphasized that the present application assumes that surrounding vehicles are driven in compliance with traffic rules; special cases that grossly violate traffic rules, such as driving against traffic at high speed, are not considered for now.
The following describes the technical scheme of the embodiments of the present application in detail with reference to the accompanying drawings.
FIG. 1 is a flowchart of a display control method of an automatic driving simulation system according to an embodiment of the present application.
Referring to FIG. 1, an embodiment of the present application provides a display control method for an automatic driving simulation system, comprising the following steps:
S11: obtain image information containing a vehicle under test, the image information being collected by a vehicle-mounted camera installed on the target vehicle.
The image information may contain multiple vehicles under test, and the model type and heading of each of them need to be determined. During normal driving, the vehicle-mounted cameras installed around the vehicle body collect image information of surrounding vehicles; the image information collected by the front, rear, left, and right cameras is recorded separately, because the specific embodiments below depend on which camera collected the image information containing the vehicle under test.
S12: input the image information into an image semantic segmentation model, perform image recognition on the image information, and determine the model type and wheel edge information of the vehicle under test.
The image semantic segmentation model can recognize the image information directly once it has been trained on pre-annotated pictures (a large number of pictures whose contents are labeled with annotation information and type labels). In practical use, the obtained image information is simply input into the model, which outputs the type of vehicle under test contained in it; for example, if a passenger car is recognized, the model displayed in the automatic driving simulation system is a passenger car. The image semantic segmentation model can also identify the wheels of the vehicle under test, and edge detection is then applied to obtain the wheel edge information.
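For illustration, the following is a minimal sketch of how step S12 could look, assuming a torchvision segmentation backbone fine-tuned on such annotated pictures; the class labels, number of classes, and preprocessing are assumptions of this sketch, not details disclosed in the patent.

```python
import numpy as np
import torch
import torchvision

# Hypothetical label ids of a model trained on annotated frames.
VEHICLE_CLASSES = {1: "car", 2: "suv", 3: "truck", 4: "non_motor"}
WHEEL_CLASS = 5

# 6 classes: background + 4 vehicle types + wheel; weights are random here,
# in practice the network would be fine-tuned on the annotated pictures.
model = torchvision.models.segmentation.deeplabv3_resnet50(num_classes=6)
model.eval()

def segment_frame(frame_bgr: np.ndarray):
    """Return (vehicle_type, wheel_mask) for one camera frame (H x W x 3, BGR)."""
    rgb = frame_bgr[:, :, ::-1].copy()                     # BGR -> RGB
    x = torch.from_numpy(rgb).permute(2, 0, 1).float().unsqueeze(0) / 255.0
    with torch.no_grad():
        labels = model(x)["out"].argmax(dim=1)[0].numpy()  # per-pixel class ids
    vehicle_pixels = labels[np.isin(labels, list(VEHICLE_CLASSES))]
    vehicle_type = None
    if vehicle_pixels.size:
        # The majority vehicle label decides the displayed model type.
        vehicle_type = VEHICLE_CLASSES[int(np.bincount(vehicle_pixels).argmax())]
    wheel_mask = (labels == WHEEL_CLASS).astype(np.uint8)  # 1 where wheel pixels
    return vehicle_type, wheel_mask
```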
S13: and determining the head orientation of the vehicle to be tested based on a preset rule.
The determining the head orientation of the vehicle to be tested based on the preset rule comprises determining the head orientation of the vehicle to be tested based on the road driving rule information according to the wheel edge information. The road driving rule information is provided by a high-precision map, the wheel edge information is the edge information of the wheels of the vehicle to be detected by utilizing edge detection, the wheels of the vehicle to be detected can be subjected to ellipse fitting according to a set fitting rule by utilizing edge pixel coordinates, the set fitting rule can be used for acquiring partial data of the outer surface of the wheels of the vehicle to be detected, whether the outer side of the vehicle or the side of the wheels contacting the ground is determined by utilizing luminosity errors, and the outer surface of the wheels is cut out by utilizing luminosity errors so as to extract the edge information of the wheels of the vehicle to be detected.
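A sketch of the wheel edge extraction and ellipse fitting described above, using OpenCV; the Canny thresholds and the contour-selection heuristic are illustrative assumptions:

```python
import cv2
import numpy as np

def fit_wheel_ellipse(frame_gray: np.ndarray, wheel_mask: np.ndarray):
    """Fit the wheel edge pixels with an ellipse; return ((cx, cy), (w, h), angle) or None."""
    edges = cv2.Canny(frame_gray, 50, 150)        # illustrative thresholds
    edges &= wheel_mask * 255                     # keep only edges on the wheel
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    if not contours:
        return None
    contour = max(contours, key=cv2.contourArea)  # assume the largest blob is the rim
    if len(contour) < 5:                          # cv2.fitEllipse needs at least 5 points
        return None
    return cv2.fitEllipse(contour)
```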
Specifically, if the vehicle under test follows the same lane driving rule as the target vehicle, its heading is parallel to the heading of the target vehicle; if their lane driving rules differ, its heading is perpendicular to the heading of the target vehicle. In particular, the heading of the vehicle under test can be refined according to which direction's vehicle-mounted camera collected the image information.
S14: determine the display model category of the vehicle under test according to the model type, and control the display of the vehicle under test according to the heading.
In the present application, the displayed model categories of surrounding vehicles are determined through image semantic segmentation and edge detection, and their headings are determined based on the road driving rule information provided by the high-precision map; once the model category and heading to be displayed in the automatic driving simulation system are determined, the display of the vehicle under test is controlled accordingly.
The embodiment of the present application thus provides a display control method for an automatic driving simulation system: image information containing a vehicle under test, collected by a vehicle-mounted camera installed on a target vehicle, is obtained; the image information is input into an image semantic segmentation model, image recognition and edge detection are performed on it, and the model type and wheel edge information of the vehicle under test are determined; the heading of the vehicle under test is then determined based on a preset rule; finally, the display model category is determined from the model type and the display of the vehicle under test is controlled according to the heading. Determining the displayed model categories of surrounding vehicles through image semantic segmentation and edge detection, and determining headings from the road driving rule information provided by the high-precision map, avoids back-and-forth switching between transverse and longitudinal orientations and improves the display effect of the automatic driving simulation system.
FIG. 2 is another flowchart of a display control method of an automatic driving simulation system according to an embodiment of the present application.
Referring to FIG. 2, an embodiment of the present application provides a display control method for an automatic driving simulation system, which specifically comprises the following steps:
S21: obtain image information containing a vehicle under test, the image information being collected by a vehicle-mounted camera installed on the target vehicle.
The image information may contain multiple vehicles under test, and the model type and heading of each of them need to be determined. During normal driving, the vehicle-mounted cameras installed around the vehicle body collect image information of surrounding vehicles; the image information collected by the front, rear, left, and right cameras is recorded separately, because the following steps depend on which camera collected the image information containing the vehicle under test.
S22: input the image information into an image semantic segmentation model, perform image recognition on the image information, and determine the model type and wheel edge information of the vehicle under test.
The image semantic segmentation model can recognize the image information directly once it has been trained on pre-annotated pictures; the obtained image information is input into the model, which outputs the type of vehicle under test contained in it, and the model can also identify the wheels of the vehicle under test, to which edge detection is applied to obtain the wheel edge information.
It should be noted that the model categories displayed in the automatic driving simulation system may include, but are not limited to, cars, SUVs, trucks, non-motor vehicles, and the like.
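For instance, the mapping from the recognized type to the displayed 3D asset could be a simple lookup table; the asset paths below are hypothetical, since the patent does not specify how display models are stored:

```python
# Hypothetical asset paths for the displayed model categories.
DISPLAY_MODELS = {
    "car": "assets/models/sedan.glb",
    "suv": "assets/models/suv.glb",
    "truck": "assets/models/truck.glb",
    "non_motor": "assets/models/bicycle.glb",
}
```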
S23: determining whether the wheels of the vehicle to be tested can be fitted into ellipses according to the wheel edge information, and if so, executing a step S24; if not, step S25 is performed.
The road driving rule information is provided by a high-precision map, the wheel edge information is edge information of wheels of the vehicle to be detected by utilizing edge detection, the wheels of the vehicle to be detected can be subjected to ellipse fitting according to a set fitting rule by utilizing edge pixel coordinates, the set fitting rule can be used for acquiring partial data of the outer surfaces of the wheels of the vehicle to be detected, whether the outer sides of the vehicle or the wheels are contacted with the ground is determined by utilizing luminosity errors, and the outer surfaces of the wheels are cut out by utilizing luminosity errors so as to extract the edge information of the wheels of the vehicle to be detected.
S24: calculating the ellipticity of the wheels of the vehicle to be measured according to the wheel edge information, and determining the head orientation of the vehicle to be measured comprises:
under the condition that the picture information is acquired by the left vehicle-mounted camera or the right vehicle-mounted camera, the head orientation of the vehicle to be detected is determined to be parallel to the head orientation of the target vehicle based on the road driving rule information. As shown in fig. 3, if the picture information is acquired by the right vehicle-mounted camera, it may be determined that the head direction of the vehicle 1 to be measured is parallel and in the same direction as the head direction of the target vehicle based on the road driving rule information. If the image information is acquired by the left vehicle-mounted camera, the head direction of the vehicle 2 to be detected can be determined to be parallel and opposite to the head direction of the target vehicle based on the road driving rule information. Fig. 3 only shows the state shown in fig. 4 in the case where the vehicle under test 1 and the vehicle under test 2 are in the same position as the target vehicle.
Under the condition that the picture information is acquired by front and rear vehicle-mounted cameras, the head orientation of the vehicle to be detected is determined according to the ellipticity based on road driving rule information.
In some embodiments, determining the heading of the vehicle under test according to the ellipticity based on the road driving rule information comprises the following (a sketch of this comparison follows the list):
judging whether the ellipticity is greater than or equal to a preset threshold;
when the ellipticity is greater than or equal to the preset threshold, determining, based on the road driving rule information, that the heading of the vehicle under test is parallel to the heading of the target vehicle;
when the ellipticity is less than the preset threshold, determining, based on the road driving rule information, that the heading of the vehicle under test is perpendicular to the heading of the target vehicle.
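The sketch below applies this rule to an ellipse returned by the fitting step above; the definition of ellipticity as the minor-to-major axis ratio and the threshold value are assumptions, since the patent only states that the ellipticity is compared against a preset threshold:

```python
ELLIPTICITY_THRESHOLD = 0.35  # illustrative preset value

def heading_from_ellipse(ellipse) -> str:
    """Apply the threshold rule to an ellipse returned by cv2.fitEllipse."""
    (_cx, _cy), (w, h), _angle = ellipse
    ellipticity = min(w, h) / max(w, h)  # assumed definition: minor/major axis ratio
    # Per the rule above: at or above the preset threshold the heading is taken
    # as parallel to the target vehicle, below it as perpendicular; either way
    # the result stays consistent with the road rules from the high-precision map.
    return "parallel" if ellipticity >= ELLIPTICITY_THRESHOLD else "perpendicular"
```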
As shown in FIG. 5, if the image information is collected by the front vehicle-mounted camera and the ellipticity is greater than or equal to the preset threshold, the heading of vehicle under test 1 is determined, based on the road driving rule information, to be parallel to the heading of the target vehicle. As shown in FIG. 6, if the image information is collected by the front vehicle-mounted camera and the ellipticity is less than the preset threshold, the heading of vehicle under test 1 is determined to be perpendicular to the heading of the target vehicle.
S25: determine the heading of the vehicle under test based on the road driving rule information.
In some embodiments, when the driving lane is a one-way road:
if the image information is collected by the front or rear vehicle-mounted camera, it is determined that the heading of the vehicle under test is parallel to and the same as the heading of the target vehicle;
if the image information is collected by the left or right vehicle-mounted camera, then:
if the wheel information of the vehicle under test can be acquired, it is determined that the heading of the vehicle under test is parallel to the heading of the target vehicle;
if the wheel information of the vehicle under test cannot be acquired, it is determined that the heading of the vehicle under test is perpendicular to the heading of the target vehicle.
When the driving lane is a one-way road, as shown in FIG. 7: if the image information is collected by the front or rear vehicle-mounted camera, the heading of vehicle under test 1 is determined to be parallel to and the same as the heading of the target vehicle; if the image information is collected by the right vehicle-mounted camera and the wheel information of vehicle under test 2 cannot be acquired, the heading of vehicle under test 2 is determined to be perpendicular to the heading of the target vehicle; and if the image information is collected by the right vehicle-mounted camera and the wheel information of vehicle under test 3 can be acquired, the heading of vehicle under test 3 is determined to be parallel to the heading of the target vehicle.
In some embodiments, when the driving lane is not a one-way road:
if the image information is collected by the front or rear vehicle-mounted camera, it is determined that the heading of the vehicle under test is parallel to the heading of the target vehicle;
if the image information is collected by the left or right vehicle-mounted camera, then:
if the wheel information of the vehicle under test can be acquired, it is determined that the heading of the vehicle under test is parallel to the heading of the target vehicle;
if the wheel information of the vehicle under test cannot be acquired, it is determined that the heading of the vehicle under test is perpendicular to the heading of the target vehicle.
When the driving lane is not a one-way road, as shown in FIG. 8: if the image information is collected by the front or rear vehicle-mounted camera, the heading of vehicle under test 1 is determined to be parallel to the heading of the target vehicle; if the image information is collected by the right vehicle-mounted camera and the wheel information of vehicle under test 2 cannot be acquired, the heading of vehicle under test 2 is determined to be perpendicular to the heading of the target vehicle; and if the image information is collected by the right vehicle-mounted camera and the wheel information of vehicle under test 3 can be acquired, the heading of vehicle under test 3 is determined to be parallel to the heading of the target vehicle. These fallback rules are summarized in the sketch below.
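The fallback rules of S25 for both lane types reduce to a small decision table; the following sketch summarizes them, with hypothetical function and parameter names:

```python
def heading_without_ellipse(camera: str, one_way: bool, wheels_visible: bool) -> str:
    """camera is one of 'front', 'rear', 'left', 'right' (hypothetical names)."""
    if camera in ("front", "rear"):
        # On a one-way road the heading is additionally known to be the same
        # direction as the target vehicle; otherwise only parallel.
        return "parallel_same_direction" if one_way else "parallel"
    # Left/right cameras: visible wheels mean the vehicle is seen side-on
    # (parallel heading); no visible wheels mean it is seen head- or tail-on
    # (perpendicular heading).
    return "parallel" if wheels_visible else "perpendicular"
```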
S26: and determining the model type of the vehicle to be tested according to the model type, and performing display control on the vehicle to be tested according to the head direction.
In some embodiments, the information of surrounding vehicles and drivers is identified through the vehicle-mounted camera, the display model type and the head orientation of the surrounding vehicles are determined through semantic segmentation, edge detection and road driving rule information provided by the high-precision map, after the model type and the head orientation displayed in the automatic driving simulation system are determined, the vehicle to be tested is displayed and controlled according to the model type and the head orientation, and the specific overall display can be seen in fig. 9.
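Putting S26 together, the display update could look like the following sketch; the SimulationView-style interface (set_model, set_heading) is invented for illustration, as the patent does not specify a display API:

```python
from dataclasses import dataclass

@dataclass
class DetectedVehicle:
    vehicle_id: int
    model_asset: str  # e.g. DISPLAY_MODELS["car"] from the mapping above
    heading: str      # "parallel", "parallel_same_direction" or "perpendicular"

def update_display(view, vehicles: list) -> None:
    """Push each vehicle's model and heading to the (hypothetical) simulation view."""
    for v in vehicles:
        view.set_model(v.vehicle_id, v.model_asset)  # show the matching 3D model
        view.set_heading(v.vehicle_id, v.heading)    # heading fixed by the rules above
```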
The embodiment of the present application thus provides a display control method for an automatic driving simulation system: image information containing a vehicle under test, collected by a vehicle-mounted camera installed on a target vehicle, is obtained; the image information is input into an image semantic segmentation model, image recognition and edge detection are performed on it, and the model type and wheel edge information of the vehicle under test are determined; the heading of the vehicle under test is then determined from the wheel edge information based on the road driving rule information; finally, the display model category is determined from the model type and the display of the vehicle under test is controlled according to the heading. Determining the displayed model categories of surrounding vehicles through image semantic segmentation and edge detection, and determining headings from the road driving rule information provided by the high-precision map, avoids back-and-forth switching between transverse and longitudinal orientations and improves the display effect of the automatic driving simulation system.
Corresponding to the foregoing method embodiments, the present application also provides a display control device for an automatic driving simulation system, an electronic device, and corresponding embodiments.
FIG. 10 is a schematic structural diagram of a display control device of an automatic driving simulation system according to an embodiment of the present application.
Referring to FIG. 10, an embodiment of the present application provides a display control device for an automatic driving simulation system, which specifically comprises a first processing module 101, a second processing module 102, a third processing module 103, and a fourth processing module 104, wherein:
the first processing module 101 is configured to obtain image information containing a vehicle under test, the image information being collected by a vehicle-mounted camera installed on a target vehicle;
the second processing module 102 is configured to input the image information into an image semantic segmentation model, perform image recognition and edge detection on the image information, and determine the model type and wheel edge information of the vehicle under test;
the third processing module 103 is configured to determine the heading of the vehicle under test based on a preset rule; and
the fourth processing module 104 is configured to determine the display model category of the vehicle under test according to the model type and control the display of the vehicle under test according to the heading.
In a specific embodiment, the third processing module 103 is specifically configured to:
determine the heading of the vehicle under test from the wheel edge information based on the road driving rule information.
In a specific embodiment, the third processing module 103 is further specifically configured to:
determine, from the wheel edge information, whether the wheels of the vehicle under test can be fitted to an ellipse;
if the wheels of the vehicle under test can be fitted to an ellipse, calculate the ellipticity of the wheels according to the wheel edge information, and determine the heading of the vehicle under test, which comprises:
when the image information is collected by the left or right vehicle-mounted camera, determining, based on the road driving rule information, that the heading of the vehicle under test is parallel to the heading of the target vehicle;
when the image information is collected by the front or rear vehicle-mounted camera, determining the heading of the vehicle under test according to the ellipticity, based on the road driving rule information.
The specific manner in which the respective modules perform operations in the device of the above embodiments has been described in detail in the method embodiments and will not be repeated here.
FIG. 11 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Referring to FIG. 11, an electronic device 1100 includes a memory 1110 and a processor 1120.
The processor 1120 may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The memory 1110 may include various types of storage units, such as system memory, read-only memory (ROM), and persistent storage. The ROM may store static data or instructions needed by the processor 1120 or other modules of the computer. The persistent storage may be a readable and writable storage device, and may be a non-volatile device that retains stored instructions and data even after the computer is powered down. In some embodiments, the persistent storage is a mass storage device (e.g., a magnetic or optical disk, or flash memory); in other embodiments, it may be a removable storage device (e.g., a diskette or an optical drive). The system memory may be a read-write memory device or a volatile read-write memory device, such as dynamic random access memory, and may store the instructions and data that some or all of the processors need at runtime. Furthermore, the memory 1110 may comprise any combination of computer-readable storage media, including various types of semiconductor memory chips (DRAM, SRAM, SDRAM, flash memory, programmable read-only memory), and magnetic and/or optical disks may also be employed. In some embodiments, the memory 1110 may include a readable and/or writable removable storage device, such as a compact disc (CD), a digital versatile disc (e.g., DVD-ROM or dual-layer DVD-ROM), a read-only Blu-ray disc, a super-density disc, a flash memory card (e.g., an SD card, a mini SD card, or a micro-SD card), or a magnetic floppy disk. Computer-readable storage media do not include carrier waves or transient electronic signals transmitted wirelessly or over wires.
The memory 1110 stores executable code which, when processed by the processor 1120, causes the processor 1120 to perform some or all of the methods described above.
The aspects of the present application have been described in detail above with reference to the accompanying drawings. The description of each of the foregoing embodiments has its own emphasis; for the portions of one embodiment that are not described in detail, reference may be made to the related descriptions of other embodiments. Those skilled in the art will also appreciate that the acts and modules referred to in the specification are not necessarily required by the present application. In addition, it can be understood that the steps in the methods of the embodiments of the present application may be reordered, combined, and pruned according to actual needs, and the modules in the devices of the embodiments of the present application may be combined, divided, and pruned according to actual needs.
Furthermore, the method according to the present application may also be implemented as a computer program or computer program product comprising computer program code instructions for performing part or all of the steps of the above-described method of the present application.
Alternatively, the present application may also be embodied as a non-transitory machine-readable storage medium (or computer-readable storage medium, or machine-readable storage medium) having stored thereon executable code (or a computer program, or computer instruction code) which, when executed by a processor of an electronic device (or server, etc.), causes the processor to perform some or all of the steps of the above-described methods according to the present application.
Those of skill would further appreciate that the various illustrative logical blocks, modules, circuits, and algorithm steps described in connection with the application herein may be implemented as electronic hardware, computer software, or combinations of both.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems and methods according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The embodiments of the present application have been described above. The foregoing description is exemplary, not exhaustive, and is not limited to the disclosed embodiments. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein was chosen to best explain the principles of the embodiments and their practical application or improvement over technologies in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.

Claims (8)

1. A display control method for an automatic driving simulation system, comprising:
obtaining image information containing a vehicle under test, the image information being collected by a vehicle-mounted camera installed on a target vehicle;
inputting the image information into an image semantic segmentation model, performing image recognition and edge detection on the image information, and determining the model type and wheel edge information of the vehicle under test;
determining the heading of the vehicle under test from the wheel edge information based on road driving rule information, wherein the road driving rule information is provided by a high-precision map and is used to represent that the vehicle under test complies with traffic rules; and
determining the display model category of the vehicle under test according to the model type, and controlling the display of the vehicle under test according to the heading;
wherein determining the heading of the vehicle under test from the wheel edge information based on the road driving rule information comprises:
determining, from the wheel edge information, whether the wheels of the vehicle under test can be fitted to an ellipse;
if the wheels of the vehicle under test can be fitted to an ellipse, calculating the ellipticity of the wheels according to the wheel edge information, and determining the heading of the vehicle under test, which comprises:
when the image information is collected by a left or right vehicle-mounted camera, determining, based on the road driving rule information, that the heading of the vehicle under test is parallel to the heading of the target vehicle;
when the image information is collected by a front or rear vehicle-mounted camera, determining the heading of the vehicle under test according to the ellipticity, based on the road driving rule information.
2. The display control method according to claim 1, wherein determining the heading of the vehicle under test according to the ellipticity based on the road driving rule information comprises:
judging whether the ellipticity is greater than or equal to a preset threshold;
when the ellipticity is greater than or equal to the preset threshold, determining, based on the road driving rule information, that the heading of the vehicle under test is parallel to the heading of the target vehicle;
when the ellipticity is less than the preset threshold, determining, based on the road driving rule information, that the heading of the vehicle under test is perpendicular to the heading of the target vehicle.
3. The display control method according to claim 1, further comprising:
if the wheels of the vehicle under test cannot be fitted to an ellipse, determining the heading of the vehicle under test based on the road driving rule information.
4. The display control method according to claim 3, wherein, when the driving lane is a one-way road:
if the image information is collected by a front or rear vehicle-mounted camera, it is determined that the heading of the vehicle under test is the same as the heading of the target vehicle;
if the image information is collected by a left or right vehicle-mounted camera, then:
if the wheel information of the vehicle under test can be acquired, it is determined that the heading of the vehicle under test is parallel to the heading of the target vehicle;
if the wheel information of the vehicle under test cannot be acquired, it is determined that the heading of the vehicle under test is perpendicular to the heading of the target vehicle.
5. The display control method according to claim 3, wherein, when the driving lane is not a one-way road:
if the image information is collected by a front or rear vehicle-mounted camera, it is determined that the heading of the vehicle under test is parallel to the heading of the target vehicle;
if the image information is collected by a left or right vehicle-mounted camera, then:
if the wheel information of the vehicle under test can be acquired, it is determined that the heading of the vehicle under test is parallel to the heading of the target vehicle;
if the wheel information of the vehicle under test cannot be acquired, it is determined that the heading of the vehicle under test is perpendicular to the heading of the target vehicle.
6. A display control device for an automatic driving simulation system, comprising:
a first processing module, configured to obtain image information containing a vehicle under test, the image information being collected by a vehicle-mounted camera installed on a target vehicle;
a second processing module, configured to input the image information into an image semantic segmentation model, perform image recognition and edge detection on the image information, and determine the model type and wheel edge information of the vehicle under test;
a third processing module, configured to determine the heading of the vehicle under test from the wheel edge information based on road driving rule information, wherein the road driving rule information is provided by a high-precision map and is used to represent that the vehicle under test complies with traffic rules; and
a fourth processing module, configured to determine the display model category of the vehicle under test according to the model type and control the display of the vehicle under test according to the heading;
wherein the third processing module is specifically configured to:
determine, from the wheel edge information, whether the wheels of the vehicle under test can be fitted to an ellipse;
if the wheels of the vehicle under test can be fitted to an ellipse, calculate the ellipticity of the wheels according to the wheel edge information, and determine the heading of the vehicle under test, which comprises:
when the image information is collected by a left or right vehicle-mounted camera, determining, based on the road driving rule information, that the heading of the vehicle under test is parallel to the heading of the target vehicle;
when the image information is collected by a front or rear vehicle-mounted camera, determining the heading of the vehicle under test according to the ellipticity, based on the road driving rule information.
7. An electronic device, comprising:
a processor; and
a memory having executable code stored thereon which, when executed by the processor, causes the processor to perform the display control method of the automatic driving simulation system according to any one of claims 1-5.
8. A non-transitory machine-readable storage medium having executable code stored thereon which, when executed by a processor of an electronic device, causes the processor to perform the display control method of the automatic driving simulation system according to any one of claims 1-5.
CN202310369708.3A (priority date 2023-04-10, filing date 2023-04-10): Display control method and device of automatic driving simulation system. Status: Active. Granted publication: CN116092039B.

Priority Applications (1)

Application number: CN202310369708.3A
Title: Display control method and device of automatic driving simulation system (CN116092039B)

Applications Claiming Priority (1)

Application number: CN202310369708.3A
Title: Display control method and device of automatic driving simulation system (CN116092039B)

Publications (2)

Publication Number Publication Date
CN116092039A CN116092039A (en) 2023-05-09
CN116092039B true CN116092039B (en) 2023-06-16

Family

ID=86204918

Family Applications (1)

Application number: CN202310369708.3A (Active)
Title: Display control method and device of automatic driving simulation system (granted as CN116092039B)

Country Status (1)

Country Link
CN (1) CN116092039B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2005096647A (en) * 2003-09-25 2005-04-14 Auto Network Gijutsu Kenkyusho:Kk On-vehicle display device
CN112071092A (en) * 2020-09-21 2020-12-11 汪秒 Road traffic intelligent mitigation control system based on cloud computing
CN112572437A (en) * 2020-12-18 2021-03-30 北京京东乾石科技有限公司 Method, device and equipment for determining driving direction and storage medium

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4326992B2 (en) * 2004-03-17 2009-09-09 アルパイン株式会社 Peripheral vehicle identification method and driving support system using the same
JP5888164B2 (en) * 2012-07-20 2016-03-16 トヨタ自動車株式会社 Vehicle periphery monitoring device, vehicle periphery monitoring system
CN107924465B (en) * 2016-03-18 2021-09-10 Jvc 建伍株式会社 Object recognition device, object recognition method, and storage medium
CN109229206A (en) * 2018-08-02 2019-01-18 长安大学 The detection method and system of a kind of vehicle and its steering angle
CN109285169B (en) * 2018-08-13 2022-08-26 东南大学 Road rescue equipment side direction towing induction method based on wheel identification
CN109509223A (en) * 2018-11-08 2019-03-22 西安电子科技大学 Front vehicles distance measuring method based on deep learning

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2005096647A (en) * 2003-09-25 2005-04-14 Auto Network Gijutsu Kenkyusho:Kk On-vehicle display device
CN112071092A (en) * 2020-09-21 2020-12-11 汪秒 Road traffic intelligent mitigation control system based on cloud computing
CN112572437A (en) * 2020-12-18 2021-03-30 北京京东乾石科技有限公司 Method, device and equipment for determining driving direction and storage medium

Also Published As

Publication number Publication date
CN116092039A (en) 2023-05-09

Similar Documents

Publication Publication Date Title
CN109764881B (en) Unmanned vehicle testing method and device, electronic equipment and medium
US20200247404A1 (en) Information processing device, information processing system, information processing method, and program
CN113804214B (en) Vehicle positioning method and device, electronic equipment and computer readable storage medium
CN112434657A (en) Drift carrier detection method, device, program, and computer-readable medium
CN112183381A (en) Method and device for detecting driving area of vehicle
CN112507874B (en) Method and device for detecting motor vehicle jamming behavior
CN116092039B (en) Display control method and device of automatic driving simulation system
US20200250970A1 (en) Information processing apparatus, information processing method and program
CN111928868A (en) Navigation map road name display method and device and electronic equipment
CN116580235A (en) Target detection device, method, equipment and medium based on YOLOv4 network optimization
CN113642521B (en) Traffic light identification quality evaluation method and device and electronic equipment
US11267477B2 (en) Device and method for estimating the attention level of a driver of a vehicle
CN113591543B (en) Traffic sign recognition method, device, electronic equipment and computer storage medium
US20210286078A1 (en) Apparatus for tracking object based on lidar sensor and method therefor
CN114565766B BiSeNet V2-based pavement image semantic segmentation method and device
CN113743340B (en) Computer vision network model optimization method and related device for automatic driving
CN117912289B (en) Vehicle group driving early warning method, device and system based on image recognition
CN115731523A (en) Vehicle positioning method and device, electronic equipment and storage medium
CN113538546B (en) Target detection method, device and equipment for automatic driving
CN116244398A (en) High-precision map disconnection marking method and device, electronic equipment and storage medium
CN116381698B (en) Road remains detection method and device and electronic equipment
CN113642533B (en) Lane level positioning method and electronic equipment
CN117710926A (en) High-precision map lane line processing method and device, electronic equipment and storage medium
CN118196760A (en) Large vehicle detection method, computer program product, detection device and vehicle
CN116005583A (en) Zebra stripes generation method, device, electronic equipment and storage medium

Legal Events

Code: Description
PB01: Publication
SE01: Entry into force of request for substantive examination
GR01: Patent grant