WO2019047576A1 - Method and system for automatically controlling a camera of an autonomous vehicle - Google Patents

Method and system for automatically controlling a camera of an autonomous vehicle

Info

Publication number
WO2019047576A1
WO2019047576A1 (PCT/CN2018/089981)
Authority
WO
WIPO (PCT)
Prior art keywords
camera
video
vehicle
angle
error
Prior art date
Application number
PCT/CN2018/089981
Other languages
English (en)
French (fr)
Inventor
张云飞
郁浩
闫泳杉
郑超
唐坤
姜雨
Original Assignee
百度在线网络技术(北京)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 百度在线网络技术(北京)有限公司
Publication of WO2019047576A1 publication Critical patent/WO2019047576A1/zh

Classifications

    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D3/00 Control of position or direction
    • G05D3/12 Control of position or direction using feedback

Definitions

  • the present disclosure relates to automatic driving of a vehicle, and more particularly to control of a camera of an autonomous vehicle.
  • An autonomous vehicle is a smart car in which a computer system performs the driving.
  • Autonomous vehicles rely on artificial intelligence, visual computing, radar, monitoring devices and global positioning systems working in concert, so that a computer can operate the vehicle automatically and safely without any active human operation.
  • the camera is the main sensor for autonomous driving, and its attitude is particularly important to the entire system. During autonomous driving, errors arise in the camera's position and angle, which is detrimental to autonomous driving.
  • a system for implementing automatic control of a camera of an autonomous vehicle including:
  • a camera mounted on the vehicle to capture video
  • a processor configured to process the video to determine an error in position and/or angle of the camera
  • the drive unit is connected to the camera and drives the camera to change position and/or angle based on the error.
  • the driving device includes:
  • a connection portion configured to be connected to the camera;
  • a communication interface communicatively coupled to the processor, configured to receive a signal from the processor indicating the error in the position and/or angle of the camera; and
  • a movable portion connected to the connection portion, which adjusts the position and/or angle of the camera through the connection portion according to the signal from the communication interface.
  • the movable portion has a first rotation axis and a second rotation axis, and can rotate the camera about at least one of them.
  • the movable portion further has a third rotation axis and can rotate the camera about the third rotation axis.
  • the processor processes the video using a deep learning network.
  • the processor includes: a first unit configured to extract an image of at least one moment from the video; and a second unit configured to compare the image with existing images in the deep learning network to obtain an error, with a global optimum, of the camera's position and/or angle.
  • the processor further includes: a third unit configured to determine vehicle control information based on the video.
  • the third unit provides vehicle control information to the automated driving system of the vehicle for controlling the automatic driving process.
  • a driving device for a camera of an autonomous vehicle, including:
  • a connection portion configured to be connected to the camera;
  • a communication interface configured to receive a signal indicating an error in the position and/or angle of the camera; and
  • a movable portion connected to the connection portion, which adjusts the position and/or angle of the camera through the connection portion according to the signal.
  • the movable portion has a first rotation axis and a second rotation axis, and can rotate the camera about at least one of them.
  • the movable portion further has a third rotation axis and can rotate the camera about the third rotation axis.
  • a method for automatically controlling a camera of an autonomous vehicle including:
  • a. capturing video through a camera of the vehicle; b. processing the video with a computer to determine an error in the position and/or angle of the camera; and c. driving, by a driving device, the camera to change its position and/or angle according to the error.
  • step c includes: the driving device rotating the camera about at least one rotation axis of the driving device.
  • step b includes processing the video using a deep learning network.
  • processing the video using the deep learning network comprises: extracting an image of at least one moment from the video; and
  • comparing the extracted image with existing images in the deep learning network to obtain an error, with a global optimum, of the camera's position and/or angle.
  • the method further includes determining vehicle control information based on the video.
  • the method further includes providing vehicle control information to the automated driving system of the vehicle for controlling the automatic driving process.
  • an automatic driving system for a vehicle, including the aforementioned system for automatically controlling a camera of an autonomous vehicle.
  • embodiments of the present invention have the following advantages: 1. by automatically controlling the camera's position and/or angle, the problem of the image becoming inconsistent with the initial calibration due to prolonged vehicle motion is solved; 2. the video captured by the camera itself is used to determine the error and ultimately to adjust the camera's position and/or angle, realizing closed-loop control; 3. the above control can be continuously updated with the latest video data so as to always adapt to a complex autonomous driving environment; 4. the proposed method and system are applicable both to environments where manually driven and autonomous vehicles mix and to environments containing only autonomous vehicles, and are highly adaptable.
  • FIG. 1 shows a schematic diagram of a driving environment to which the method and system are applicable in accordance with an embodiment of the present invention
  • FIG. 2 is a schematic diagram of a system for implementing automatic control of a camera of an autonomously driven vehicle, in accordance with an embodiment of the present invention
  • FIG. 3 is a schematic diagram of a driving device of a camera of an autonomous vehicle according to an embodiment of the present invention
  • FIG. 4 is a schematic flow chart of a method of automatically controlling a camera of an autonomous vehicle in accordance with an embodiment of the present invention.
  • "Computer device", also referred to as "computer" in this context, means an intelligent electronic device that can perform predetermined processing, such as numerical and/or logical calculation, by running a predetermined program or instruction; it may include a processor and a memory, with the processor executing instructions pre-stored in the memory to perform the predetermined processing, or the processing may be performed by hardware such as an ASIC, FPGA or DSP, or by a combination of the two.
  • Computer devices include, but are not limited to, servers, personal computers, laptops, tablets, smart phones, and the like.
  • the computer device includes a user device and a network device.
  • the user equipment includes, but is not limited to, a computer, a smart phone, a PDA, etc.
  • the network device includes, but is not limited to, a single network server, a server group composed of multiple network servers, or a cloud composed of a large number of computers or network servers based on cloud computing, where cloud computing is a type of distributed computing: a super virtual computer composed of a group of loosely coupled computers.
  • the computer device can be operated separately to implement the present invention, and can also access the network and implement the present invention by interacting with other computer devices in the network.
  • the network in which the computer device is located includes, but is not limited to, the Internet, a wide area network, a metropolitan area network, a local area network, a VPN network, and the like.
  • the user equipment, network equipment, networks and so on are merely examples; other existing or future computer devices or networks, if applicable to the present invention, are also included in the scope of the present invention and are incorporated here by reference.
  • the scenario is an environment 1 at an intersection in which there are three cars 11, 12 and 13 that are driving or waiting, and a number of pedestrians 14 that are crossing the crosswalk.
  • each car is equipped with a camera 15 for automatic driving.
  • the camera 15 provides road information to the on-vehicle automatic driving system by taking a video, and the automatic driving system generates control information based on current road conditions and the like, and the control information acts on each device and component of the automobile to start, continue or stop the automatic driving.
  • Chinese invention patent application 201610027742.2 proposes an automatic driving apparatus that, when multiple routes are found, selects the route with the lower computational load relevant to autonomous driving. The apparatus includes: a position acquisition unit that acquires position information of other autonomously driven vehicles and other manually driven vehicles; a route search unit that searches for routes; a computation unit that, when the route search unit finds multiple routes, computes for each route, based on that position information, the ratio of the number of other autonomously driven vehicles on the route to the total number of all other vehicles on the route; a selection unit that selects, as the route for the host vehicle, the route with the largest computed proportion of autonomously driven vehicles; and a control unit that performs autonomous driving of the host vehicle along the selected route.
  • Autopilot system devices such as these can be used for the autonomous driving mentioned herein.
  • each vehicle in the environment 1 has a camera 15
  • self-driving vehicles using the method and system of the present invention can also share a driving environment with manually driven vehicles without adversely affecting it; on the contrary, since the behavior of the autonomous vehicles is protected and improved, they are often beneficial to the driving environment.
  • in FIG. 1, the car 11 is preparing to pass through the intersection in direction D2.
  • the car 13 has essentially passed through the intersection in direction D2 and continues to travel.
  • the car 12 is traveling in direction D4 and is waiting for a green light.
  • the pedestrians 14 are crossing the crosswalk in direction D1 or D2.
  • prolonged motion of the car may cause the image captured by the camera to become inconsistent with the initial calibration.
  • the inventors of the present invention first recognized that the pose of the camera 15 (i.e. its position and/or angle; the angle may also be called the attitude angle) needs to be adjusted as necessary. The specific manner is further explained below in conjunction with the drawings.
  • FIG. 2 is a schematic diagram of a system for automatically controlling a camera of an autonomous vehicle.
  • System 2 mainly includes a camera 25, used mainly to capture video; a drive unit 26 coupled to the camera 25; and a processor 27 communicatively coupled to the camera 25 and the drive unit 26.
  • the processor 27 processes the video and obtains an error in the position and/or angle of the camera 25, which is provided to the drive unit 26; the drive unit adjusts the position and/or angle of the camera 25 based on this error.
  • the processor 27 operatively communicates with the memory 28 for the data access needed to perform its functions; the processor 27 and memory 28 may be in-vehicle, which avoids problems such as the transmission delays that a cloud processor and memory might introduce, problems that autonomous driving, as a safety-critical use environment, seeks to avoid as much as possible.
  • the number of cameras 25 may be one or more. A camera can be located at the front of the car so that it can advantageously look essentially straight ahead in the car's direction of travel. Of course, it is also feasible to carry out the gist of the present invention using video captured by a camera located at the side or even the rear of the vehicle body. To facilitate providing vehicle control information to the automatic driving system, the video captured by the camera 25 should be usable to identify the following: lane lines, traffic markings, traffic signs and obstacles (pedestrians, other vehicles, etc.) in front of the vehicle. A rear camera (not shown) can identify lane lines and obstacles behind the vehicle; the cameras can share the same processor for video processing and determination of their respective pose errors.
  • the calibration method mentioned in Chinese Patent Application Publication No. CN103077518A can also be used to generate the aforementioned error value.
  • although the present disclosure favors the deep-learning-network-based implementation described above, it is not limited thereto.
  • FIG. 3 is a schematic diagram of a driving device for a camera of an autonomous vehicle according to an embodiment of the present invention. As can be seen, the camera 35 shoots video, which is transmitted to the processor 37 through a communication interface with the processor 37 and processed there. The result of the processing is an error value meaningful for the pose of the camera 35; this value is communicated to the drive device via the communication interface 362 between the processor 37 and the drive unit 36.
  • the drive unit 36 is mechanically coupled to the camera 35 via a connection portion 361, and the position and/or angle of the camera 35 is adjusted accordingly by the movable portion 363 according to the signal (the error value) from the communication interface 362.
  • the movable portion may have a first rotation axis a and a second rotation axis b, and may rotate the camera about at least one of them. Further, the movable portion may also have a third rotation axis c and can rotate the camera about the third rotation axis. It should be understood that the arrangement of the rotation axes may vary and is not limited to the above examples.
  • according to one embodiment, with the cooperation of the memory if needed, the processor 37 uses a deep learning network to process the video captured by the camera 35.
  • the processor 37 can be implemented to include a first unit and a second unit.
  • the first unit is configured to extract an image of at least one moment from the video;
  • the second unit is configured to compare the image with existing images in the deep learning network to obtain an error, with a global optimum, of the position and/or angle of the camera 35.
  • the processor 37 may further include a third unit configured to determine vehicle control information based on the video and provide it to the automatic driving system of the vehicle for controlling the automatic driving process, including but not limited to steering or braking to avoid obstacles, continuing to drive, turning, etc.
  • after an adjustment, the camera 35 will capture new video and continue to provide it to the processor 37.
  • after processing the updated video, the processor 37 will obtain new error value information, which should generally be improved over before.
  • the processor 37 maintains a threshold: when the adjusted error is below the threshold, the pose of the camera 35 need not be adjusted again until the next trigger, for example when processing of new video finds the error to be above the threshold again.
  • FIG. 4 is a schematic flow chart of a method of automatically controlling a camera of an autonomous vehicle in accordance with an embodiment of the present invention.
  • the method can include:
  • Step 41: capturing video through a camera of the vehicle;
  • Step 42: processing the video with a computer to determine an error in the position and/or angle of the camera;
  • Step 43: driving, by a driving device, the camera to change its position and/or angle according to the error.
  • in step 43, the drive unit rotates the camera about at least one rotation axis of the drive unit.
  • in step 42, a computer (e.g. a processor) processes the video using a deep learning network.
  • the step of processing the video using the deep learning network includes: extracting an image of at least one moment from the video; and comparing the image with existing images in the deep learning network to obtain an error, with a global optimum, of the camera's position and/or angle.
  • the method further comprises determining vehicle control information based on the video, and providing vehicle control information to an automated driving system of the vehicle for controlling an automated driving process.
  • the calibration method mentioned in Chinese Patent Application Publication No. CN103077518A can also be used to generate the aforementioned error value.
  • although the present disclosure favors the deep-learning-network-based implementation described above, it is not limited thereto.
  • after an adjustment, in step 41 the camera will capture new video and continue to provide it to the processor.
  • in step 42, after processing the updated video, the processor will obtain new error value information, which should generally be improved over before.
  • the processor maintains a threshold: when the adjusted error is below the threshold, step 43 need not be performed again to adjust the camera's pose until the next trigger, for example when processing of new video finds the error to be above the threshold again.
  • the automatic driving systems, the systems for automatic camera control, and the processors mentioned in the embodiments of the present invention include, but are not limited to, computer devices and automotive electronic devices.
  • a computer device is an intelligent electronic device that can perform predetermined processing such as numerical and/or logical calculation by running a predetermined program or instruction; it may include a processor and a memory, with the processor executing instructions pre-stored in the memory to perform the predetermined processing,
  • or the predetermined processing may be performed by hardware such as an ASIC, FPGA or DSP, or by a combination of the two.
  • Computer devices include, but are not limited to, servers, personal computers, notebook computers, tablets, smartphones, and the like.
  • servers include, but are not limited to, a single server, a group of servers, or a cloud composed of a large number of computers or servers based on cloud computing, where cloud computing is a type of distributed computing: a super virtual computer composed of a group of loosely coupled computers.
  • automotive electronic devices are electronic devices used in or related to automobiles.
  • the present invention can be implemented in software and/or a combination of software and hardware.
  • the various devices of the present invention can be implemented using an application specific integrated circuit (ASIC) or any other similar hardware device.
  • the software program of the present invention may be executed by a processor to implement the steps or functions described above.
  • the software programs (including related data structures) of the present invention can be stored in a computer readable medium.
  • the computer readable medium can be a computer readable signal medium or a computer readable storage medium.
  • the computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above.
  • a computer readable storage medium can be any tangible medium that can contain or store a program, which can be used by or in connection with an instruction execution system, apparatus or device.
  • a computer readable signal medium may include a data signal that is propagated in the baseband or as part of a carrier, carrying computer readable program code.
  • Such propagated data signals can take a variety of forms including, but not limited to, electromagnetic signals, optical signals, or any suitable combination of the foregoing.
  • the computer readable signal medium can also be any computer readable medium other than a computer readable storage medium, which can transmit, propagate, or transport a program for use by or in connection with the instruction execution system, apparatus, or device.
  • Program code embodied on a computer readable medium can be transmitted by any suitable medium, including but not limited to wireless, wire, fiber optic cable, RF, etc., or any suitable combination of the foregoing.
  • Computer program code for performing the operations of the present invention may be written in one or more programming languages, or a combination thereof, including object-oriented programming languages such as Java, Smalltalk and C++, as well as conventional procedural programming languages such as the C language or similar programming languages.
  • the program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer, partly on the remote computer, or entirely on the remote computer or server.
  • the remote computer can be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or can be connected to an external computer (for example, through the Internet using an Internet service provider).
  • steps or functions of the present invention may be implemented in hardware, for example, as a circuit that cooperates with a processor to perform various steps or functions.

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Automation & Control Theory (AREA)
  • Traffic Control Systems (AREA)

Abstract

The present invention provides a system for automatically controlling a camera of an autonomous vehicle, comprising: a camera mounted on the vehicle for capturing video; a processor configured to process the video to determine an error in the position and/or angle of the camera; and a driving device connected to the camera and driving the camera to change its position and/or angle according to the error. This solves the problem of the captured image becoming inconsistent with the initial calibration due to prolonged vehicle motion. The video captured by the camera itself is used to determine the error and ultimately to adjust the camera's position and/or angle, achieving true closed-loop control.

Description

Method and system for automatically controlling a camera of an autonomous vehicle
Cross-reference to related applications
This application claims priority to Chinese patent application No. 201710792884.2, entitled "Method and system for automatically controlling a camera of an autonomous vehicle", filed by 百度在线网络技术(北京)有限公司 on September 5, 2017.
Technical field
The present disclosure relates to automatic driving of vehicles, and in particular to controlling the camera of an autonomous vehicle.
Background
An autonomous vehicle is a smart car in which a computer system performs the driving. Autonomous vehicles rely on artificial intelligence, visual computing, radar, monitoring devices and global positioning systems working in concert, so that a computer can operate the motor vehicle automatically and safely without any active human operation.
The camera is the main sensor for autonomous driving, and its attitude is particularly important to the whole system. During autonomous driving, errors arise in the camera's position and angle, which is detrimental to autonomous driving.
Despite rapid progress in the field of autonomous driving, automatic control of the camera's pose remains a problem to be studied and solved.
Summary of the invention
According to embodiments of the present invention, it is desirable to provide a method and system capable of automatically controlling the camera of an autonomous vehicle, so that the camera remains, essentially at all times, in a position and attitude favorable to autonomous driving, thereby ensuring the stability and safety of autonomous driving.
According to an embodiment of one aspect of the present invention, a system for automatically controlling a camera of an autonomous vehicle is provided, comprising:
a camera mounted on the vehicle for capturing video;
a processor configured to process the video to determine an error in the position and/or angle of the camera; and
a driving device connected to the camera and driving the camera to change its position and/or angle according to the error.
Further, the driving device comprises:
a connecting portion configured to be connected to the camera;
a communication interface communicatively connected to the processor and configured to receive, from the processor, a signal representing the error in the position and/or angle of the camera; and
a movable portion connected to the connecting portion, which adjusts the position and/or angle of the camera through the connecting portion according to the signal from the communication interface.
Further, the movable portion has a first rotation axis and a second rotation axis and can rotate the camera about at least one of them.
Still further, the movable portion also has a third rotation axis and can rotate the camera about the third rotation axis.
Further, the processor uses a deep learning network to process the video.
Further, the processor comprises: a first unit configured to extract an image of at least one moment from the video; and a second unit configured to compare the image with existing images in the deep learning network to obtain an error, with a global optimum, of the camera's position and/or angle.
Further, the processor also comprises: a third unit configured to determine vehicle control information based on the video.
Further, the third unit provides the vehicle control information to the vehicle's automatic driving system for controlling the automatic driving process.
According to an embodiment of another aspect of the present invention, a driving device for a camera of an autonomous vehicle is provided, comprising:
a connecting portion configured to be connected to the camera;
a communication interface configured to receive a signal indicating an error in the position and/or angle of the camera; and
a movable portion connected to the connecting portion, which adjusts the position and/or angle of the camera through the connecting portion according to the signal.
Further, the movable portion has a first rotation axis and a second rotation axis and can rotate the camera about at least one of them.
Further, the movable portion also has a third rotation axis and can rotate the camera about the third rotation axis.
According to yet another aspect of the present invention, a method for automatically controlling a camera of an autonomous vehicle is provided, comprising:
a. capturing video through a camera of the vehicle;
b. processing the video with a computer to determine an error in the position and/or angle of the camera; and
c. driving, by a driving device, the camera to change its position and/or angle according to the error.
Further, step c comprises: the driving device rotating the camera about at least one rotation axis of the driving device.
Further, step b comprises: processing the video using a deep learning network.
Further, processing the video using the deep learning network comprises: extracting an image of at least one moment from the video; and
comparing the extracted image with existing images in the deep learning network to obtain an error, with a global optimum, of the camera's position and/or angle.
Further, the method also comprises: determining vehicle control information based on the video.
Further, the method also comprises: providing the vehicle control information to the vehicle's automatic driving system for controlling the automatic driving process.
According to an embodiment of yet another aspect of the present invention, an automatic driving system for a vehicle is provided, which includes the aforementioned system for automatically controlling a camera of an autonomous vehicle.
Compared with the prior art, embodiments of the present invention have the following advantages: 1. by automatically controlling the camera's position and/or angle, the problem of the image becoming inconsistent with the initial calibration due to prolonged vehicle motion is solved; 2. the video captured by the camera itself is used to determine the error and ultimately to adjust the camera's position and/or angle, achieving true closed-loop control; 3. the above control can be continuously updated with the latest video data, so as to always adapt to a complex autonomous driving environment; 4. the proposed method and system are highly adaptable, applicable both to environments where manually driven and autonomous vehicles mix and to environments containing only autonomous vehicles.
Brief description of the drawings
Other features, objects and advantages of the present invention will become more apparent upon reading the following detailed description of non-limiting embodiments, made with reference to the accompanying drawings:
FIG. 1 is a schematic diagram of a driving environment to which the method and system according to an embodiment of the present invention are applicable;
FIG. 2 is a schematic diagram of a system for automatically controlling a camera of an autonomous vehicle according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a driving device for a camera of an autonomous vehicle according to an embodiment of the present invention;
FIG. 4 is a schematic flowchart of a method of automatically controlling a camera of an autonomous vehicle according to an embodiment of the present invention.
The same or similar reference numerals in the drawings denote the same or similar components.
Detailed description
Before discussing the exemplary embodiments in more detail, it should be mentioned that some exemplary embodiments are described as processes or methods depicted as flowcharts. Although a flowchart describes the operations as sequential processing, many of the operations may be performed in parallel, concurrently or simultaneously. In addition, the order of the operations may be rearranged. A process may be terminated when its operations are completed, but may also have additional steps not included in the drawings. A process may correspond to a method, a function, a procedure, a subroutine, a subprogram, and so on.
A "computer device" in this context, also called a "computer", is an intelligent electronic device that can carry out predetermined processing such as numerical and/or logical computation by running predetermined programs or instructions. It may comprise a processor and a memory, with the processor executing instructions pre-stored in the memory to carry out the predetermined processing; or the predetermined processing may be carried out by hardware such as an ASIC, FPGA or DSP, or by a combination of the two. Computer devices include, but are not limited to, servers, personal computers, notebook computers, tablet computers, smartphones, and the like.
Computer devices include user equipment and network equipment. User equipment includes, but is not limited to, computers, smartphones, PDAs, and the like; network equipment includes, but is not limited to, a single network server, a server group composed of multiple network servers, or a cloud composed of a large number of computers or network servers based on cloud computing, where cloud computing is a form of distributed computing: a super virtual computer composed of a group of loosely coupled computers. A computer device may operate alone to implement the present invention, or may access a network and implement the present invention by interacting with other computer devices in the network. The network in which a computer device is located includes, but is not limited to, the Internet, wide area networks, metropolitan area networks, local area networks, VPN networks, and the like.
It should be noted that the user equipment, network equipment, networks and so on are merely examples; other existing or future computer devices or networks, if applicable to the present invention, should also fall within the scope of protection of the present invention and are incorporated herein by reference.
The methods discussed below (some of which are illustrated by flowcharts) may be implemented by hardware, software, firmware, middleware, microcode, hardware description languages, or any combination thereof. When implemented in software, firmware, middleware or microcode, the program code or code segments for carrying out the necessary tasks may be stored in a machine- or computer-readable medium (such as a storage medium). One or more processors may carry out the necessary tasks.
The specific structural and functional details disclosed here are merely representative and serve the purpose of describing exemplary embodiments of the present invention. The present invention may, however, be embodied in many alternative forms and should not be construed as limited to the embodiments set forth here.
It should be understood that when a unit is said to be "connected" or "coupled" to another unit, it may be directly connected or coupled to that other unit, or intermediate units may be present. In contrast, when a unit is said to be "directly connected" or "directly coupled" to another unit, no intermediate units are present. Other words used to describe relationships between units should be interpreted in a similar way (e.g. "between" versus "directly between", "adjacent to" versus "directly adjacent to", etc.).
It should be understood that although the terms "first", "second", etc. may be used here to describe individual units, these units should not be limited by these terms. These terms are used only to distinguish one unit from another. For example, without departing from the scope of the exemplary embodiments, a first unit could be called a second unit, and similarly a second unit could be called a first unit. The term "and/or" as used here includes any combination of one or more of the associated listed items.
The terminology used here is for describing particular embodiments only and is not intended to limit the exemplary embodiments. Unless the context clearly indicates otherwise, the singular forms "a" and "an" as used here are intended to include the plural as well. It should also be understood that the terms "comprise" and/or "include", as used here, specify the presence of the stated features, integers, steps, operations, units and/or components, without excluding the presence or addition of one or more other features, integers, steps, operations, units, components and/or combinations thereof.
It should also be mentioned that in some alternative implementations the functions/actions mentioned may occur in an order different from that indicated in the drawings. For example, two figures shown in succession may in fact be executed substantially simultaneously, or may sometimes be executed in the reverse order, depending on the functions/actions involved.
The present invention is described in further detail below with reference to the accompanying drawings.
FIG. 1 is a schematic diagram of a driving environment to which the method and system according to an embodiment of the present invention are applicable. The illustrated scene is an environment 1 at an intersection, in which there are three cars 11, 12 and 13 that are driving or waiting, and several pedestrians 14 crossing the crosswalk. Each car carries a camera 15 used for autonomous driving. By shooting video, the camera 15 provides road information to the on-board automatic driving system, which generates control information based on the current road conditions and other information; the control information then acts on the car's devices and components to start, continue or stop autonomous driving. Chinese invention patent application 201610027742.2 proposes an automatic driving apparatus that, when multiple routes are found, selects the route with the lower computational load relevant to autonomous driving. The apparatus comprises: a position acquisition unit that acquires position information of other autonomously driven vehicles and other manually driven vehicles; a route search unit that searches for routes; a computation unit that, when the route search unit finds multiple routes, computes for each route, based on that position information, the ratio of the number of other autonomously driven vehicles on the route to the total number of all other vehicles on the route; a selection unit that selects, as the route for the host vehicle, the route with the largest computed proportion of autonomously driven vehicles; and a control unit that performs autonomous driving of the host vehicle so that it travels along the selected route. Automatic driving system apparatus such as this can be used for the autonomous driving mentioned herein.
Although every vehicle in environment 1 has a camera 15, it should be noted that autonomous vehicles using the method and system of the present invention can also coexist with manually driven vehicles in one driving environment without adversely affecting it; on the contrary, since the behavior of the autonomous vehicles is protected and improved, they are often beneficial to the driving environment.
FIG. 1 shows four directions, D1-D4. Referring to FIG. 1, car 11 is about to pass through the intersection in direction D2, car 13 has essentially passed through the intersection in direction D2 and continues to travel, car 12 is traveling in direction D4 and is waiting for a green light, and the pedestrians 14 are crossing the crosswalk in direction D1 or D2.
Prolonged motion of the car may cause the image captured by the camera to become inconsistent with the initial calibration. To protect the autonomous vehicle itself as well as the other cars and pedestrians in environment 1, the inventors of the present invention first recognized that the pose of the camera 15 (i.e. its position and/or angle; the angle may also be called the attitude angle) needs to be adjusted as necessary. The specific manner is further explained below with reference to the drawings.
FIG. 2 is a schematic diagram of a system for automatically controlling a camera of an autonomous vehicle. System 2 mainly comprises a camera 25, used mainly to capture video; a driving device 26 connected to the camera 25; and a processor 27 communicatively connected to the camera 25 and the driving device 26. The processor 27 processes the video and obtains an error in the position and/or angle of the camera 25, which it provides to the driving device 26; the latter adjusts the position and/or angle of the camera 25 according to this error. The processor 27 operatively communicates with a memory 28 for the data access needed to perform its functions. The processor 27 and memory 28 may be on-board the vehicle, which avoids problems such as the transmission delays that using a cloud processor and memory might introduce, problems that autonomous driving, as a use environment with high safety requirements, wishes to avoid as far as possible.
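The closed-loop arrangement just described, in which the video captured by the camera yields a pose error that the driving device then corrects, can be sketched as follows. This is a minimal illustrative sketch, not the patent's implementation: the class and method names (`DriveUnit`, `control_step`) and the simple proportional correction step are assumptions introduced here.

```python
class DriveUnit:
    """Illustrative drive unit: corrects camera pan/tilt by the negated error."""

    def __init__(self):
        self.pan = 0.0   # degrees
        self.tilt = 0.0  # degrees

    def apply(self, error_pan, error_tilt, gain=1.0):
        # A unit gain removes the measured error in one step; a smaller gain
        # would correct gradually over several video frames.
        self.pan -= gain * error_pan
        self.tilt -= gain * error_tilt


def control_step(measure_error, drive):
    """One loop iteration: estimate the pose error from video, then correct it."""
    err_pan, err_tilt = measure_error()
    drive.apply(err_pan, err_tilt)
    return err_pan, err_tilt
```

With the camera drifted by (2.0, -1.0) degrees of pan/tilt error, a single control step returns both angles to zero; in a real system the error would be re-estimated from freshly captured video on every iteration.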
The number of cameras 25 may be one or more. A camera may be located at the front of the car, so that it can advantageously look essentially straight ahead in the car's direction of travel. Of course, carrying out the gist of the present invention using video captured by a camera located at the side or even the rear of the vehicle body is also feasible. To facilitate providing vehicle control information to the automatic driving system, the video captured by the camera 25 should be usable to identify the following: lane lines, traffic markings, traffic signs and obstacles (pedestrians, other vehicles, etc.) in front of the vehicle. A rear camera (not shown) can identify lane lines and obstacles behind the vehicle; the cameras can share the same processor for video processing and for determining their respective pose errors.
Optionally, the calibration method mentioned in Chinese invention patent application publication CN103077518A can also be used to generate the aforementioned error value. Although the present disclosure favors the deep-learning-network-based implementation described above, it is not limited thereto.
FIG. 3 is a schematic diagram of a driving device for a camera of an autonomous vehicle according to an embodiment of the present invention. As can be seen, the camera 35 shoots video, which is transmitted to the processor 37 through a communication interface between them and processed there. The result of the processing is an error value meaningful for the pose of the camera 35; this value is transmitted to the driving device through the communication interface 362 between the processor 37 and the driving device 36. The driving device 36 is mechanically connected to the camera 35 through a connecting portion 361 and, via a movable portion 363, adjusts the position and/or angle of the camera 35 according to the signal (the error value) from the communication interface 362.
Referring to FIG. 3, the movable portion may have a first rotation axis a and a second rotation axis b, and can rotate the camera about at least one of them. Further, the movable portion may also have a third rotation axis c, and can rotate the camera about the third rotation axis. It should be understood that the arrangement of the rotation axes may vary and is not limited to the above examples.
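A movable portion with the three rotation axes a, b and c described above can be modeled as below. The class name, the dictionary of axis angles, and the mechanical limit of 30 degrees per axis are all assumptions made for illustration; the patent does not specify angular ranges.

```python
class MovablePortion:
    """Illustrative movable portion with three rotation axes (a, b, c),
    each clamped to an assumed mechanical range of +/- LIMIT degrees."""

    LIMIT = 30.0  # hypothetical per-axis travel limit, in degrees

    def __init__(self):
        self.angles = {"a": 0.0, "b": 0.0, "c": 0.0}

    def rotate(self, axis, delta):
        """Rotate the camera about one axis by `delta` degrees, clamped to
        the mechanical limit; returns the resulting absolute angle."""
        if axis not in self.angles:
            raise ValueError(f"unknown axis: {axis}")
        new_angle = self.angles[axis] + delta
        self.angles[axis] = max(-self.LIMIT, min(self.LIMIT, new_angle))
        return self.angles[axis]
```

Clamping at the limit models the fact that a physical gimbal cannot rotate indefinitely; a request beyond the range simply saturates.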
Referring to FIG. 3, according to one embodiment, with the cooperation of the memory if needed, the processor 37 uses a deep learning network to process the video shot by the camera 35. Specifically, the processor 37 may be implemented to include a first unit and a second unit, where the first unit is configured to extract an image of at least one moment from the video, and the second unit is configured to compare the image with existing images in the deep learning network to obtain an error, with a global optimum, of the position and/or angle of the camera 35.
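The second unit above compares an extracted frame against existing images in a deep learning network to obtain the pose error. As a stand-in for that network (training one is outside the scope of this sketch), the function below shows the same shape of computation, frame in, angle error out, by correlating a frame's column-intensity profile against a reference profile; the profiles, the `deg_per_px` calibration constant and the function name are illustrative assumptions, not the patent's method.

```python
def estimate_angle_error(reference, current, deg_per_px=0.1):
    """Estimate a yaw-like angle error as the pixel shift that best aligns
    the current column-intensity profile with the reference profile.

    `reference` and `current` are equal-length lists of per-column intensity
    sums taken from a frame extracted out of the video; `deg_per_px` is an
    assumed calibration constant converting pixel shift to degrees."""
    n = len(reference)
    best_shift, best_score = 0, float("-inf")
    for shift in range(-n // 2, n // 2 + 1):
        # Cross-correlation score for this candidate shift.
        score = sum(
            reference[i] * current[i + shift]
            for i in range(n)
            if 0 <= i + shift < n
        )
        if score > best_score:
            best_shift, best_score = shift, score
    return best_shift * deg_per_px
```

A profile whose peak has drifted three columns to the right thus yields an estimated error of 0.3 degrees under the assumed calibration.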
Further, the processor 37 may also include a third unit configured to determine vehicle control information based on the video and provide it to the vehicle's automatic driving system for controlling the automatic driving process, including but not limited to steering or braking to avoid an obstacle, continuing to drive, turning, and so on.
Preferably, after an adjustment, the camera 35 captures new video and continues to provide it to the processor 37. After processing the updated video, the processor 37 obtains new error value information, which should generally be improved over before. In one example, the processor 37 maintains a threshold: when the adjusted error is below the threshold, it may be considered unnecessary to adjust the pose of the camera 35 again until the next trigger, for example when processing of new video finds the error to be above the threshold again.
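The threshold behavior described here, adjust, re-measure from fresh video, and stop once the error drops below the threshold until re-triggered, can be sketched as a small loop. The function name, the callbacks and the default values are hypothetical; the loop only illustrates the stopping rule, not how the error is actually computed.

```python
def adjust_until_converged(get_error, apply_correction,
                           threshold=0.5, max_iters=10):
    """Repeatedly measure the pose error and correct it until its magnitude
    falls below `threshold`; returns how many corrections were applied.

    `get_error` models re-estimating the error from newly captured video;
    `apply_correction` models the driving device acting on the camera."""
    for applied in range(max_iters):
        err = get_error()
        if abs(err) < threshold:
            return applied  # below threshold: no further adjustment needed
        apply_correction(err)
    return max_iters
```

If each correction removes, say, 60% of the remaining error, an initial error of 2.0 falls to 0.8 and then 0.32, so the loop stops after two corrections with the residual below the 0.5 threshold.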
FIG. 4 is a schematic flowchart of a method of automatically controlling a camera of an autonomous vehicle according to an embodiment of the present invention. The method may comprise:
Step 41: capturing video through the camera of the vehicle;
Step 42: processing the video with a computer to determine an error in the position and/or angle of the camera;
Step 43: driving, by a driving device, the camera to change its position and/or angle according to the error.
In a preferred embodiment, in step 43, the driving device rotates the camera about at least one rotation axis of the driving device.
In a preferred embodiment, in step 42, a computer (e.g. a processor) uses a deep learning network to process the video.
The features of the individual steps of the method correspond to the features of the system and the driving device described above with reference to FIGS. 1-3; the method can be understood and implemented with reference to the relevant descriptions of the above embodiments.
In one non-limiting embodiment, the step of processing the video using a deep learning network comprises: extracting an image of at least one moment from the video; and comparing the image with existing images in the deep learning network to obtain an error, with a global optimum, of the camera's position and/or angle.
Optionally, the method also comprises: determining vehicle control information based on the video, and providing the vehicle control information to the vehicle's automatic driving system for controlling the automatic driving process.
Optionally, the calibration method mentioned in Chinese invention patent application publication CN103077518A can also be used to generate the aforementioned error value. Although the present disclosure favors the deep-learning-network-based implementation described above, it is not limited thereto.
Preferably, after an adjustment, in step 41 the camera captures new video and continues to provide it to the processor; in step 42, after processing the updated video, the processor obtains new error value information, which should generally be improved over before. In one example, the processor maintains a threshold: when the adjusted error is below the threshold, it may be considered unnecessary to perform step 43 again to adjust the camera's pose until the next trigger, for example when processing of new video finds the error to be above the threshold again.
The automatic driving systems, systems for automatic camera control, and processors mentioned in the embodiments of the present invention include, but are not limited to, computer devices and automotive electronic devices. A computer device is an intelligent electronic device that can carry out predetermined processing such as numerical and/or logical computation by running predetermined programs or instructions; it may comprise a processor and a memory, with the processor executing instructions pre-stored in the memory to carry out the predetermined processing, or the predetermined processing may be carried out by hardware such as an ASIC, FPGA or DSP, or by a combination of the two. Computer devices include, but are not limited to, servers, personal computers, notebook computers, tablet computers, smartphones, and the like. Servers include, but are not limited to, a single server, a group of servers, or a cloud composed of a large number of computers or servers based on cloud computing, where cloud computing is a form of distributed computing: a super virtual computer composed of a group of loosely coupled computers. Automotive electronic devices are electronic devices used in or related to automobiles.
It should be noted that at least part of the present invention may be implemented in software and/or a combination of software and hardware; for example, the individual apparatus of the present invention may be implemented using an application-specific integrated circuit (ASIC) or any other similar hardware device. In one embodiment, the software program of the present invention may be executed by a processor to implement the steps or functions described above. Likewise, the software programs of the present invention (including related data structures) may be stored in a computer-readable medium. The computer-readable medium may be a computer-readable signal medium or a computer-readable storage medium. A computer-readable storage medium may be, for example, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared or semiconductor system, apparatus or device, or any combination of the above. More specific examples (a non-exhaustive list) of computer-readable storage media include: an electrical connection with one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above. In this document, a computer-readable storage medium may be any tangible medium that contains or stores a program for use by or in connection with an instruction execution system, apparatus or device. A computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, carrying computer-readable program code. Such a propagated data signal may take many forms, including, but not limited to, an electromagnetic signal, an optical signal, or any suitable combination of the above. A computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium that can send, propagate or transport a program for use by or in connection with an instruction execution system, apparatus or device. Program code contained on a computer-readable medium may be transmitted using any appropriate medium, including, but not limited to, wireless, wireline, optical cable, RF, etc., or any suitable combination of the above. Computer program code for carrying out the operations of the present invention may be written in one or more programming languages or combinations thereof, including object-oriented programming languages such as Java, Smalltalk and C++, as well as conventional procedural programming languages such as the C language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
In addition, some steps or functions of the present invention may be implemented in hardware, for example, as circuits that cooperate with a processor to carry out the individual steps or functions.
It is evident to those skilled in the art that the present invention is not limited to the details of the above exemplary embodiments and that it can be realized in other specific forms without departing from its spirit or essential characteristics. The embodiments should therefore be regarded in all respects as exemplary and not restrictive, and the scope of the present invention is defined by the appended claims rather than by the above description; it is therefore intended that all changes falling within the meaning and range of equivalency of the claims be embraced in the present invention. No reference sign in a claim should be construed as limiting the claim concerned. Moreover, it is evident that the word "comprising" does not exclude other units or steps, and the singular does not exclude the plural. Multiple units or apparatus recited in a system claim may also be implemented by one unit or apparatus through software or hardware. Words such as first and second are used to denote names and do not denote any particular order.

Claims (18)

  1. A system for implementing automatic control of a camera of an autonomous vehicle, comprising:
    a camera mounted on the vehicle and configured to capture video;
    a processor configured to process the video to determine an error in the position and/or angle of the camera;
    a driving device connected to the camera and driving the camera to change its position and/or angle according to the error.
  2. The system according to claim 1, wherein the driving device comprises:
    a connecting portion configured to connect with the camera;
    a communication interface communicably connected to the processor and configured to receive, from the processor, a signal indicating the error in the position and/or angle of the camera;
    a movable portion connected to the connecting portion and adjusting, via the connecting portion, the position and/or angle of the camera according to the signal.
  3. The system according to claim 2, wherein the movable portion has a first rotation axis and a second rotation axis and can drive the camera to rotate about at least one of the rotation axes.
  4. The system according to claim 3, wherein the movable portion further has a third rotation axis and can drive the camera to rotate about the third rotation axis.
  5. The system according to any one of claims 1 to 4, wherein the processor uses a deep learning network to process the video.
  6. The system according to claim 5, wherein the processor comprises:
    a first unit configured to extract an image of at least one moment from the video;
    a second unit configured to compare the image with existing images in the deep learning network to obtain an error, having a globally optimal value, in the position and/or angle of the camera.
  7. The system according to claim 5, wherein the processor further comprises:
    a third unit configured to determine vehicle control information based on the video.
  8. The system according to claim 7, wherein the third unit provides the vehicle control information to the autonomous driving system of the vehicle for controlling the autonomous driving process.
  9. A driving device for a camera of an autonomous vehicle, comprising:
    a connecting portion configured to connect with the camera;
    a communication interface configured to receive a signal indicating an error in the position and/or angle of the camera;
    a movable portion connected to the connecting portion and adjusting, via the connecting portion, the position and/or angle of the camera according to the signal.
  10. The driving device according to claim 9, wherein the movable portion has a first rotation axis and a second rotation axis and can drive the camera to rotate about at least one of the rotation axes.
  11. The driving device according to claim 10, wherein the movable portion further has a third rotation axis and can drive the camera to rotate about the third rotation axis.
  12. A method of automatically controlling a camera of an autonomous vehicle, comprising:
    a. capturing video by the camera of the vehicle;
    b. processing the video with a computer to determine an error in the position and/or angle of the camera;
    c. driving, by a driving device, the camera to change its position and/or angle according to the error.
  13. The method according to claim 12, wherein step c comprises:
    driving, by the driving device, the camera to rotate about at least one rotation axis of the driving device.
  14. The method according to claim 12 or 13, wherein step b comprises:
    processing the video using a deep learning network.
  15. The method according to claim 14, wherein processing the video using the deep learning network comprises:
    extracting an image of at least one moment from the video; and
    comparing the image with existing images in the deep learning network to obtain an error, having a globally optimal value, in the position and/or angle of the camera.
  16. The method according to claim 15, further comprising:
    determining vehicle control information based on the video.
  17. The method according to claim 16, further comprising:
    providing the vehicle control information to the autonomous driving system of the vehicle for controlling the autonomous driving process.
  18. An autonomous driving system of a vehicle, characterized by comprising the system for implementing automatic control of a camera of an autonomous vehicle according to any one of claims 1 to 8.
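The method of claims 12-15 amounts to a closed feedback loop: capture frames, estimate the camera's pose error, and command the mount to cancel it. The sketch below illustrates that loop under simplifying assumptions, not the patent's actual implementation: a two-axis pan/tilt mount (as in claims 3 and 10) and a stand-in error estimator that compares against a known reference pose instead of the deep-learning image comparison of claim 15. All names (`CameraMount`, `correction_loop`, `gain`) are hypothetical.

```python
# Hypothetical sketch of the closed-loop camera correction of claim 12:
# (a) capture, (b) estimate position/angle error, (c) drive the mount.
from dataclasses import dataclass


@dataclass
class CameraPose:
    pan: float   # degrees about the first rotation axis
    tilt: float  # degrees about the second rotation axis


class CameraMount:
    """Two-axis movable portion (claims 3/10): pan and tilt axes."""

    def __init__(self) -> None:
        self.pose = CameraPose(pan=0.0, tilt=0.0)

    def rotate(self, d_pan: float, d_tilt: float) -> None:
        # The movable portion rotates the camera via the connecting portion.
        self.pose.pan += d_pan
        self.pose.tilt += d_tilt


def estimate_error(observed: CameraPose, reference: CameraPose) -> CameraPose:
    # Stand-in for claim 15's "compare the image with existing images":
    # here the error is simply the offset from a known reference pose.
    return CameraPose(observed.pan - reference.pan,
                      observed.tilt - reference.tilt)


def correction_loop(mount: CameraMount, reference: CameraPose,
                    gain: float = 0.5, steps: int = 20,
                    tol: float = 1e-3) -> CameraPose:
    # Proportional correction: each cycle removes a fraction of the
    # measured error, so the pose converges toward the reference.
    for _ in range(steps):
        err = estimate_error(mount.pose, reference)
        if abs(err.pan) < tol and abs(err.tilt) < tol:
            break
        mount.rotate(-gain * err.pan, -gain * err.tilt)
    return mount.pose
```

Keeping the gain below 1 means each cycle cancels only part of the measured error, so a single noisy estimate cannot over-rotate the mount; the loop still converges geometrically.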
PCT/CN2018/089981 2017-09-05 2018-06-05 Method and system for automatically controlling a camera of an autonomous vehicle WO2019047576A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201710792884.2 2017-09-05
CN201710792884.2A CN107728646B (zh) 2017-09-05 2017-09-05 Method and system for automatically controlling a camera of an autonomous vehicle

Publications (1)

Publication Number Publication Date
WO2019047576A1 (zh)

Family

ID=61205690

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/089981 WO2019047576A1 (zh) 2017-09-05 2018-06-05 Method and system for automatically controlling a camera of an autonomous vehicle

Country Status (2)

Country Link
CN (1) CN107728646B (zh)
WO (1) WO2019047576A1 (zh)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107728646B (zh) * 2017-09-05 2020-11-10 Baidu Online Network Technology (Beijing) Co., Ltd. Method and system for automatically controlling a camera of an autonomous vehicle
EP3612857A4 (en) 2018-06-25 2020-02-26 Beijing Didi Infinity Technology and Development Co., Ltd. HIGH DEFINITION MAP ACQUISITION SYSTEM
CN109712196B (zh) * 2018-12-17 2021-03-30 Beijing Baidu Netcom Science and Technology Co., Ltd. Camera calibration processing method and apparatus, vehicle control device, and storage medium
CN109703465B (zh) * 2018-12-28 2021-03-12 Baidu Online Network Technology (Beijing) Co., Ltd. Control method and apparatus for a vehicle-mounted image sensor
CN110163930B (zh) * 2019-05-27 2023-06-27 Beijing Baidu Netcom Science and Technology Co., Ltd. Lane line generation method, apparatus, device, system, and readable storage medium

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104991580A (zh) * 2015-06-18 2015-10-21 奇瑞汽车股份有限公司 无人驾驶车辆的控制***及其控制方法
CN106184793A (zh) * 2016-08-29 2016-12-07 上海理工大学 一种用于汽车的空中俯视影像监视***
JP2017027350A (ja) * 2015-07-22 2017-02-02 いすゞ自動車株式会社 走行制御装置および走行制御方法
CN106506956A (zh) * 2016-11-17 2017-03-15 歌尔股份有限公司 基于无人机的跟踪拍摄方法、跟踪拍摄装置及***
US20170097241A1 (en) * 2015-10-01 2017-04-06 Toyota Motor Engineering & Manufacturing North America, Inc. Personalized suggestion of automated driving features
CN206086571U (zh) * 2016-10-12 2017-04-12 鄂尔多斯市普渡科技有限公司 一种便于调节摄像头高度的无人驾驶汽车
CN107728646A (zh) * 2017-09-05 2018-02-23 百度在线网络技术(北京)有限公司 对自动驾驶车辆的摄像头进行自动控制的方法和***

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2000054008A1 (en) * 1999-03-11 2000-09-14 Intelligent Technologies International, Inc. Methods and apparatus for preventing vehicle accidents
CN102435442B (zh) * 2011-09-05 2013-11-13 Beihang University Automatic driving robot for vehicle road tests
CN103886189B (zh) * 2014-03-07 2017-01-25 State Grid Corporation of China Inspection result data processing system and method for UAV patrol inspection
CN206100235U (zh) * 2016-10-31 2017-04-12 Guizhou Hengxing Intelligent Technology Co., Ltd. Camera

Also Published As

Publication number Publication date
CN107728646A (zh) 2018-02-23
CN107728646B (zh) 2020-11-10

Similar Documents

Publication Publication Date Title
WO2019047576A1 (zh) Method and system for automatically controlling a camera of an autonomous vehicle
EP3359436B1 (en) Method and system for operating autonomous driving vehicles based on motion plans
JP6720415B2 (ja) Bandwidth-constrained image processing for autonomous vehicles
JP7355877B2 (ja) Control method and apparatus for vehicle-road cooperative autonomous driving, electronic device, and vehicle
US10717448B1 (en) Automated transfer of vehicle control for autonomous driving
US11789467B2 (en) Method, apparatus, terminal, and storage medium for elevation surrounding flight control
US10929995B2 (en) Method and apparatus for predicting depth completion error-map for high-confidence dense point-cloud
JP2021501714A (ja) Pedestrian interaction system in low-speed scenes for autonomous vehicles
CN111273655 (zh) Motion planning method and system for an autonomous vehicle
US20180025635A1 (en) Connection of an autonomous vehicle with a second vehicle to receive goods
US20220105951A1 (en) Remotely supervised passenger intervention of an autonomous vehicle
US20200035099A1 (en) Distributing processing resources across local and cloud-based systems with respect to autonomous navigation
US11353872B2 (en) Systems and methods for selectively capturing and filtering sensor data of an autonomous vehicle
WO2019137559A1 (zh) Blind-zone tracking method for a directional antenna, device thereof, and mobile tracking system
CN115140090 (zh) Vehicle control method and apparatus, electronic device, and computer-readable medium
US11487289B1 (en) Autonomous vehicle repair
WO2018017094A1 (en) Assisted self parking
US20220284550A1 (en) System and method for increasing sharpness of image
CN111201420 (zh) Information processing device, self-position estimation method, program, and moving body
JP2020149323A (ja) Information processing device and automatic travel control system including the information processing device
WO2019047604A1 (zh) Method and apparatus for determining a data collection route
WO2023041014A1 (zh) Image acquisition method and apparatus, aircraft, and storage medium
US11070714B2 (en) Information processing apparatus and information processing method
US20230080638A1 (en) Systems and methods for self-supervised learning of camera intrinsic parameters from a sequence of images
CN115848358 (zh) Vehicle parking method and apparatus, electronic device, and computer-readable medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
Ref document number: 18854977
Country of ref document: EP
Kind code of ref document: A1
NENP Non-entry into the national phase
Ref country code: DE
122 Ep: pct application non-entry in european phase
Ref document number: 18854977
Country of ref document: EP
Kind code of ref document: A1