WO2023231653A1 - 一种车载相机的标定方法及装置、计算机设备、存储介质和产品 - Google Patents

一种车载相机的标定方法及装置、计算机设备、存储介质和产品 Download PDF

Info

Publication number
WO2023231653A1
WO2023231653A1 (PCT/CN2023/090821)
Authority
WO
WIPO (PCT)
Prior art keywords
vehicle-mounted camera
position information
image
transformation matrix
Prior art date
Application number
PCT/CN2023/090821
Other languages
English (en)
French (fr)
Inventor
马文辉
潘东伟
吴阳平
许亮
Original Assignee
上海商汤智能科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 上海商汤智能科技有限公司 filed Critical 上海商汤智能科技有限公司
Publication of WO2023231653A1 publication Critical patent/WO2023231653A1/zh

Links

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/80 - Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F17/00 - Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F17/10 - Complex mathematical operations
    • G06F17/16 - Matrix or vector computation, e.g. matrix-matrix or matrix-vector multiplication, matrix factorization
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/70 - Determining position or orientation of objects or cameras

Definitions

  • the present disclosure relates to but is not limited to the field of camera technology, and in particular, to a calibration method and device for a vehicle-mounted camera, computer equipment, storage media and products.
  • DMS: Driver Monitoring System
  • the embodiments of the present disclosure are expected to provide a calibration method and device for a vehicle-mounted camera, computer equipment, storage media and products.
  • an embodiment of the present disclosure provides a calibration method for a vehicle-mounted camera.
  • the method includes: acquiring first position information of a current position of the vehicle-mounted camera relative to a reference position, wherein the vehicle-mounted camera is arranged on a steering wheel column; determining a first transformation matrix of the current position relative to the reference position based on the first position information of the vehicle-mounted camera; acquiring third position information of the reference position in a cabin coordinate system and rotation angle information of the steering wheel column at the reference position; and determining a second transformation matrix of the current position of the vehicle-mounted camera relative to the cabin coordinate system based on the first transformation matrix, the third position information and the rotation angle information.
  • an embodiment of the present disclosure provides a calibration device for a vehicle-mounted camera.
  • the device includes:
  • the first acquisition part is configured to acquire the first position information of the current position of the vehicle-mounted camera relative to the reference position; wherein the vehicle-mounted camera is arranged on the steering wheel column;
  • a first determination part configured to determine a first transformation matrix of the current position relative to the reference position based on the first position information of the vehicle-mounted camera
  • the second acquisition part is configured to acquire the third position information of the reference position in the cabin coordinate system and the rotation angle information of the steering wheel column in the reference position;
  • a second determination part configured to determine a second transformation matrix of the current position of the vehicle-mounted camera relative to the cabin coordinate system based on the first transformation matrix, the third position information and the rotation angle information.
  • embodiments of the present disclosure provide a computer device, including: a processor; a memory for storing instructions executable by the processor;
  • the processor is configured to perform the method described in the first aspect.
  • embodiments of the present disclosure provide a computer-readable storage medium on which a computer program is stored. When the computer program is executed by a processor, the method described in the first aspect is implemented.
  • embodiments of the present disclosure provide a computer program product.
  • the computer program product includes a computer program or instructions. When the computer program or instructions are run on a computer, the computer is caused to perform the method described in the first aspect.
  • a vehicle-mounted camera is installed on the steering wheel column, and a preset reference position is used as an intermediate item.
  • in a two-step process, the current position of the vehicle-mounted camera is first mapped to the reference position, and then, based on the third position information of the reference position in the cabin coordinate system and the rotation angle information of the steering wheel column at the reference position, the conversion of the current position of the vehicle-mounted camera relative to the cabin coordinate system is completed, thereby enabling calibration of a vehicle-mounted camera whose position is variable and improving the accuracy of driver-status assessment in systems such as a DMS.
  • Figure 1 is a flow chart of a calibration method for a vehicle-mounted camera according to an embodiment of the present disclosure
  • Figure 2 is an example diagram of mechanical information of a steering wheel column in an embodiment of the present disclosure
  • Figures 3 and 4 are three-dimensional data example diagrams of the rotation center point of the steering wheel column in the cabin coordinate system in the embodiment of the present disclosure
  • Figures 5 and 6 are example diagrams of three-dimensional data of the reference position in the cabin coordinate system in the embodiment of the present disclosure
  • Figure 7 is a schematic diagram of a calibration method for a vehicle-mounted camera in an embodiment of the present disclosure
  • Figure 8 is an example diagram of a camera calibration device provided by an embodiment of the present disclosure.
  • FIG. 9 is a schematic diagram of a hardware entity of a computer device in an embodiment of the present disclosure.
  • Embodiments of the present disclosure provide a camera calibration method.
  • the execution subject may be a camera calibration device.
  • the camera calibration device may be a computer device such as a server, a laptop computer, or a vehicle-mounted device.
  • the vehicle-mounted device may be a vehicle machine in the vehicle cabin, or a device host in the vehicle that can be used to perform data processing operations on images and other data; this is not limited in the embodiments of the present disclosure.
  • FIG. 1 is a flow chart of a calibration method for a vehicle-mounted camera according to an embodiment of the present disclosure. As shown in Figure 1, the calibration method for a vehicle-mounted camera includes the following steps:
  • in the embodiments of the present disclosure, the height and rotation angle of the steering wheel of a vehicle can be changed through human manipulation.
  • these changes are realized through changes in the length and rotation angle of the steering wheel column. For example, a change in the length of the steering wheel column changes the position of the steering wheel along the length direction of the column, and a change in the rotation angle of the steering wheel column allows the steering wheel to rotate about the rotation center point of the column.
  • when the vehicle-mounted camera is located on the steering wheel column, the position of the vehicle-mounted camera changes as the position of the steering wheel column changes.
  • in the embodiments of the present disclosure, a reference position of the vehicle-mounted camera is preset.
  • the reference position may refer to the position of the steering wheel column at a preset length and a preset rotation angle.
  • the preset length may be the length of the column when the vehicle leaves the factory, and the preset rotation angle may be the angle of the column when the steering wheel is at a specified position, for example, an angle of 45 degrees with the plane of the cabin floor.
  • the camera calibration device obtains first position information of the vehicle-mounted camera at its current position relative to a reference position.
  • the current position may be a position determined based on the position of the steering wheel column while the vehicle is driving.
  • step S12 the camera calibration device determines a first transformation matrix of the current position relative to the reference position based on the first position information of the vehicle-mounted camera. Assuming that the first transformation matrix of the current position relative to the reference position is M, then the reference position can be obtained by multiplying the current position and M, that is, the first transformation matrix is the mapping matrix from the current position to the reference position.
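  • as a minimal illustrative sketch of this mapping convention (the 4x4 homogeneous form and the numerical values below are assumptions, not values from the disclosure), a point expressed at the camera's current position can be re-expressed relative to the reference position by multiplying with M:

```python
import numpy as np

# Hypothetical first transformation matrix M (current position -> reference position),
# written as a 4x4 homogeneous rigid transform: rotation R plus translation t.
theta = np.deg2rad(5.0)                      # example 5-degree column deflection
R = np.array([[1.0, 0.0, 0.0],
              [0.0, np.cos(theta), -np.sin(theta)],
              [0.0, np.sin(theta),  np.cos(theta)]])
t = np.array([0.0, 0.0, 0.02])               # example 2 cm column extension

M = np.eye(4)
M[:3, :3] = R
M[:3, 3] = t

# A point given in the camera frame at the current position, in homogeneous form.
p_current = np.array([0.1, 0.2, 0.5, 1.0])

# Multiplying by M expresses the same point relative to the reference position.
p_reference = M @ p_current
print(np.round(p_reference[:3], 4))
```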
  • the position of the vehicle-mounted camera is variable, after the position of the vehicle-mounted camera changes, its position in the cabin coordinate system needs to be calibrated to analyze the situation in the vehicle based on the in-vehicle images collected by the vehicle-mounted camera.
  • the length of the steering wheel column when the vehicle-mounted camera is at the current position may be different from the length of the steering wheel column when the vehicle-mounted camera is at the reference position, and the rotation angle of the steering wheel column when the vehicle-mounted camera is at the current position may be different from the rotation angle of the steering wheel column when the vehicle-mounted camera is at the reference position.
  • the first transformation matrix includes translation and rotation information of the current position relative to the reference position.
  • the camera calibration device obtains the third position information of the reference position in the cabin coordinate system and the rotation angle information of the steering wheel column in the reference position.
  • the cabin coordinate system is a pre-established cabin coordinate system with a fixed position in the cabin as the origin of the cabin.
  • the origin of the cabin is, for example, the place where the glasses case is placed in the cabin, or the center of the vehicle's central control display; the embodiments of this disclosure do not limit this.
  • the third position information of the reference position in the cabin coordinate system belongs to mechanical design information.
  • for different vehicle models, the position information of the reference position in the cabin coordinate system (i.e., the third position information) and the rotation angle information of the steering wheel column at the reference position may be different.
  • the camera calibration device of the present disclosure can directly read the third position information of the corresponding vehicle model and the rotation angle information of the steering wheel column at the reference position.
  • step S14 the camera calibration device determines the second transformation matrix of the current position of the vehicle-mounted camera relative to the cabin coordinate system based on the first transformation matrix, the third position information and the rotation angle information.
  • the second transformation matrix is a mapping matrix from the current position of the vehicle camera to the vehicle cabin coordinate system.
  • if the driver's status is detected based on the vehicle-mounted camera of the embodiments of the present disclosure, for example, detecting the driver's line-of-sight area, the positions of the two pupils in the driver's face image collected by the vehicle-mounted camera at the current position can first be located, and then, according to the intrinsic parameters of the vehicle-mounted camera and the calibration matrix determined by the present disclosure, the positions of the pupil centers of the left eye and the right eye in the cabin coordinate system are determined, so that the driver's sight direction is detected and the point where the sight direction falls in the cabin is determined, obtaining the driver's sight area.
  • for the detection of the driver's nodding, gestures, blinking and other actions, it is necessary to rely on a fixed coordinate system to analyze the acquired video and detect the corresponding actions.
  • taking the nodding action as an example, after the position of the head is detected in the image frames of the video, it is multiplied by the intrinsic parameter matrix of the camera to obtain the position of the head in the camera coordinate system of the vehicle-mounted camera at its current position, and then multiplied by the second transformation matrix of the present disclosure to obtain the position of the head in the cabin, as sketched below. Based on the positions of the head in the cabin determined from consecutive frames of images, it is possible to determine whether the driver nods. If the driver nods, it can be judged that the driver is driving while fatigued, and prompt information may need to be output so that the driver drives safely.
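  • a hedged sketch of this head-position use (the intrinsics, the extrinsic matrix E, the use of an assumed depth, and the pixel values below are illustrative assumptions, not the disclosure's exact procedure):

```python
import numpy as np

def head_in_cabin(u, v, depth, K, E):
    """Back-project a detected head pixel (u, v), using an assumed depth in
    metres, into the camera frame of the camera at its current position, then
    map it into the cabin coordinate system with the second transformation
    matrix E (4x4 homogeneous, current camera frame -> cabin frame)."""
    p_cam = depth * (np.linalg.inv(K) @ np.array([u, v, 1.0]))
    return (E @ np.append(p_cam, 1.0))[:3]

# Made-up intrinsic matrix K and second transformation matrix E, for illustration only.
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
E = np.eye(4)

# Head positions over consecutive frames; downstream DMS logic could then flag a
# sustained dip and recovery of the cabin-frame height as a nod.
track = [head_in_cabin(330.0, v, 0.6, K, E) for v in (200.0, 215.0, 230.0, 215.0, 200.0)]
print(np.round(track, 3))
```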
  • in the related art, the position of the vehicle-mounted camera is fixed; for example, it is installed on the "A" pillar.
  • the "A" pillar refers to the connecting pillar between the windshield and the left and right front doors.
  • a camera installed on the "A" pillar mostly captures the driver's side, so errors are prone to occur when the driver's facial expression or behavior is analyzed in a DMS system based on the captured side-view image. This leads to poor accuracy in assessing the driver's status, and a vehicle-mounted camera located on the "A" pillar also affects the appearance of the car interior.
  • in contrast, this disclosure installs the vehicle-mounted camera on the steering wheel column, causing less interference to the people in the vehicle, and uses a preset reference position as an intermediary: in a two-step process, the current position of the vehicle-mounted camera is first mapped to the reference position, and then, based on the third position information of the reference position in the cabin coordinate system and the rotation angle information of the steering wheel column at the reference position, the conversion of the current position of the vehicle-mounted camera relative to the cabin coordinate system is completed, thereby enabling calibration of a vehicle-mounted camera whose position is variable and improving the accuracy of driver-status assessment in DMS systems.
  • obtaining the first position information of the vehicle-mounted camera at the current position relative to the reference position includes:
  • the telescopic length data of the steering wheel column is the length of the steering wheel column corresponding to the current position of the vehicle-mounted camera, and the length change data relative to the preset length of the steering wheel column corresponding to the reference position of the vehicle-mounted camera;
  • the deflection angle data of the column is the angle of the steering wheel column corresponding to the current position of the vehicle-mounted camera, and the angle change data relative to the preset rotation angle of the steering wheel column corresponding to the reference position of the vehicle-mounted camera.
  • in some embodiments, the mechanical signal may include the length and the rotation angle of the steering wheel column.
  • the camera calibration device may calculate the telescopic length data of the steering wheel column based on the length of the steering wheel column corresponding to the current position of the vehicle-mounted camera and the length of the steering wheel column corresponding to the reference position of the vehicle-mounted camera.
  • similarly, the deflection angle data of the steering wheel column is calculated based on the rotation angle of the steering wheel column corresponding to the current position of the vehicle-mounted camera and the rotation angle of the steering wheel column corresponding to the reference position of the vehicle-mounted camera.
  • the mechanical signal of the steering wheel column can be obtained based on the existing positioning device in the vehicle for controlling the driving direction of the vehicle.
  • for example, the positioning device for the vehicle's driving direction can determine the vehicle's steering radius based on the length of the steering wheel column and can control the actual steering angle of the vehicle based on the rotation angle of the steering wheel column; the vehicle control system can then jointly determine the direction of travel of the vehicle based on the steering radius and the steering angle.
  • in other embodiments, the mechanical signal may also directly include the telescopic length data and deflection angle data of the steering wheel column. A small sketch of obtaining these two quantities from the column readings is given below.
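  • a minimal sketch of this step, with hypothetical signal names and units (the actual mechanical signal interface is vehicle-specific and not specified here):

```python
import numpy as np

# Hypothetical mechanical readings of the steering wheel column (units assumed).
current_length_m  = 0.47      # column length at the camera's current position
preset_length_m   = 0.45      # column length at the camera's reference position
current_angle_deg = 48.0      # column rotation angle at the current position
preset_angle_deg  = 45.0      # column rotation angle at the reference position

# Telescopic length data and deflection angle data forming the first position information.
delta_L     = current_length_m - preset_length_m                 # metres
delta_theta = np.deg2rad(current_angle_deg - preset_angle_deg)   # radians

first_position_information = (delta_L, delta_theta)
print(first_position_information)
```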
  • the steering wheel column has a rotation center point, and the telescopic length and deflection angle of the steering wheel column can be determined relative to the rotation center point.
  • FIG. 2 is an example diagram of mechanical information of a steering wheel column in an embodiment of the present disclosure.
  • 21 is the steering wheel column
  • point O is the rotation center point
  • 22 is the vehicle camera located at the reference position.
  • 23 identifies the vehicle camera located at the current location.
  • the telescopic length data of the steering wheel column is ΔL shown in FIG. 2, and the deflection angle data of the steering wheel column is Δθ shown in FIG. 2.
  • it can be understood that, in this embodiment, since the mechanical signal of the steering wheel column can be obtained, for example, from an existing positioning device in the vehicle for controlling the driving direction of the vehicle, without additional complicated calculation, this is advantageous to the calibration efficiency of the vehicle-mounted camera of the present disclosure.
  • determining the first transformation matrix of the current position relative to the reference position based on the first position information of the vehicle-mounted camera includes:
  • performing a rigid body transformation based on the telescopic length data, the deflection angle data and a preset angle to obtain the first transformation matrix, wherein the preset angle is the angle of the lens surface of the vehicle-mounted camera relative to the axis of the steering wheel column.
  • in the embodiments of the present disclosure, the rigid body transformation includes translation and rotation.
  • considering that, when the vehicle-mounted camera is installed, the lens surface may not be parallel to the axis of the steering wheel column, that is, the vehicle-mounted camera may have a certain elevation or depression angle relative to the axis of the steering wheel column, the angle of the lens surface of the vehicle-mounted camera relative to the axis of the steering wheel column can also be introduced when performing the rigid body transformation, so that the first transformation matrix is determined more accurately and the calibration adapts to vehicle-mounted cameras with different installation methods, as sketched below.
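  • one possible way to assemble such a rigid body transformation is sketched below; the axis conventions, the signs of ΔL and Δθ, and the way the preset lens-tilt angle enters are assumptions made only for illustration:

```python
import numpy as np

def rot_x(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]])

def first_transformation_matrix(delta_L, delta_theta, preset_tilt):
    """Assumed geometry: the column axis is the z-axis of a column frame, the
    column deflects by delta_theta about the column frame's x-axis and extends
    by delta_L along z, and the camera frame is tilted from the column frame by
    preset_tilt about x.  The returned 4x4 matrix maps coordinates in the camera
    frame at the current position to the camera frame at the reference position."""
    cam_to_col = np.eye(4)
    cam_to_col[:3, :3] = rot_x(preset_tilt)        # camera frame -> column frame

    cur_col_to_ref_col = np.eye(4)                 # current column -> reference column
    cur_col_to_ref_col[:3, :3] = rot_x(delta_theta)
    cur_col_to_ref_col[:3, 3] = np.array([0.0, 0.0, delta_L])

    # current camera -> current column -> reference column -> reference camera
    return np.linalg.inv(cam_to_col) @ cur_col_to_ref_col @ cam_to_col

print(np.round(first_transformation_matrix(0.02, np.deg2rad(3.0), np.deg2rad(10.0)), 4))
```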
  • in some embodiments, the method further includes: acquiring a first image collected by the vehicle-mounted camera at the current position and a second image collected by the vehicle-mounted camera at the reference position; after determining the first transformation matrix of the current position relative to the reference position based on the first position information of the vehicle-mounted camera, performing feature element identification and matching on the first image and the second image; determining a mapping relationship matrix between the first image and the second image according to the matching result of the feature elements in the first image and the second image; and updating the first transformation matrix based on the mapping relationship matrix.
  • the camera calibration device acquires the first image collected by the vehicle-mounted camera at the current position and the second image collected by the vehicle-mounted camera at the reference position, and then identifies and matches the characteristic elements of the first image and the second image.
  • the feature element can be a specific type of object or feature point in the image, such as the "B" pillar, the roof boundary point, etc., where the "B" pillar is also called the central pillar and is located between the front door and the rear door of the car.
  • for example, the features in each image can be extracted through a neural network model, the features in the first image can be matched against the features in the second image, and the mapping relationship matrix between the first image and the second image is then determined based on the positions, in the first image and in the second image, of the features whose matching degree is greater than a preset matching threshold.
  • it should be noted that, when determining the mapping relationship matrix between the first image and the second image, the present disclosure may first preset a transformation matrix, such as a 3*3 zero matrix, use this preset transformation matrix to map the positions in the first image (the first positions) of the features whose matching degree is greater than the preset matching threshold, take the positions in the second image (the second positions) of those features as the mapping target, and continuously adjust the element values in the preset transformation matrix until the product of the preset transformation matrix and the first positions is the same as the second positions; the matrix obtained at this point is the mapping relationship matrix, as sketched below.
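  • a hedged sketch of estimating such a mapping relationship matrix with off-the-shelf feature matching (ORB features and a RANSAC homography are an illustrative choice, not something this disclosure prescribes):

```python
import cv2
import numpy as np

def mapping_relationship_matrix(first_image_path, second_image_path):
    """Estimate a 3x3 matrix H that maps matched feature positions in the first
    image (camera at its current position) to their positions in the second
    image (camera at the reference position)."""
    img1 = cv2.imread(first_image_path, cv2.IMREAD_GRAYSCALE)
    img2 = cv2.imread(second_image_path, cv2.IMREAD_GRAYSCALE)

    orb = cv2.ORB_create(2000)
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

    # Keep only the best matches; this plays the role of the matching threshold.
    good = matches[:max(4, int(0.3 * len(matches)))]
    pts1 = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    pts2 = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)

    H, _ = cv2.findHomography(pts1, pts2, cv2.RANSAC, 5.0)
    return H
```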
  • the first transformation matrix can be updated based on the mapping relationship matrix.
  • for example, the matrix obtained by weighting the mapping relationship matrix and the first transformation matrix and then adding them may be determined as the updated first transformation matrix.
  • the weights can be preset based on experience; the weights of the mapping relationship matrix and the first transformation matrix sum to 1, and a larger weight indicates a higher importance of the corresponding matrix.
  • it can be understood that the first transformation matrix is updated after the mapping relationship matrix is determined from the visual image information. Since the first transformation matrix is obtained from the mechanical signal while the mapping relationship matrix is obtained from the visual signal, in this embodiment the visual signal can compensate for errors in the mechanical signal caused, for example, by assembly, so that the first transformation matrix is determined with higher precision and the calibration accuracy of the vehicle-mounted camera is improved; a sketch of this weighted update is given below.
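  • a minimal sketch of the weighted update (the weight value, and treating both matrices as compatible 3*3 matrices, are assumptions for illustration):

```python
import numpy as np

def update_first_transformation(M_mech, H_visual, w_visual=0.3):
    """Weighted blend of the mechanically derived first transformation matrix
    M_mech and the visually derived mapping relationship matrix H_visual.
    The two weights sum to 1; a larger weight means the corresponding matrix
    is considered more important."""
    return (1.0 - w_visual) * np.asarray(M_mech) + w_visual * np.asarray(H_visual)

M_mech   = np.eye(3)                       # placeholder first transformation matrix
H_visual = np.array([[1.0, 0.0,  4.0],
                     [0.0, 1.0, -2.0],
                     [0.0, 0.0,  1.0]])    # placeholder mapping relationship matrix
print(update_first_transformation(M_mech, H_visual))
```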
  • in some embodiments, obtaining the first position information of the current position of the vehicle-mounted camera relative to the reference position includes:
  • performing target detection on the first image and the second image respectively, and determining the position of a target object in the first image and its position in the second image;
  • determining the first position information as the deviation of the position of the target object in the first image relative to its position in the second image.
  • Determining the first transformation matrix of the current position relative to the reference position based on the first position information of the vehicle-mounted camera includes:
  • the first transformation matrix is determined based on the deviation.
  • the first image collected by the vehicle-mounted camera at the current position and the second image collected by the vehicle-mounted camera at the reference position acquired by the camera calibration device may be images including the same target object with a fixed position in the vehicle cabin.
  • the same target object with a fixed position in the vehicle cabin may be the sunroof frame or "B" pillar in the vehicle cabin.
  • the present disclosure performs target detection on the first image and the second image respectively; after the position of the target object in the first image and its position in the second image are determined, the deviation between these two positions can be determined and used as the first position information.
  • it should be noted that, in this embodiment, the position deviation may be caused by the translation and rotation of the current position of the vehicle-mounted camera relative to the reference position, so the position deviation also contains translation information and rotation information.
  • the first transformation matrix determined based on the position deviation also includes translation information and rotation information.
  • the first transformation matrix of the current position relative to the reference position may be determined based on the deviation of the position multiplied by the intrinsic parameter matrix of the camera.
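  • a hedged sketch of this variant follows; how the pixel deviation is lifted to a metric offset via the intrinsic parameter matrix and an assumed target depth is an illustrative choice, since the disclosure only states that the deviation is multiplied by the intrinsic parameter matrix of the camera:

```python
import numpy as np

def first_transform_from_deviation(du, dv, depth, K):
    """Turn the pixel deviation (du, dv) of a fixed cabin target between the
    first image (current position) and the second image (reference position)
    into an approximate metric shift using the intrinsic matrix K and an
    assumed target depth, and pack it into a 4x4 first transformation matrix."""
    shift_cam = depth * (np.linalg.inv(K) @ np.array([du, dv, 0.0]))  # metres
    M = np.eye(4)
    M[:3, 3] = -shift_cam   # the apparent target shift is opposite to the camera's motion
    return M

K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
print(np.round(first_transform_from_deviation(12.0, -5.0, 0.8, K), 4))
```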
  • in some embodiments, obtaining the third position information of the reference position in the cabin coordinate system includes:
  • obtaining fourth position information of the rotation center point of the steering wheel column in the cabin coordinate system;
  • obtaining fifth position information of the reference position relative to the rotation center point of the steering wheel column;
  • determining the third position information of the reference position in the cabin coordinate system based on the fourth position information and the fifth position information, as sketched below.
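  • a small illustration of composing the fourth and fifth position information (coordinate values are made up, and both offsets are assumed to be expressed along the same cabin axes):

```python
import numpy as np

# Fourth position information: rotation center point of the steering wheel column
# in the cabin coordinate system (X1, Y1, Z1), values made up for illustration.
rotation_centre_in_cabin = np.array([0.35, -0.40, 0.10])

# Fifth position information: reference position of the camera relative to the
# rotation center point of the column, also made up.
reference_rel_rotation_centre = np.array([0.05, 0.12, 0.08])

# Third position information: reference position in the cabin coordinate system (X2, Y2, Z2).
reference_in_cabin = rotation_centre_in_cabin + reference_rel_rotation_centre
print(reference_in_cabin)
```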
  • Figures 3 and 4 are three-dimensional data example diagrams of the rotation center point of the steering wheel column in the cabin coordinate system in the embodiment of the present disclosure.
  • 31 is the origin of the cabin coordinates
  • 32 is the rotation center point of the steering wheel column.
  • the fourth position information is the distance Z1 of the rotation center point 32 of the steering wheel column relative to the cabin origin 31 in the height direction, the distance Y1 in the direction of the front axle of the vehicle, and the distance X1 in the forward direction of the vehicle.
  • the direction of the front axle can refer to the direction between the two front wheels when the vehicle travels forward; the height direction can refer to the direction perpendicular to the front-axle direction, in which the height of the roof above the ground can be measured; and the forward direction of the vehicle is the direction in which the vehicle travels forward, in which the distance between the front seats and the rear seats in the vehicle can be measured.
  • the position information of the center point of the vehicle-mounted camera 22 at the reference position relative to the rotation center point O of the steering wheel column in FIG. 2 is the fifth position information of the reference position relative to the rotation center point of the steering wheel column.
  • based on the fourth position information and the fifth position information, the present disclosure can determine the third position information of the reference position in the cabin coordinate system.
  • in this embodiment, the rotation center point of the steering wheel column is used as an intermediate conversion to determine the third position information of the reference position in the cabin coordinate system.
  • in other embodiments, since the reference position is also a predetermined position, the third position information can be determined directly based on the positional relationship, in the mechanical design information, between the reference position and the cabin origin of the cabin coordinate system.
  • Figures 5 and 6 are example diagrams of three-dimensional data of the reference position in the cabin coordinate system in an embodiment of the present disclosure. As shown in Figures 5 and 6, 31 is the cabin coordinate origin, 33 is the reference position, and the distance Z2 of the reference position 33 relative to the cabin origin 31 in the height direction, the distance Y2 in the direction of the front axle of the vehicle, and the distance X2 in the forward direction of the vehicle constitute the third position information.
  • the second transformation matrix of the current position of the vehicle-mounted camera relative to the cabin coordinate system is determined based on the first transformation matrix, the third position information and the rotation angle information.
  • determining a translation matrix and a rotation matrix of the reference position relative to the cabin coordinate system based on the third position information and the rotation angle information;
  • the translation matrix and the rotation matrix are combined and multiplied by the first transformation matrix to obtain the second transformation matrix.
  • the translation matrix of the reference position relative to the cabin coordinate system can be determined based on the third position information.
  • the translation matrix is a matrix composed of X2, Y2 and Z2.
  • in addition, when determining the rotation matrix based on the third position information and the rotation angle information, three-dimensional space modeling can be performed to determine the line connecting the reference position and the cabin origin of the cabin coordinate system and the angles between this line and each coordinate plane of the cabin coordinate system, obtain the active rotation matrices of the connecting line around each axis of the cabin coordinate system, and multiply these active rotation matrices together to obtain the rotation matrix of the present disclosure.
  • as mentioned above, the first transformation matrix includes the translation and rotation information of the current position relative to the reference position.
  • after obtaining the translation matrix and the rotation matrix of the reference position relative to the cabin coordinate system, the present disclosure can combine the two; for example, if the translation matrix is B and the rotation matrix is D, the combined matrix [B, D], which includes the translation and rotation information of the reference position relative to the cabin coordinate system, is obtained, and the second transformation matrix is obtained by multiplying [B, D] by the first transformation matrix M of the current position relative to the reference position, as sketched below.
  • for example, the second transformation matrix is E, and E is [B, D]*M.
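  • a minimal sketch of this combination step, assuming [B, D] and M are both written as 4x4 homogeneous matrices (B a translation vector, D a rotation matrix, values made up):

```python
import numpy as np

def second_transformation_matrix(B, D, M):
    """Combine translation B (3-vector) and rotation D (3x3) of the reference
    position relative to the cabin coordinate system into one homogeneous
    matrix [B, D], then multiply by the first transformation matrix M
    (current position -> reference position) to obtain E."""
    BD = np.eye(4)
    BD[:3, :3] = np.asarray(D)
    BD[:3, 3] = np.asarray(B)
    return BD @ np.asarray(M)

B = np.array([0.30, -0.45, 0.15])    # example translation X2, Y2, Z2
D = np.eye(3)                        # example rotation matrix
M = np.eye(4)                        # example first transformation matrix
E = second_transformation_matrix(B, D, M)
print(E)
```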
  • FIG. 7 is a schematic diagram of a calibration method of a vehicle-mounted camera in an embodiment of the present disclosure.
  • the camera calibration method of the present disclosure includes two steps.
  • the first step is to map the real-time position of the vehicle-mounted camera, identified by 71, to the reference position of the vehicle-mounted camera, identified by 72.
  • in the mapping process, the mechanical telescopic and rotation signals of the steering wheel column can be used, and photos of the interior of the cabin taken by the vehicle-mounted camera can also be used.
  • the mechanical telescopic signal of the steering wheel column includes the telescopic length data of the steering wheel column mentioned in this disclosure, and the rotation signal includes the deflection angle data mentioned in this disclosure; the photos of the interior of the cabin taken by the vehicle-mounted camera include the first image and the second image mentioned in this disclosure.
  • the first transformation matrix of the present disclosure is obtained through the above mapping.
  • the second step is to map the reference position of the vehicle camera identified by 72 to the passenger car cabin spatial coordinate system identified by 73.
  • in the mapping process, the mechanical design digital model of the cabin can be used; for example, the fourth position information of the rotation center point of the steering wheel column in the cabin coordinate system and the fifth position information of the reference position relative to the rotation center point of the steering wheel column, as mentioned in this disclosure, can be used to determine the third position information of the reference position in the cabin coordinate system, and the translation matrix and the rotation matrix of the present disclosure are then determined based on the rotation angle information at the reference position.
  • based on the first transformation matrix, the translation matrix and the rotation matrix obtained through the above two steps, the second transformation matrix of the real-time position of the vehicle-mounted camera relative to the cabin spatial coordinate system in the embodiments of the present disclosure can be obtained, as sketched below. It should be noted that, since the matrix transformation is invertible, if the position of the vehicle-mounted camera relative to the cabin spatial coordinate system is obtained in advance, the real-time position of the vehicle-mounted camera, that is, the real-time position of the steering wheel column, can be obtained based on the second transformation matrix.
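  • pulling the two steps together in one hedged, self-contained sketch (the column geometry and all numerical values are illustrative assumptions), including the inverse use noted above:

```python
import numpy as np

def rot_x(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]])

def calibrate_vehicle_camera(delta_L, delta_theta, preset_tilt, B, D):
    """Step 1: build M, the first transformation matrix mapping the camera's
    real-time position to its reference position, from the column's mechanical
    signal (delta_L, delta_theta) and the preset lens-tilt angle.
    Step 2: combine the reference position's translation B and rotation D
    relative to the cabin coordinate system and compose E = [B, D] * M."""
    # Step 1 (same assumed column geometry as the earlier sketch).
    tilt = np.eye(4); tilt[:3, :3] = rot_x(preset_tilt)
    motion = np.eye(4); motion[:3, :3] = rot_x(delta_theta); motion[:3, 3] = [0.0, 0.0, delta_L]
    M = np.linalg.inv(tilt) @ motion @ tilt
    # Step 2.
    BD = np.eye(4); BD[:3, :3] = D; BD[:3, 3] = B
    return M, BD @ M

M, E = calibrate_vehicle_camera(0.02, np.deg2rad(3.0), np.deg2rad(10.0),
                                B=np.array([0.30, -0.45, 0.15]), D=np.eye(3))

# Because the matrix transformation is invertible, a point known in the cabin
# frame can also be mapped back into the camera's real-time frame.
p_cabin = np.array([0.5, -0.2, 0.9, 1.0])
print(np.round(np.linalg.inv(E) @ p_cabin, 3))
```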
  • FIG. 8 shows an example diagram of a camera calibration device provided by an embodiment of the present disclosure.
  • the camera calibration device 800 includes:
  • the first acquisition part 801 is configured to acquire the first position information of the current position of the vehicle-mounted camera relative to the reference position; wherein the vehicle-mounted camera is arranged on the steering wheel column;
  • the first determining part 802 is configured to determine a first transformation matrix of the current position relative to the reference position based on the first position information of the vehicle-mounted camera;
  • the second acquisition part 803 is configured to acquire the third position information of the reference position in the cabin coordinate system and the rotation angle information of the steering wheel column in the reference position;
  • the second determination part 804 is configured to determine a second transformation matrix of the current position of the vehicle-mounted camera relative to the cabin coordinate system based on the first transformation matrix, the third position information and the rotation angle information.
  • in some embodiments, the first acquisition part 801 is configured to read the mechanical signal of the steering wheel column, obtain the telescopic length data and deflection angle data of the steering wheel column, and use the telescopic length data and deflection angle data as the first position information.
  • in some embodiments, the first determining part 802 is configured to perform a rigid body transformation based on the telescopic length data, the deflection angle data and a preset angle to obtain the first transformation matrix; wherein the preset angle is the angle of the lens surface of the vehicle-mounted camera relative to the axis of the steering wheel column.
  • the device further includes:
  • a third acquisition part configured to acquire the first image collected by the vehicle-mounted camera at the current position, and the second image collected by the vehicle-mounted camera at the reference position;
  • the third determination part is configured to perform feature element identification and matching on the first image and the second image after the first transformation matrix of the current position relative to the reference position is determined based on the first position information of the vehicle-mounted camera;
  • a fourth determination part configured to determine a mapping relationship matrix between the first image and the second image based on the matching result of the characteristic elements in the first image and the second image;
  • the updating part is configured to update the first transformation matrix based on the mapping relationship matrix.
  • in some embodiments, the first acquisition part 801 is configured to perform target detection on the first image and the second image respectively, determine the position of the target object in the first image and the position of the target object in the second image, and determine the first position information as the deviation of the position of the target object in the first image relative to its position in the second image;
  • the first determining part 802 is configured to determine the first transformation matrix based on the deviation.
  • in some embodiments, the second acquisition part 803 is configured to acquire the fourth position information of the rotation center point of the steering wheel column in the cabin coordinate system, acquire the fifth position information of the reference position relative to the rotation center point of the steering wheel column, and determine the third position information of the reference position in the cabin coordinate system based on the fourth position information and the fifth position information.
  • in some embodiments, the second determining part 804 is configured to determine a translation matrix and a rotation matrix of the reference position relative to the cabin coordinate system based on the third position information and the rotation angle information, and to combine the translation matrix and the rotation matrix and multiply the result by the first transformation matrix to obtain the second transformation matrix.
  • it should be noted that, in the embodiments of the present disclosure, if the above vehicle-mounted camera calibration method is implemented in the form of a software function module and is sold or used as an independent product, it can also be stored in a computer-readable storage medium.
  • based on such an understanding, the technical solution of the embodiments of the present disclosure, in essence or in the part contributing to the related art, can be embodied in the form of a software product; the software product is stored in a storage medium and includes a number of instructions to enable a computer device (which may be a personal computer, a server, a network device, etc.) to execute all or part of the methods described in the various embodiments of the present disclosure.
  • the aforementioned storage media include: a USB flash drive, a removable hard disk, a read-only memory (Read Only Memory, ROM), a magnetic disk or an optical disk and other media that can store program code. As such, the disclosed embodiments are not limited to any specific combination of hardware and software.
  • embodiments of the present disclosure provide a computer device, including a memory and a processor.
  • the memory stores a computer program that can be run on the processor.
  • the processor executes the program, the steps in the above method are implemented.
  • embodiments of the present disclosure provide a computer-readable storage medium on which a computer program is stored; when the computer program is executed by a processor, the steps in the above method are implemented.
  • the computer-readable storage medium may be a tangible device that holds and stores instructions for use by an instruction execution device, and may be a volatile storage medium or a non-volatile storage medium.
  • the computer-readable storage medium may be, for example, but not limited to: an electrical storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the above.
  • a more specific (non-exhaustive) list of examples of computer-readable storage media includes: a portable computer disk, a hard disk, a random access memory (Random Access Memory, RAM), a read-only memory (Read Only Memory, ROM), an erasable programmable read-only memory (Erasable Programmable Read Only Memory, EPROM or flash memory), a static random-access memory (Static Random-Access Memory, SRAM), a portable compact disk read-only memory (Compact Disk Read Only Memory, CD-ROM), a digital versatile disc (Digital Versatile Disc, DVD), a memory stick, a floppy disk, a mechanical encoding device such as a punched card or a raised structure in a groove with instructions stored thereon, and any suitable combination of the above.
  • the computer-readable storage medium used here is not to be construed as a transient signal per se, such as a radio wave or other freely propagating electromagnetic wave, an electromagnetic wave propagating through a waveguide or other transmission medium (for example, a light pulse through a fiber-optic cable), or an electrical signal transmitted through an electrical wire.
  • embodiments of the present disclosure provide a computer program product.
  • the computer program product includes a non-transitory computer-readable storage medium storing a computer program; when the computer program is read and executed by a computer, some or all of the steps of the above method are implemented.
  • the computer program product can be implemented specifically through hardware, software or a combination thereof.
  • the computer program product is embodied as a computer storage medium.
  • in another optional embodiment, the computer program product is embodied as a software product, such as a software development kit (Software Development Kit, SDK), and so on.
  • SDK: Software Development Kit
  • in the embodiments of the present disclosure and other embodiments, a "part" may be part of a circuit, part of a processor, part of a program or software, and so on; it may of course also be a unit or a module, or it may be non-modular.
  • Figure 9 is a schematic diagram of a hardware entity of a computer device in an embodiment of the present disclosure.
  • the hardware entity of the computer device 900 includes: a processor 901, a communication interface 902 and a memory 903, where:
  • Processor 901 generally controls the overall operation of computer device 900 .
  • Communication interface 902 can enable the computer device to communicate with other terminals or servers through a network.
  • the memory 903 is configured to store instructions and applications executable by the processor 901, and can also cache data to be processed or already processed by the processor 901 and by each module in the computer device 900 (for example, image data, audio data, voice communication data and video communication data); it can be implemented through flash memory (FLASH) or random access memory (Random Access Memory, RAM).
  • data can be transmitted between the processor 901, the communication interface 902 and the memory 903 through the bus 904.
  • the disclosed devices and methods can be implemented in other ways.
  • the device embodiments described above are only illustrative.
  • the division of the units is only a logical function division; in actual implementation there may be other division manners, for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed.
  • the coupling, direct coupling, or communication connection between the components shown or discussed may be implemented through some interfaces, and the indirect coupling or communication connection of the devices or units may be electrical, mechanical, or in other forms.
  • the units described above as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; they may be located in one place or distributed to multiple network units; some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
  • in addition, the functional units in the embodiments of the present disclosure may all be integrated into one processing unit, or each unit may serve as a separate unit, or two or more units may be integrated into one unit; the above integrated unit can be implemented in the form of hardware or in the form of hardware plus software functional units.
  • those of ordinary skill in the art can understand that all or part of the steps of the above method embodiments can be completed by hardware related to program instructions; the aforementioned program can be stored in a computer-readable storage medium, and when the program is executed, the steps of the above method embodiments are performed; the aforementioned storage media include: various media that can store program code, such as removable storage devices, ROMs, magnetic disks, or optical disks.
  • alternatively, if the above integrated unit of the present disclosure is implemented in the form of a software function module and is sold or used as an independent product, it can also be stored in a computer-readable storage medium.
  • based on such an understanding, the technical solution of the present disclosure, in essence or in the part contributing to the related art, can be embodied in the form of a software product; the computer software product is stored in a storage medium and includes a number of instructions to enable a computer device (which may be a personal computer, a server, a network device, etc.) to execute all or part of the methods described in the various embodiments of the present disclosure.
  • the aforementioned storage media include: removable storage devices, ROMs, magnetic disks or optical disks and other media that can store program code.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Mathematical Optimization (AREA)
  • Mathematical Analysis (AREA)
  • Computational Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Pure & Applied Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Computing Systems (AREA)
  • Algebra (AREA)
  • Databases & Information Systems (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

Embodiments of the present disclosure disclose a calibration method and apparatus for a vehicle-mounted camera, a computer device, a storage medium and a product. The method comprises: acquiring first position information of the current position of a vehicle-mounted camera relative to a reference position, wherein the vehicle-mounted camera is arranged on a steering wheel column; determining, based on the first position information of the vehicle-mounted camera, a first transformation matrix of the current position relative to the reference position; acquiring third position information of the reference position in a cabin coordinate system and rotation angle information of the steering wheel column at the reference position; and determining, according to the first transformation matrix, the third position information and the rotation angle information, a second transformation matrix of the current position of the vehicle-mounted camera relative to the cabin coordinate system.

Description

一种车载相机的标定方法及装置、计算机设备、存储介质和产品
相关申请的交叉引用
本公开基于申请号为202210616046.0、申请日为2022年05月31日、申请名称为“一种车载相机的标定方法及装置、计算机设备、存储介质”的中国专利申请提出,并要求该中国专利申请的优先权,该中国专利申请的全部内容在此引入本公开作为参考。
技术领域
本公开涉及但不限于相机技术领域,尤其涉及一种车载相机的标定方法及装置、计算机设备、存储介质和产品。
背景技术
随着不同品牌不同车型对车舱内饰美化及个性化的追求,车载相机的安装位置也愈加灵活,例如从车舱内固定位置变为可变位置。基于位置可变的车载相机,如何应用于如驾驶员监测***(Driver Monitoring System,DMS)来判断驾驶员的状态,还有待进一步解决。
发明内容
本公开实施例期望提供一种车载相机的标定方法及装置、计算机设备、存储介质和产品。
第一方面,本公开实施例提供一种车载相机的标定方法,所述方法包括:
获取车载相机的当前位置相对于基准位置的第一位置信息;其中,所述车载相机设置于方向盘管柱上;
基于所述车载相机的第一位置信息,确定所述当前位置相对于所述基准位置的第一变换矩阵;
获取所述基准位置在车舱坐标系下的第三位置信息和所述基准位置下所述方向盘管柱的旋转角信息;
根据所述第一变换矩阵、所述第三位置信息和所述旋转角信息,确定所述车载相机的当前位置相对于所述车舱坐标系的第二变换矩阵。
第二方面,本公开实施例提供一种车载相机的标定装置,所述装置包括:
第一获取部分,被配置为获取车载相机的当前位置相对于基准位置的第一位置信息;其中,所述车载相机设置于方向盘管柱上;
第一确定部分,被配置为基于所述车载相机的第一位置信息,确定所述当前位置相对于所述基准位置的第一变换矩阵;
第二获取部分,被配置为获取所述基准位置在车舱坐标系下的第三位置信息和所述基准位置下所述方向盘管柱的旋转角信息;
第二确定部分,被配置为根据所述第一变换矩阵、所述第三位置信息和所述旋转角信息,确定所述车载相机的当前位置相对于所述车舱坐标系的第二变换矩阵。
第三方面,本公开实施例提供一种计算机设备,包括:处理器;用于存储处理器可执行指令的存储器;
其中,所述处理器被配置为执行第一方面中所述的方法。
第四方面,本公开实施例提供一种计算机可读存储介质,其上存储有计算机程序,该计算机程序被处理器执行时实现第一方面中所述的方法。
第五方面,本公开实施例提供一种计算机程序产品,所述计算机程序产品包括计算机程序或指令,在所述计算机程序或指令在计算机上运行的情况下,使得所述计算机执行第一方面中所述的方法。
本公开的实施例提供的技术方案可以包括以下有益效果:
本公开将车载相机安装于方向盘管柱上,并预设基准位置作为中间项,通过两步走的方式,先将车载相机的当前位置映射到基准位置,再基于基准位置在车舱坐标系下的第三位置信息和基准位置下方向盘管柱的旋转角信息,完成车载相机的当前位置相对于车舱坐标系的转换,从而使得能基于本公开位置可变的车载相机的标定,提升如DMS***中对驾驶员的状态评估的准确性。
应当理解的是,以上的一般描述和后文的细节描述仅是示例性和解释性的, 并不能限制本公开。
附图说明
为了更清楚地说明本公开实施例的技术方案,下面将对本公开实施例中所需要使用的附图进行说明。
此处的附图被并入说明书中并构成本说明书的一部分,这些附图示出了符合本公开的实施例,并与说明书一起用于说明本公开的技术方案。
图1为本公开实施例示出的一种车载相机的标定方法流程图;
图2为本公开实施例中一种方向盘管柱的机械信息示例图;
图3和图4为本公开实施例中方向盘管柱的旋转中心点在车舱坐标系中的三维数据示例图;
图5和图6为本公开实施例中基准位置在车舱坐标系中的三维数据示例图;
图7为本公开实施例中一种车载相机的标定方法原理图;
图8为本公开实施例提供的一种相机标定装置示例图;
图9为本公开实施例中计算机设备的一种硬件实体示意图。
具体实施方式
以下结合说明书附图及具体实施例对本公开的技术方案做进一步的详细阐述。
本公开实施例提供一种相机标定方法,其执行主体可以是相机标定装置,该相机标定装置可以是服务器、笔记本电脑等计算机设备,还可以是车载设备。其中,车载设备可以是车舱内的车机、或车内可用于执行图像等数据处理操作的设备主机等,本公开实施例对此不作限定。
图1为本公开实施例示出的一种车载相机的标定方法流程图,如图1所示,车载相机的标定方法包括以下步骤:
S11、获取车载相机的当前位置相对于基准位置的第一位置信息;其中,所述车载相机设置于方向盘管柱上;
S12、基于所述车载相机的第一位置信息,确定所述当前位置相对于所述基准位置的第一变换矩阵;
S13、获取所述基准位置在车舱坐标系下的第三位置信息和所述基准位置下所述方向盘管柱的旋转角信息;
S14、根据所述第一变换矩阵、所述第三位置信息和所述旋转角信息,确定所述车载相机的当前位置相对于所述车舱坐标系的第二变换矩阵。
在本公开实施例中,车辆上的方向盘经人为操控可改变方向盘的高度和旋转角度,方向盘的高度和旋转角度的改变也基于方向盘管柱的长度和旋转角度可变。例如,基于方向盘管柱的长度变化而使得方向盘在管柱的长度方向上的位置变化,基于方向盘管柱的旋转角度变化而使得方向盘相对于管柱的旋转中心点可旋转。当车载相机位于方向盘管柱上时,车载相机的位置会基于方向盘管柱的位置变化而变化。
本公开实施例中,会预设一个车载相机的基准位置,该基准位置可以是指方向盘管柱在预设长度以及预设旋转角度下的位置,其中,预设长度可以是车辆出厂时的管柱的长度,预设旋转角度可以是方向盘在指定位置时的管柱角度,例如与车舱底部平面夹角为45度。
在步骤S11中,相机标定装置会获取车载相机在当前位置相对于基准位置的第一位置信息,该当前位置可以是基于车辆在行驶过程中方向盘管柱的位置而确定的位置。
在步骤S12中,相机标定装置会基于车载相机的第一位置信息,确定当前位置相对于基准位置的第一变换矩阵。假设当前位置相对于基准位置的第一变换矩阵为M,则将当前位置与M相乘即可得到基准位置,也即第一变换矩阵是当前位置到基准位置的映射矩阵。
本公开实施例中,由于车载相机位置可变,在车载相机位置变化后需标定其在车舱坐标系下的位置,以基于车载相机采集的车内图像对车内情况进行分析。车载相机在当前位置时方向盘管柱的长度可能与车载相机在基准位置时方向盘管柱的长度不同,且车载相机在当前位置时方向盘管柱的旋转角度可能与 车载相机在基准位置时方向盘管柱的旋转角度不同,因此可以理解的是,第一变换矩阵中包括当前位置相对于基准位置的平移和旋转信息。
在步骤S13中,相机标定装置会获取基准位置在车舱坐标系下的第三位置信息和基准位置下方向盘管柱的旋转角信息。其中,车舱坐标系是以车舱内某一固定位置为车舱原点而预先建立的车舱坐标系,车舱原点例如是车舱内放置眼镜盒的地方,或者车载中控显示屏中心位置等,本公开实施例不做限制。
本公开中,基准位置在车舱坐标系下的第三位置信息属于机械设计信息,对于不同车型,车舱坐标系下基准位置的位置信息(即第三位置信息)、以及基准位置下方向盘管柱的旋转角信息可能不同。本公开的相机标定装置可直接读取对应车型的第三位置信息以及基准位置下方向盘管柱的旋转角信息。
在步骤S14中,相机标定装置根据第一变换矩阵、第三位置信息以及旋转角信息,即可确定车载相机的当前位置相对于车舱坐标系的第二变换矩阵。本公开实施例中,假设车载相机的当前位置相对于车舱坐标系的第二矩阵为E,则将车载相机的当前位置与E相乘即可得到车载相机在车舱坐标系中的位置,也即第二变换矩阵是车载相机的当前位置向车舱坐标系的映射矩阵。
若基于本公开实施例的车载相机对驾驶员的状态进行检测,例如,检测驾驶员的视线区域,则可先对车载相机在当前位置采集的驾驶员的人脸图像中双眼瞳孔的位置进行定位,然后根据车载相机的内参数以及本公开确定的标定矩阵,确定左眼和右眼的瞳孔中心在车舱坐标系的位置,从而检测驾驶员的视线方向,判断视线方向在车舱内的落点,获得驾驶员的视线区域。再例如,对于驾驶员点头、手势、眨眼等动作的检测,需要依赖固定坐标系对获取的视频进行分析来检测相应动作。以点头的动作为例,通过检测到视频的图像帧中人头的位置后,乘以相机的内参矩阵,获得人头在当前位置的车载相机的相机坐标系下的位置,然后再乘以本公开的第二变换矩阵,就得到人头在车舱中的位置,基于连续多帧图像确定的人头在车舱中的位置即可确定驾驶员是否存在点头的动作。若驾驶员存在点头动作,可判断驾驶员疲劳驾驶等,可能需要输出提示信息以使驾驶员安全驾驶。
相关技术中,车载相机的位置是固定的,例如安装在“A”柱,“A”柱是指挡风玻璃和左右前车门的连接柱。安装于“A”柱上的相机拍摄的多为驾驶员的侧面,因而在DMS***中基于获取的驾驶员侧面图像,分析驾驶员的面部表情或对驾驶员的行为进行分析时容易发生错误,导致对驾驶员的状态评估准确性差,且位于“A”柱上的车载相机也会影响车内美观。
相对的,本公开将车载相机安装于方向盘管柱上,对车内人员的干扰较小,并且通过预设基准位置作为中介,通过两步走的方式,先将车载相机的当前位置映射到基准位置,再基于基准位置在车舱坐标系下的第三位置信息和基准位置下方向盘管柱的旋转角信息,完成车载相机的当前位置相对于车舱坐标系的转换,从而使得能基于本公开位置可变的车载相机的标定,提升如DMS***中对驾驶员的状态评估的准确性。
在一些实施例中,所述获取车载相机在当前位置相对于基准位置的第一位置信息,包括:
读取所述方向盘管柱的机械信号,得到所述方向盘管柱的伸缩长度数据和偏转角数据,将所述方向盘管柱的伸缩长度数据和偏转角数据作为所述第一位置信息。
本公开实施例中,方向盘管柱的伸缩长度数据即车载相机在当前位置对应的方向盘管柱的长度,相对于车载相机在基准位置对应的方向盘管柱的预设长度的长度变化数据;方向盘管柱的偏转角数据即车载相机在当前位置对应的方向盘管柱的角度,相对于车载相机在基准位置对应的方向盘管柱的预设旋转角度的角度变化数据。
本公开的相机标定装置在基于方向盘管柱的机械信号来获取第一位置信息时,在一些实施例中,机械信号可包括方向盘管柱的长度以及旋转角度,相机标定装置可基于车载相机的当前位置对应的方向盘管柱的长度以及车载相机的基准位置对应的方向盘管柱的长度,计算获得方向盘管柱的伸缩长度数据。同理,基于车载相机的当前位置对应的方向盘管柱的旋转角度以及车载相机的基准位置对应的方向盘管柱的旋转角度,计算获得方向盘管柱的偏转角数据。
本公开实施例中,方向盘管柱的机械信号可基于车内已有的用于控制车辆行驶方向的定位装置获得。例如,车辆行驶方向的定位装置基于方向盘管柱的长度可以确定车辆转向的半径,基于方向盘管柱的旋转角度可以控制车辆实际的转向角度,车辆的控制***基于车辆转向的半径以及转向角度即可共同确定车辆的行驶方向。
在另一些实施例中,机械信号还可直接包括方向盘管柱的伸缩长度数据和偏转角数据。
需要说明的是,本公开实施例中,方向盘管柱有旋转中心点,方向盘管柱的伸缩长度和偏转角可以是相对旋转中心点来确定的。
图2为本公开实施例中一种方向盘管柱的机械信息示例图,如图2所示,21标识的为方向盘管柱,O点为旋转中心点,22标识的为位于基准位置的车载相机,23标识的为位于当前位置的车载相机。本公开中,方向盘管柱的伸缩长度数据即为图2中所示的ΔL,方向盘管柱的偏转角数据即为图2中所示的Δθ。
可以理解的是,在该实施例中,由于方向盘管柱的机械信号例如可基于车内已有的用于控制车辆行驶方向的定位装置获得,而无需额外进行复杂计算获得,因而利于本公开车载相机的标定效率。
在一些实施例中,所述基于所述车载相机第一位置信息,确定所述当前位置相对于所述基准位置的第一变换矩阵,包括:
基于所述伸缩长度数据、所述偏转角数据以及预设角度进行刚体变换,得到所述第一变换矩阵;其中,所述预设角度为所述车载相机的镜头面相对于所述方向盘管柱轴线的角度。
在本公开实施例中,刚体变换包括平移和旋转的变换。在该实施例中,考虑到车载相机在安装时,镜头面可能并非与方向盘管柱轴线平行,即车载相机相对方向盘管柱轴线可能有一定的仰角或俯角,因此在进行刚体变换时,还可引入车载相机的镜头面相对方向盘管柱轴线的角度,以更加准确地确定第一变换矩阵,以适应安装方式不同的车载相机的标定。
可以理解的是,在该实施例中,结合车载相机的镜头面相对管柱轴线的预 设角度,由于预设角度是在车载相机安装时就确定好的,无需实时计算获得,因而基于伸缩长度数据、偏转角数据以及预设角度进行刚体变换确定第一变换矩阵的方式,在不影响效率的基础上还能适用于不同的车载相机的安装方式,可以提升本公开标定方案的普适性。
在一些实施例中,所述方法还包括:
获取所述车载相机在当前位置采集的第一图像,以及所述车载相机在所述基准位置采集的第二图像;
在基于所述车载相机第一位置信息,确定所述当前位置相对于所述基准位置的第一变换矩阵之后,对所述第一图像和所述第二图像进行特征元素识别和匹配;
根据所述第一图像和所述第二图像中的特征元素的匹配结果,确定所述第一图像与所述第二图像的映射关系矩阵;
基于所述映射关系矩阵更新所述第一变换矩阵。
在该实施例中,相机标定装置获取车载相机在当前位置采集的第一图像,以及车载相机在基准位置采集的第二图像后,对第一图像和第二图像进行特征元素识别和匹配。其中,特征元素可以是图像中特定类型的对象或特征点,例如“B”柱、车顶棚边界点等等,其中“B”柱也称为中央柱,位于车前门和后门之间。例如可通过神经网络模型提取各图像中的特征,并对第一图像中的特征和第二图像中的特征进行匹配,随后基于匹配度大于预设匹配阈值的特征对应在第一图像和第二图像的位置,确定第一图像和第二图像的映射关系矩阵。
需要说明的是,本公开在确定第一图像和第二图像的映射关系矩阵时,可先预设一个变换矩阵,例如一个3*3的零矩阵,利用该预设变换矩阵对匹配度大于预设匹配阈值的特征对应在第一图像中的位置(第一位置)做映射,并以匹配度大于预设匹配阈值的特征对应在第二图像中的位置(第二位置)为映射目标,不断调整预设变换矩阵中的元素值,直至预设变换矩阵与第一位置的乘积与第二位置相同,此时得到的即为映射关系矩阵。
在基于第一图像和第二图像中的特征元素的匹配结果,确定第一图像与第 二图像的映射关系矩阵后,即可基于映射关系矩阵更新第一变换矩阵。本公开在基于映射关系矩阵更新第一变换矩阵时,例如,可将映射关系矩阵和第一变换矩阵进行加权后相加得到的矩阵确定为更新后的第一变换矩阵。其中,权重的设置可根据经验预先设置,映射关系矩阵和第一变换矩阵各权重之和为1,权重越大表征对应的矩阵的重要度越高。
可以理解的是,基于视觉的图像信息确定映射关系矩阵后更新第一变换矩阵,由于第一变换矩阵是基于机械信号获得的,而映射关系矩阵是基于视觉信号获得的,因而在该实施例中,能通过视觉信号,补偿机械信号中例如由于组装造成的误差,从而实现更高精度的第一变换矩阵的确定,从而提升车载相机的标定准确度。
在一些实施例中,所述获取车载相机的当前位置相对于基准位置的第一位置信息,包括:
分别对所述第一图像和所述第二图像进行目标检测,确定目标对象在所述第一图像中的位置和在所述第二图像中的位置;
确定所述第一位置信息为所述目标对象在所述第一图像中的位置相对于在所述第二图像中的位置的偏差;
所述基于所述车载相机第一位置信息,确定所述当前位置相对于所述基准位置的第一变换矩阵,包括:
基于所述偏差确定所述第一变换矩阵。
在该实施例中,相机标定装置获取的车载相机在当前位置采集的第一图像,以及车载相机在基准位置采集的第二图像,可以是包括车舱内位置固定的同一目标对象的图像。示例性的,车舱内位置固定的同一目标对象可以是车舱内的天窗框或“B”柱等。
本公开分别对第一图像和第二图像进行目标检测,确定目标对象在第一图像中的位置和在第二图像中的位置后,即可确定目标对象在第一图像和第二图像中的位置的偏差,并将该位置的偏差作为第一位置信息。
需要说明的是,在该实施例中,位置的偏差可能是由于车载相机的当前位 置相对于基准位置的平移和旋转造成的,因而位置的偏差中也包括平移信息和旋转信息,对应的,基于位置的偏差确定的第一变换矩阵也包括平移信息和旋转信息。
本公开实施例在基于位置的偏差确定第一变换矩阵时,例如,可基于位置的偏差乘以相机的内参矩阵确定当前位置相对于基准位置的第一变换矩阵。
可以理解的是,在该实施例中,基于视觉的图像信息中目标对象的检测,并基于检测到的同一目标对象在不同图像中的位置偏差确定第一变换矩阵,可以实现利用机器视觉方式确定第一变换矩阵,方案简单有效。
在一些实施例中,所述获取所述基准位置在车舱坐标系下的第三位置信息,包括:
获取所述方向盘管柱的旋转中心点在所述车舱坐标系下的第四位置信息;
获取所述基准位置相对于所述方向盘管柱的旋转中心点的第五位置信息;
根据所述第四位置信息和所述第五位置信息确定所述基准位置在车舱坐标系下的第三位置信息。
图3和图4为本公开实施例中方向盘管柱的旋转中心点在车舱坐标系中的三维数据示例图,如图3和图4所示,31为车舱坐标原点,32为方向盘管柱的旋转中心点,方向盘管柱的旋转中心点32相对车舱原点31在高度方向上的距离Z1,在车前轴方向上的距离Y1,以及在车前进方向上距离X1即为第四位置信息。其中,车前轴方向可以是指车辆正向行驶时两前轮间的方向;高度方向可以是指与车前轴方向垂直的方向,在高度方向上可测量出车顶距离水平地面的高度;车前进方向可以是车辆正向行驶的方向,在车前进方向上可测量出车辆内前排座驾与后排座驾之间的距离。
此外,图2中基准位置的车载相机22的中心点相对于方向盘管柱的旋转中心点O点的位置信息,即为基准位置相对于方向盘管柱的旋转中心点的第五位置信息。本公开基于第四位置信息和第五位置信息,即可确定基准位置在车舱坐标系下的第三位置信息。
在该实施例中,以方向盘管柱的旋转中心点作中间转化来确定基准位置在 车舱坐标系下的第三位置信息。在另一些实施例中,由于基准位置也是事先确定好的位置,因而可以直接根据机械设计信息中基准位置与车舱坐标系的车舱原点之间的位置关系,确定第三位置信息。
图5和图6为本公开实施例中基准位置在车舱坐标系下的三维数据示例图,如图5和图6所示,31为车舱坐标原点,33为基准位置,基准位置33相对车舱原点31在高度方向上的距离Z2,在车前轴方向上的距离Y2,以及在车前进方向上距离X2即为第三位置信息。
可以理解的是,在该实施例中,由于上述第四位置信息、第五位置信息均为机械设计信息,无需额外进行复杂计算获得,因而利于本公开车载相机的标定效率。
在一些实施例中,所述根据所述第一变换矩阵、所述第三位置信息以及所述旋转角信息,确定所述车载相机的当前位置相对于所述车舱坐标系的第二变换矩阵,包括:
根据所述第三位置信息和所述旋转角信息,确定所述基准位置相对于所述车舱坐标系的平移矩阵和旋转矩阵;
将所述平移矩阵和所述旋转矩阵组合后与所述第一变换矩阵相乘,得到所述第二变换矩阵。
本公开实施例中,可根据第三位置信息确定基准位置相对于车舱坐标系的平移矩阵,示例性的,平移矩阵即为由X2、Y2和Z2组成的矩阵。此外,根据第三位置信息和旋转角信息确定旋转矩阵时,可进行三维空间建模,确定基准位置与车舱坐标系的车舱原点的连线,分别与车舱坐标系各坐标平面的夹角,得到该连线分别绕车舱坐标系各轴的主动旋转矩阵,基于连线绕各轴的主动旋转矩阵做乘积,从而得到本公开的旋转矩阵。
如前所述的第一变换矩阵包括当前位置相对于基准位置的平移和旋转信息,本公开在获取到基准位置相对于车舱坐标系的平移矩阵以及旋转矩阵后,即可将平移矩阵与旋转矩阵组合,例如,平移矩阵是B,旋转矩阵是D,则可组合后获得基准位置相对于车舱坐标系的包括平移和旋转信息的矩阵[B,D],将[B, D]和当前位置相对于基准位置的第一旋转矩阵M相乘后,即可得到第二变换矩阵,例如该第二变换矩阵为E,E为[B,D]*M。
图7为本公开实施例中一种车载相机的标定方法原理图,如图7所示,本公开的相机标定方法包括两个步骤,第一步是将71标识的车载相机实时位置向72标识的车载相机基准位置做映射,在映射过程中,可利用方向盘管柱机械伸缩以及旋转信号,还可利用车载相机拍摄的车舱内部照片。其中,方向盘管柱机械伸缩信号包括本公开提及的方向盘管柱的伸缩长度数据,旋转信号包括本公开提及的偏转角数据;车载相机拍摄的车舱内部照片,包括本公开提及的的第一图像和第二图像,通过上述映射即获得本公开的第一变换矩阵。第二步是将72标识的车载相机基准位置向73标识的乘用车车舱空间坐标系做映射,在映射过程中可利用车舱机械设计数模,例如可利用本公开中提及的方向盘管柱的旋转中心点在车舱坐标系下的第四位置信息,以及基准位置相对于方向盘管柱的旋转中心点的第五位置信息,确定基准位置在车舱坐标系下的第三位置信息,并基于基准位置下的旋转角信息,确定本公开的平移矩阵和旋转矩阵。
基于上述两步获得的第一变换矩阵、平移矩阵以及旋转矩阵,即可得到本公开实施例中车载相机的实时位置相对于车舱空间坐标系的第二变换矩阵。需要说明的是,由于矩阵变换是可逆的,因此若事先获得了车载相机相对于车舱空间坐标系的位置,即可基于该第二变换矩阵,获得车载相机的实时位置,也即方向盘管柱的实时位置。
图8示出了本公开实施例提供的一种相机标定装置示例图,由图8可知,相机标定装置800包括:
第一获取部分801,被配置为获取车载相机的当前位置相对于基准位置的第一位置信息;其中,所述车载相机设置于方向盘管柱上;
第一确定部分802,被配置为基于所述车载相机的第一位置信息,确定所述当前位置相对于所述基准位置的第一变换矩阵;
第二获取部分803,被配置为获取所述基准位置在车舱坐标系下的第三位置信息和所述基准位置下所述方向盘管柱的旋转角信息;
第二确定部分804,被配置为根据所述第一变换矩阵、所述第三位置信息和所述旋转角信息,确定所述车载相机的当前位置相对于所述车舱坐标系的第二变换矩阵。
在一些实施例中,所述第一获取部分801,被配置为读取所述方向盘管柱的机械信号,得到所述方向盘管柱的伸缩长度数据和偏转角数据,将所述方向盘管柱的伸缩长度数据和偏转角数据作为所述第一位置信息。
在一些实施例中,所述第一确定部分802,被配置为基于所述伸缩长度数据、所述偏转角数据以及预设角度进行刚体变换,得到所述第一变换矩阵;其中,所述预设角度为所述车载相机的镜头面相对于所述方向盘管柱轴线的角度。
在一些实施例中,所述装置还包括:
第三获取部分,被配置为获取所述车载相机在当前位置采集的第一图像,以及所述车载相机在所述基准位置采集的第二图像;
第三确定部分,被配置为在基于所述车载相机第一位置信息,确定所述当前位置相对于所述基准位置的第一变换矩阵之后,对所述第一图像和所述第二图像进行特征元素识别和匹配;
第四确定部分,被配置为根据所述第一图像和所述第二图像中的特征元素的匹配结果,确定所述第一图像与所述第二图像的映射关系矩阵;
更新部分,被配置为基于所述映射关系矩阵更新所述第一变换矩阵。
在一些实施例中,所述第一获取部分801,被配置为分别对所述第一图像和所述第二图像进行目标检测,确定目标对象在所述第一图像中的位置和在所述第二图像中的位置;确定所述第一位置信息为所述目标对象在所述第一图像中的位置相对于在所述第二图像中的位置的偏差;
所述第一确定部分802,被配置为基于所述偏差确定所述第一变换矩阵。
在一些实施例中,所述第二获取部分802,被配置为获取所述方向盘管柱的旋转中心点在所述车舱坐标系下的第四位置信息;获取所述基准位置相对于所述方向盘管柱的旋转中心点的第五位置信息;根据所述第四位置信息和所述第五位置信息确定所述基准位置在车舱坐标系下的第三位置信息。
在一些实施例中,所述第二确定部分804,被配置为根据所述第三位置信息和所述旋转角信息,确定所述基准位置相对于所述车舱坐标系的平移矩阵和旋转矩阵;将所述平移矩阵和所述旋转矩阵组合后与所述第一变换矩阵相乘,得到所述第二变换矩阵。
以上装置实施例的描述,与上述方法实施例的描述是类似的,具有与方法实施例相似的有益效果。对于本公开装置实施例中未披露的技术细节,请参照本公开方法实施例的描述而理解。
需要说明的是,本公开实施例中,如果以软件功能模块的形式实现上述的车载相机的标定方法,并作为独立的产品销售或使用时,也可以存储在一个计算机可读取存储介质中。基于这样的理解,本公开实施例的技术方案本质上或者说对相关技术做出贡献的部分可以以软件产品的形式体现出来,该软件产品存储在一个存储介质中,包括若干指令用以使得一台计算机设备(可以是个人计算机、服务器、或者网络设备等)执行本公开各个实施例所述方法的全部或部分。而前述的存储介质包括:U盘、移动硬盘、只读存储器(Read Only Memory,ROM)、磁碟或者光盘等各种可以存储程序代码的介质。这样,本公开实施例不限制于任何特定的硬件和软件结合。
对应地,本公开实施例提供一种计算机设备,包括存储器和处理器,所述存储器存储有可在处理器上运行的计算机程序,所述处理器执行所述程序时实现上述方法中的步骤。
对应地,本公开实施例提供一种计算机可读存储介质,其上存储有计算机程序,该计算机程序被处理器执行时实现上述方法中的步骤。所述计算机可读存储介质可以是保持和存储由指令执行设备使用的指令的有形设备,可以是易失性存储介质或非易失性存储介质。计算机可读存储介质例如可以是但不限于:电存储设备、磁存储设备、光存储设备、电磁存储设备、半导体存储设备或者上述的任意合适的组合。计算机可读存储介质的更具体的例子(非穷举的列表)包括:便携式计算机盘、硬盘、随机存取存储器(Random Access Memory,RAM)、只读存储器(Read Only Memory,ROM)、可擦式可编程只读存储器(Erasable  Programmable Read Only Memory,EPROM或闪存)、静态随机存取存储器(Static Random-Access Memory,SRAM)、便携式压缩盘只读存储器(Compact Disk Read Only Memory,CD-ROM)、数字多功能盘(Digital versatile Disc,DVD)、记忆棒、软盘、机械编码设备、例如其上存储有指令的打孔卡或凹槽内凸起结构、以及上述的任意合适的组合。这里所使用的计算机可读存储介质不被解释为瞬时信号本身,诸如无线电波或者其他自由传播的电磁波、通过波导或其他传输媒介传播的电磁波(例如,通过光纤电缆的光脉冲)、或者通过电线传输的电信号。
对应地,本公开实施例提供一种计算机程序产品,所述计算机程序产品包括存储了计算机程序的非瞬时性计算机可读存储介质,所述计算机程序被计算机读取并执行时,实现上述方法中的部分或全部步骤。该计算机程序产品可以具体通过硬件、软件或其结合的方式实现。在一个可选实施例中,所述计算机程序产品具体体现为计算机存储介质,在另一个可选实施例中,计算机程序产品具体体现为软件产品,例如软件开发包(Software Development Kit,SDK)等等。
这里需要指出的是:以上存储介质、计算机程序产品和设备实施例的描述,与上述方法实施例的描述是类似的,具有与方法实施例相似的有益效果。对于本公开存储介质、计算机程序产品和设备实施例中未披露的技术细节,请参照本公开方法实施例的描述而理解。
在本公开实施例以及其他的实施例中,“部分”可以是部分电路、部分处理器、部分程序或软件等等,当然也可以是单元,还可以是模块,也可以是非模块化的。
需要说明的是,图9为本公开实施例中计算机设备的一种硬件实体示意图,如图9所示,该计算机设备900的硬件实体包括:处理器901、通信接口902和存储器903,其中:
处理器901通常控制计算机设备900的总体操作。
通信接口902可以使计算机设备通过网络与其他终端或服务器通信。
存储器903配置为存储由处理器901可执行的指令和应用,还可以缓存待处理器901以及计算机设备900中各模块待处理或已经处理的数据(例如,图像数据、音频数据、语音通信数据和视频通信数据),可以通过闪存(FLASH)或随机访问存储器(Random Access Memory,RAM)实现。处理器901、通信接口902和存储器903之间可以通过总线904进行数据传输。
应理解,说明书通篇中提到的“一个实施例”或“一实施例”意味着与实施例有关的特定特征、结构或特性包括在本公开的至少一个实施例中。因此,在整个说明书各处出现的“在一个实施例中”或“在一实施例中”未必一定指相同的实施例。此外,这些特定的特征、结构或特性可以任意适合的方式结合在一个或多个实施例中。应理解,在本公开的各种实施例中,上述各过程的序号的大小并不意味着执行顺序的先后,各过程的执行顺序应以其功能和内在逻辑确定,而不应对本公开实施例的实施过程构成任何限定。上述本公开实施例序号仅仅为了描述,不代表实施例的优劣。
需要说明的是,在本文中,术语“包括”、“包含”或者其任何其他变体意在涵盖非排他性的包含,从而使得包括一系列要素的过程、方法、物品或者装置不仅包括那些要素,而且还包括没有明确列出的其他要素,或者是还包括为这种过程、方法、物品或者装置所固有的要素。在没有更多限制的情况下,由语句“包括一个……”限定的要素,并不排除在包括该要素的过程、方法、物品或者装置中还存在另外的相同要素。
在本公开所提供的几个实施例中,应该理解到,所揭露的设备和方法,可以通过其它的方式实现。以上所描述的设备实施例仅仅是示意性的,例如,所述单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,如:多个单元或组件可以结合,或可以集成到另一个***,或一些特征可以忽略,或不执行。另外,所显示或讨论的各组成部分相互之间的耦合、或直接耦合、或通信连接可以是通过一些接口,设备或单元的间接耦合或通信连接,可以是电性的、机械的或其它形式的。
上述作为分离部件说明的单元可以是、或也可以不是物理上分开的,作为 单元显示的部件可以是、或也可以不是物理单元;既可以位于一个地方,也可以分布到多个网络单元上;可以根据实际的需要选择其中的部分或全部单元来实现本实施例方案的目的。
另外,在本公开各实施例中的各功能单元可以全部集成在一个处理单元中,也可以是各单元分别单独作为一个单元,也可以两个或两个以上单元集成在一个单元中;上述集成的单元既可以采用硬件的形式实现,也可以采用硬件加软件功能单元的形式实现。
本领域普通技术人员可以理解:实现上述方法实施例的全部或部分步骤可以通过程序指令相关的硬件来完成,前述的程序可以存储于计算机可读取存储介质中,该程序在执行时,执行包括上述方法实施例的步骤;而前述的存储介质包括:移动存储设备、ROM、磁碟或者光盘等各种可以存储程序代码的介质。
或者,本公开上述集成的单元如果以软件功能模块的形式实现并作为独立的产品销售或使用时,也可以存储在一个计算机可读取存储介质中。基于这样的理解,本公开的技术方案本质上或者说对相关技术做出贡献的部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质中,包括若干指令用以使得一台计算机设备(可以是个人计算机、服务器、或者网络设备等)执行本公开各个实施例所述方法的全部或部分。而前述的存储介质包括:移动存储设备、ROM、磁碟或者光盘等各种可以存储程序代码的介质。
以上所述,仅为本公开的实施方式,但本公开的保护范围并不局限于此,任何熟悉本技术领域的技术人员在本公开揭露的技术范围内,可轻易想到变化或替换,都应涵盖在本公开的保护范围之内。因此,本公开的保护范围应以所述权利要求的保护范围为准。

Claims (11)

  1. 一种车载相机的标定方法,所述方法包括:
    获取车载相机的当前位置相对于基准位置的第一位置信息;其中,所述车载相机设置于方向盘管柱上;
    基于所述车载相机的第一位置信息,确定所述当前位置相对于所述基准位置的第一变换矩阵;
    获取所述基准位置在车舱坐标系下的第三位置信息和所述基准位置下所述方向盘管柱的旋转角信息;
    根据所述第一变换矩阵、所述第三位置信息和所述旋转角信息,确定所述车载相机的当前位置相对于所述车舱坐标系的第二变换矩阵。
  2. 根据权利要求1所述的方法,其中,所述获取车载相机的当前位置相对于基准位置的第一位置信息,包括:
    读取所述方向盘管柱的机械信号,得到所述方向盘管柱的伸缩长度数据和偏转角数据,将所述方向盘管柱的伸缩长度数据和偏转角数据作为所述第一位置信息。
  3. 根据权利要求2所述的方法,其中,所述基于所述车载相机的第一位置信息,确定所述当前位置相对于所述基准位置的第一变换矩阵,包括:
    基于所述伸缩长度数据、所述偏转角数据以及预设角度进行刚体变换,得到所述第一变换矩阵;其中,所述预设角度为所述车载相机的镜头面相对于所述方向盘管柱轴线的角度。
  4. 根据权利要求3所述的方法,其中,所述方法还包括:
    获取所述车载相机在当前位置采集的第一图像,以及所述车载相机在所述基准位置采集的第二图像;
    在基于所述车载相机的第一位置信息,确定所述当前位置相对于所述基准位置的第一变换矩阵之后,对所述第一图像和所述第二图像进行特征元素识别和匹配;
    根据所述第一图像和所述第二图像中的特征元素的匹配结果,确定所述第 一图像与所述第二图像的映射关系矩阵;
    基于所述映射关系矩阵更新所述第一变换矩阵。
  5. 根据权利要求4所述的方法,其中,所述获取车载相机的当前位置相对于基准位置的第一位置信息,包括:
    分别对所述第一图像和所述第二图像进行目标检测,确定目标对象在所述第一图像中的位置和在所述第二图像中的位置;
    确定所述第一位置信息为所述目标对象在所述第一图像中的位置相对于在所述第二图像中的位置的偏差;
    所述基于所述车载相机的第一位置信息,确定所述当前位置相对于所述基准位置的第一变换矩阵,包括:
    基于所述偏差确定所述第一变换矩阵。
  6. 根据权利要求1至5任一项所述的方法,其中,所述获取所述基准位置在车舱坐标系下的第三位置信息,包括:
    获取所述方向盘管柱的旋转中心点在所述车舱坐标系下的第四位置信息;
    获取所述基准位置相对于所述方向盘管柱的旋转中心点的第五位置信息;
    根据所述第四位置信息和所述第五位置信息,确定所述基准位置在所述车舱坐标系下的第三位置信息。
  7. 根据权利要求1至6中任一项所述的方法,其中,所述根据所述第一变换矩阵、所述第三位置信息以及所述旋转角信息,确定所述车载相机的当前位置相对于所述车舱坐标系的第二变换矩阵,包括:
    根据所述第三位置信息和所述旋转角信息,确定所述基准位置相对于所述车舱坐标系的平移矩阵和旋转矩阵;
    将所述平移矩阵和所述旋转矩阵组合后与所述第一变换矩阵相乘,得到所述第二变换矩阵。
  8. 一种车载相机的标定装置,所述装置包括:
    第一获取部分,被配置为获取车载相机的当前位置相对于基准位置的第一位置信息;其中,所述车载相机设置于方向盘管柱上;
    第一确定部分,被配置为基于所述车载相机的第一位置信息,确定所述当前位置相对于所述基准位置的第一变换矩阵;
    第二获取部分,被配置为获取所述基准位置在车舱坐标系下的第三位置信息和所述基准位置下所述方向盘管柱的旋转角信息;
    第二确定部分,被配置为根据所述第一变换矩阵、所述第三位置信息和所述旋转角信息,确定所述车载相机的当前位置相对于所述车舱坐标系的第二变换矩阵。
  9. 一种计算机设备,包括:处理器;用于存储处理器可执行指令的存储器;
    其中,所述处理器被配置为执行如权利要求1至7中任一项所述的方法。
  10. 一种计算机可读存储介质,其上存储有计算机程序,所述计算机程序被处理器执行时实现权利要求1至7中任一项所述的方法。
  11. 一种计算机程序产品,所述计算机程序产品包括计算机程序或指令,在所述计算机程序或指令在计算机上运行的情况下,使得所述计算机执行如权利要求1至7中任一项所述的方法。
PCT/CN2023/090821 2022-05-31 2023-04-26 一种车载相机的标定方法及装置、计算机设备、存储介质和产品 WO2023231653A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210616046.0A CN114897996A (zh) 2022-05-31 2022-05-31 一种车载相机的标定方法及装置、计算机设备、存储介质
CN202210616046.0 2022-05-31

Publications (1)

Publication Number Publication Date
WO2023231653A1 true WO2023231653A1 (zh) 2023-12-07

Family

ID=82725226

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/090821 WO2023231653A1 (zh) 2022-05-31 2023-04-26 一种车载相机的标定方法及装置、计算机设备、存储介质和产品

Country Status (2)

Country Link
CN (1) CN114897996A (zh)
WO (1) WO2023231653A1 (zh)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114897996A (zh) * 2022-05-31 2022-08-12 上海商汤临港智能科技有限公司 一种车载相机的标定方法及装置、计算机设备、存储介质
CN117437288B (zh) * 2023-12-19 2024-05-03 先临三维科技股份有限公司 摄影测量方法、装置、设备及存储介质

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110128388A1 (en) * 2009-12-01 2011-06-02 Industrial Technology Research Institute Camera calibration system and coordinate data generation system and method thereof
CN112686958A (zh) * 2019-10-18 2021-04-20 上海商汤智能科技有限公司 标定方法、装置及电子设备
US20210312665A1 (en) * 2020-12-24 2021-10-07 Apollo Intelligent Connectivity (Beijing) Technology Co., Ltd. Image projection method, apparatus, device and storage medium
CN114332242A (zh) * 2021-12-29 2022-04-12 黑芝麻智能科技有限公司 相机视野外物体坐标系标定方法、装置、设备和存储介质
CN114897996A (zh) * 2022-05-31 2022-08-12 上海商汤临港智能科技有限公司 一种车载相机的标定方法及装置、计算机设备、存储介质

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110128388A1 (en) * 2009-12-01 2011-06-02 Industrial Technology Research Institute Camera calibration system and coordinate data generation system and method thereof
CN112686958A (zh) * 2019-10-18 2021-04-20 上海商汤智能科技有限公司 标定方法、装置及电子设备
US20210312665A1 (en) * 2020-12-24 2021-10-07 Apollo Intelligent Connectivity (Beijing) Technology Co., Ltd. Image projection method, apparatus, device and storage medium
CN114332242A (zh) * 2021-12-29 2022-04-12 黑芝麻智能科技有限公司 相机视野外物体坐标系标定方法、装置、设备和存储介质
CN114897996A (zh) * 2022-05-31 2022-08-12 上海商汤临港智能科技有限公司 一种车载相机的标定方法及装置、计算机设备、存储介质

Also Published As

Publication number Publication date
CN114897996A (zh) 2022-08-12

Similar Documents

Publication Publication Date Title
WO2023231653A1 (zh) 一种车载相机的标定方法及装置、计算机设备、存储介质和产品
EP3627180B1 (en) Sensor calibration method and device, computer device, medium, and vehicle
US10990099B2 (en) Motion planning methods and systems for autonomous vehicle
US11531110B2 (en) LiDAR localization using 3D CNN network for solution inference in autonomous driving vehicles
US11594011B2 (en) Deep learning-based feature extraction for LiDAR localization of autonomous driving vehicles
EP3714285B1 (en) Lidar localization using rnn and lstm for temporal smoothness in autonomous driving vehicles
WO2021217572A1 (zh) 车内用户定位方法、车载交互方法、车载装置及车辆
CN110341617B (zh) 眼球追踪方法、装置、车辆和存储介质
US11995536B2 (en) Learning device, estimating device, estimating system, learning method, estimating method, and storage medium to estimate a state of vehicle-occupant with respect to vehicle equipment
EP4339938A1 (en) Projection method and apparatus, and vehicle and ar-hud
WO2020133172A1 (zh) 图像处理方法、设备及计算机可读存储介质
CN110901656B (zh) 用于自动驾驶车辆控制的实验设计方法和***
CN110217189A (zh) 车辆驾驶环境调节的方法、***、设备以及介质
JP2022176081A (ja) 適応視標追跡機械学習モデル・エンジン
CN115525152A (zh) 图像处理方法及***、装置、电子设备和存储介质
CN116061819A (zh) 车载屏幕的机械臂控制方法、装置、设备和车辆
CN112158143B (zh) 一种车辆的控制方法、控制***及车辆
CN113362370B (zh) 目标对象的运动信息确定方法、装置、介质及终端
CN110827337A (zh) 确定车载相机的姿态的方法、装置和电子设备
KR20230031550A (ko) 카메라 자세 결정 및 그 방법을 수행하는 전자 장치
JP2022042386A (ja) 画像処理装置
WO2021243693A1 (zh) 驾驶员图像采集的方法和装置
KR102582056B1 (ko) 서라운드 뷰 제공 장치 및 이의 동작 방법
WO2022094787A1 (zh) 驾驶员数据处理***及采集驾驶员数据的方法
CN117149329A (zh) 显示屏干扰区域的处理方法、装置、设备及存储介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23814839

Country of ref document: EP

Kind code of ref document: A1