WO2021228058A1 - Imaging method, imaging device, optical imaging system, and vehicle - Google Patents

Imaging method, imaging device, optical imaging system, and vehicle

Info

Publication number
WO2021228058A1
Authority
WO
WIPO (PCT)
Prior art keywords: image, parameter, screen, modulation, parameters
Prior art date
Application number
PCT/CN2021/092925
Other languages
English (en)
French (fr)
Inventor
刘欣
赵晗
陈纾悦
Original Assignee
华为技术有限公司
Priority date
Filing date
Publication date
Application filed by 华为技术有限公司
Priority to EP21802918.9A (patent EP4142275A4)
Publication of WO2021228058A1
Priority to US17/986,483 (publication US20230085082A1)

Classifications

    • H04N23/617: Upgrading or updating of programs or applications for camera control
    • B60R11/04: Mounting of cameras operative during drive; arrangement of controls thereof relative to the vehicle
    • G02B27/0101: Head-up displays characterised by optical features
    • G06V10/42: Global feature extraction by analysis of the whole pattern, e.g. using frequency domain transformations or autocorrelation
    • G06V20/59: Context or environment of the image inside of a vehicle, e.g. relating to seat occupancy, driver state or inner lighting conditions
    • H04N23/74: Circuitry for compensating brightness variation in the scene by influencing the scene brightness using illuminating means
    • H04N23/80: Camera processing pipelines; components thereof
    • H04N23/81: Camera processing pipelines; components thereof for suppressing or minimising disturbance in the image signal generation
    • B60R2300/205: Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle, characterised by the type of display used, using a head-up display
    • B60R2300/30: Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle, characterised by the type of image processing
    • G02B2027/0138: Head-up displays characterised by optical features, comprising image capture systems, e.g. camera
    • G02B2027/014: Head-up displays characterised by optical features, comprising information/image processing systems
    • G02B2027/0196: Supplementary details, having transparent supporting structure for display mounting, e.g. to a window or a windshield
    • G06N3/0464: Convolutional networks [CNN, ConvNet]
    • G06N3/084: Backpropagation, e.g. using gradient descent

Definitions

  • This application relates to the field of image processing, and more specifically, to imaging methods, imaging devices, optical imaging systems, and vehicles.
  • In a typical optical imaging system, an image to be presented is processed in the electrical domain by a device such as an image processing unit to generate an electrical signal; the electrical signal is processed by a light modulator to generate a light beam carrying the image, which exits through one or more lenses, is reflected by a screen, and enters the human eye, so that the human eye can observe the image.
  • HUD: Heads-Up Display.
  • AR: Augmented Reality.
  • When the screen (for example, the above-mentioned windshield glass) and the optical imaging system are arranged off-axis, the image reflected by the screen into the human eye is distorted, which greatly limits the display quality.
  • To correct this, the prior art performs compensation processing on each image in the electrical domain. When the image resolution or the video frame rate is high, the delay of this compensation processing increases substantially, which increases the imaging latency; it also places high performance requirements on the image processing unit and the like, which increases the processing cost.
  • the present application provides an imaging method, an imaging device, an optical imaging system, and a vehicle, which can reduce imaging time delay and reduce processing costs.
  • In a first aspect, an imaging method is provided, applied to an optical imaging system that includes a spatial light modulator and at least one lens. The spatial light modulator is used to modulate the electrical signal of an image to generate the imaging beam of that image, and the imaging beam enters the human eye via the lens and a screen, where the screen is arranged off-axis relative to the optical imaging system. The method includes:
  • obtaining a first modulation parameter, where the first modulation parameter is determined according to a first distorted image and a first target image, the first distorted image is the image of a training image presented after the imaging processing of the optical imaging system and the screen, the first target image is the expected image of the training image, and the distortion of the first target image is within a preset range; and
  • controlling the spatial light modulator to modulate, based on the first modulation parameter, the electrical signal of a first image to be imaged.
  • In some implementations, the first distorted image is the image presented after the electrical signal of the training image is modulated based on the original modulation parameters and reflected by the screen.
  • the first target image is an image that is expected to be observed by human eyes when the light beam of the image passes through the screen for imaging.
  • In some implementations, the first target image is determined according to the first distorted image.
  • For example, the first target image is determined according to the degree of distortion in the first distorted image.
  • According to the solution of the present application, the first modulation parameter of the spatial light modulator is determined based on the distorted image and the expected image, and the electrical signal of the image to be presented is modulated according to the first modulation parameter. This reduces the distortion of the presented image while avoiding the large imaging latency caused by compensating each image in the electrical domain, and reduces the processing cost that such electrical-domain compensation would add.
  • the screen is arranged off-axis relative to the optical imaging system (for example, the at least one lens).
  • the solution according to the present application can be applied to a scene where the screen is configured off-axis relative to the optical imaging system.
  • the method is executed by a processor in an optical imaging system.
  • the method is performed by a spatial light modulator in an optical imaging system.
  • the method is executed by the control module in the spatial light modulator.
  • In some implementations, obtaining the first modulation parameter includes: controlling the spatial light modulator to modulate the electrical signal of the training image based on the original modulation parameter to obtain the first distorted image; adjusting the original modulation parameter so that the deviation between the first distorted image and the first target image is within a preset range; and determining the adjusted original modulation parameter as the first modulation parameter.
  • In some implementations, obtaining the first modulation parameter includes: sending the image of the first distorted image to a server; and obtaining the first modulation parameter from the server.
  • the method further includes sending the training image to the server.
  • In this way, the device burden of determining the first modulation parameter can be reduced, and the imaging delay caused by determining the first modulation parameter online can be reduced.
  • In some implementations, the method further includes: acquiring a first correspondence between K modulation parameters (including the first modulation parameter) and K image parameters, where the k-th modulation parameter is determined based on the distorted image and the target image of a training image having the k-th image parameter, the k-th modulation parameter corresponds to the k-th image parameter, K ≥ 2, and k ∈ [1, K]. Acquiring the first modulation parameter then includes: determining, according to the first correspondence, the modulation parameter corresponding to the image parameter of the first image as the first modulation parameter.
  • Any two of the K modulation parameters have different values, and any two of the K image parameters have different values.
  • Optionally, the method further includes: acquiring a first correspondence between a plurality of modulation parameters (including the first modulation parameter) and a plurality of image parameters, where each modulation parameter is determined based on the distorted image and the target image of a training image having the image parameter corresponding to that modulation parameter; and acquiring the first modulation parameter includes: determining the modulation parameter corresponding to the image parameter of the first image as the first modulation parameter.
  • For example, when the modulation parameters include Zernike coefficients, "different values of any two of the K modulation parameters" can be understood as different values of the Zernike coefficients.
  • When each modulation parameter is a parameter group including multiple parameters, "different values of any two of the K modulation parameters" can be understood as at least one parameter in the two parameter groups taking a different value.
  • For example, the value of the image size differs between any two of the K image parameters.
  • When each image parameter is a parameter group including multiple parameters, "different values of any two of the K image parameters" can be understood as at least one parameter in the two parameter groups taking a different value.
  • this application can flexibly respond to the imaging requirements of images with different image parameters, which further improves the practicability of this application.
  • the image parameters include at least one of the following parameters: image size, image color, image shape, and image resolution.
  • In some implementations, the method further includes: acquiring a second correspondence between M modulation parameters (including the first modulation parameter) and M position parameters, where a position parameter indicates the positional relationship between the human eye and the screen when the image is observed; the m-th modulation parameter is determined according to the distorted image and the target image of the training image under the m-th position parameter, the m-th modulation parameter corresponds to the m-th position parameter, any two of the M modulation parameters have different values, and any two of the M position parameters have different values. Acquiring the first modulation parameter then includes: determining, according to the second correspondence, the modulation parameter corresponding to the position parameter of the first image as the first modulation parameter.
  • Optionally, the method further includes: acquiring a second correspondence between a plurality of modulation parameters (including the first modulation parameter) and a plurality of position parameters, where a position parameter indicates the relative position between the human eye and the screen when the image is observed, and each modulation parameter is determined according to the distorted image and the target image of the training image under the position parameter corresponding to that modulation parameter; and acquiring the first modulation parameter includes: determining, according to the second correspondence, the modulation parameter corresponding to the position parameter of the first image as the first modulation parameter.
  • the present application can flexibly respond to the image requirements observed at different positions, and further improve the practicability of the present application.
  • Optionally, the position parameter includes at least one of the following parameters: the distance between the human eye and the point of incidence of the light beam on the screen, the position of the human eye's projection on the screen in the horizontal direction of the screen, and the position of the human eye's projection on the screen in the vertical direction of the screen.
  • For example, when the position parameter includes the distance between the human eye and the incident point of the light beam on the screen, the value of that distance differs between any two of the M position parameters.
  • When each position parameter is a parameter group including multiple parameters, "different values of any two of the M position parameters" can be understood as at least one parameter in the two parameter groups taking a different value.
  • In some implementations, the method further includes: acquiring a third correspondence between N modulation parameters (including the first modulation parameter) and N screen parameters, where the n-th modulation parameter is determined based on the distorted image and the target image of the training image imaged by a screen having the n-th screen parameter, the n-th modulation parameter corresponds to the n-th screen parameter, any two of the N modulation parameters have different values, and any two of the N screen parameters have different values. Acquiring the first modulation parameter then includes: determining, according to the third correspondence, the modulation parameter corresponding to the screen parameter of a first screen as the first modulation parameter, where the first screen is the screen used for imaging the first image.
  • Optionally, the method further includes: acquiring a third correspondence between a plurality of modulation parameters (including the first modulation parameter) and a plurality of screen parameters, where each modulation parameter is determined based on the distorted image and the target image of the training image imaged by a screen having the screen parameter corresponding to that modulation parameter; and acquiring the first modulation parameter includes: determining, according to the third correspondence, the modulation parameter corresponding to the screen parameter of the first screen as the first modulation parameter, where the first screen is the screen used for imaging the first image.
  • this application can flexibly use screens with different screen parameters in a scene, which further improves the practicability of this application.
  • the screen parameters include at least one of the following parameters: screen shape, screen thickness, screen material, screen refractive index, and screen color.
  • For example, when the screen parameters include the screen shape, the screen shape (or the index value corresponding to the screen shape) differs between any two of the N screen parameters.
  • When each screen parameter is a parameter group including multiple parameters, "different values of any two of the N screen parameters" can be understood as at least one parameter in the two parameter groups taking a different value.
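  • To make the three correspondences concrete, the following is a minimal sketch in Python of how such pre-calibrated tables could be stored and queried. All names and data structures here are illustrative assumptions for explanation only; the application does not prescribe any particular implementation.

```python
from dataclasses import dataclass, field
from typing import Dict, Optional, Tuple

# A modulation parameter "group", e.g. a vector of Zernike coefficients.
ModulationParams = Tuple[float, ...]

@dataclass
class CorrespondenceTables:
    # First correspondence: image parameter -> modulation parameter (K entries).
    by_image: Dict[Tuple, ModulationParams] = field(default_factory=dict)
    # Second correspondence: position parameter -> modulation parameter (M entries).
    by_position: Dict[Tuple, ModulationParams] = field(default_factory=dict)
    # Third correspondence: screen parameter -> modulation parameter (N entries).
    by_screen: Dict[Tuple, ModulationParams] = field(default_factory=dict)

    def lookup(self, image_key=None, position_key=None,
               screen_key=None) -> Optional[ModulationParams]:
        """Return the calibrated modulation parameter matching the current
        image / eye position / screen. Each table was filled offline by
        training against (distorted image, target image) pairs."""
        for table, key in ((self.by_image, image_key),
                           (self.by_position, position_key),
                           (self.by_screen, screen_key)):
            if key is not None and key in table:
                return table[key]
        return None

tables = CorrespondenceTables()
# Key: (width, height) as an image parameter; value: trained Zernike coefficients.
tables.by_image[(1920, 1080)] = (0.00, -0.12, 0.03, 0.41)
first_modulation_parameter = tables.lookup(image_key=(1920, 1080))
```

  • How the three keys are combined (priority, fallback, or one joint key) is a design choice the text leaves open; the sketch simply returns the first match.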
  • the optical imaging system is configured in a vehicle, and the screen includes a windshield glass of the vehicle.
  • Optionally, the optical imaging system is configured in any vehicle that has a component usable as a screen, such as a windshield, for example a train, an airplane, or a ship.
  • The screen may be any component with a reflective or refractive function, such as a car window.
  • the first modulation parameter includes Zernike coefficients.
  • In a second aspect, an imaging device is provided, including a processor coupled with a memory. The memory is used to store a computer program or instructions, and the processor is used to execute the computer program or instructions in the memory, so that the processor obtains a first modulation parameter, the first modulation parameter being determined according to a first distorted image and a first target image, where the first distorted image is the image presented after the light beam generated by the spatial light modulator modulating the training image passes through at least one lens and the screen, and the first target image is the expected image of the training image with distortion within a preset range; and so that the processor controls the spatial light modulator to modulate, based on the first modulation parameter, the electrical signal of a first image to be imaged.
  • In some implementations, the first distorted image is the image presented after the electrical signal of the training image is modulated based on the original modulation parameters and reflected by the screen.
  • According to the solution of the present application, the first modulation parameter of the spatial light modulator is determined based on the distorted image and the expected image, and the electrical signal of the image to be presented is modulated according to the first modulation parameter. This reduces the distortion of the presented image while avoiding the large imaging latency caused by compensating each image in the electrical domain, and reduces the processing cost that such electrical-domain compensation would add.
  • the screen is arranged off-axis relative to the optical imaging system (for example, the at least one lens).
  • the solution according to the present application can be applied to a scene where the screen is configured off-axis relative to the optical imaging system.
  • In some implementations, the processor and the spatial light modulator in the optical imaging system are configured independently of each other.
  • the processor is configured in a spatial light modulator.
  • Optionally, the processor is further configured to: control the spatial light modulator to modulate the electrical signal of the training image based on original modulation parameters to obtain the first distorted image; adjust the original modulation parameters so that the deviation between the first distorted image and the first target image is within a preset range; and determine the adjusted original modulation parameter as the first modulation parameter.
  • Optionally, the device further includes a transceiver, configured to send the training image and the image of the first distorted image to a server, and to acquire the first modulation parameter from the server.
  • In this way, the device burden of determining the first modulation parameter can be reduced, and the imaging delay caused by determining the first modulation parameter online can be reduced.
  • Optionally, the processor is further configured to: acquire a first correspondence between a plurality of modulation parameters (including the first modulation parameter) and a plurality of image parameters, where each modulation parameter is determined based on the distorted image and the target image of a training image having the image parameter corresponding to that modulation parameter; and determine, according to the first correspondence, the modulation parameter corresponding to the image parameter of the first image as the first modulation parameter.
  • In some implementations, a first correspondence between K modulation parameters (including the first modulation parameter) and K image parameters is acquired, where the k-th modulation parameter is determined based on the distorted image and the target image of the training image having the k-th image parameter, the k-th modulation parameter corresponds to the k-th image parameter, K ≥ 2, k ∈ [1, K], any two of the K modulation parameters have different values, and any two of the K image parameters have different values.
  • this application can flexibly respond to the imaging requirements of images with different image parameters, which further improves the practicability of this application.
  • the image parameters include at least one of the following parameters: image size, image color, image shape, and image resolution.
  • Optionally, the processor is further configured to acquire a second correspondence between a plurality of modulation parameters (including the first modulation parameter) and a plurality of position parameters, where a position parameter indicates the relative position between the human eye and the screen when the image is observed.
  • In some implementations, the processor is further configured to acquire a second correspondence between M modulation parameters (including the first modulation parameter) and M position parameters, where a position parameter indicates the positional relationship between the human eye and the screen; the m-th modulation parameter is determined according to the distorted image and the target image of the training image under the m-th position parameter, the m-th modulation parameter corresponds to the m-th position parameter, any two of the M modulation parameters have different values, and any two of the M position parameters have different values.
  • the present application can flexibly respond to the image requirements observed at different positions, and further improve the practicability of the present application.
  • the position parameter includes at least one of the following parameters: the distance between the human eye and the point of incidence of the light beam on the screen, the position of the human eye projected on the screen in the horizontal direction of the screen, and the human eye The position of the projection on the screen in the vertical direction of the screen.
  • Optionally, the processor is further configured to: acquire a third correspondence between a plurality of modulation parameters (including the first modulation parameter) and a plurality of screen parameters, where each modulation parameter is determined based on the distorted image and the target image of the training image imaged by a screen having the screen parameter corresponding to that modulation parameter; and determine, according to the third correspondence, the modulation parameter corresponding to the screen parameter of the first screen as the first modulation parameter, where the first screen is the screen used for imaging the first image.
  • In some implementations, a third correspondence between N modulation parameters (including the first modulation parameter) and N screen parameters is acquired, where the n-th modulation parameter is determined based on the distorted image and the target image of the training image imaged by a screen having the n-th screen parameter, the n-th modulation parameter corresponds to the n-th screen parameter, any two of the N modulation parameters have different values, and any two of the N screen parameters have different values.
  • this application can flexibly use screens with different screen parameters in a scene, which further improves the practicability of this application.
  • the screen parameters include at least one of the following parameters: screen shape, screen thickness, screen material, screen refractive index, and screen color.
  • the first modulation parameter includes Zernike coefficients.
  • In a third aspect, an optical imaging system is provided, including the processor of the second aspect and its various implementations, a spatial light modulator, and at least one lens.
  • In a fourth aspect, an optical imaging system is provided, including: a spatial light modulator, configured to modulate the electrical signal of a first image to be imaged based on a first modulation parameter to generate the imaging beam of the first image, where the first modulation parameter is determined according to a first distorted image and a first target image, the first distorted image is the image of the training image presented after the imaging processing of the spatial light modulator and the screen, the first target image is the expected image of the training image, and the distortion of the first target image is within a preset range; and at least one lens, configured to refract the imaging beam of the first image.
  • In some implementations, the spatial light modulator is specifically configured to modulate the electrical signal of the training image based on original modulation parameters to obtain the first distorted image; the optical imaging system further includes a camera device for capturing the first distorted image; and the spatial light modulator is further configured to obtain the first distorted image from the camera device, adjust the original modulation parameters so that the deviation between the first distorted image and the first target image is within a preset range, and determine the adjusted original modulation parameters as the first modulation parameter.
  • Optionally, the optical imaging system further includes: a camera device, configured to capture the image of the first distorted image; and a transceiver, configured to send the image of the first distorted image to a server and to receive the first modulation parameter from the server.
  • the transceiver is further configured to send the training image to the server.
  • In some implementations, the spatial light modulator is configured to: acquire a first correspondence between K modulation parameters (including the first modulation parameter) and K image parameters, where the k-th modulation parameter is determined based on the distorted image and the target image of the training image having the k-th image parameter, the k-th modulation parameter corresponds to the k-th image parameter, K ≥ 2, k ∈ [1, K], any two of the K modulation parameters have different values, and any two of the K image parameters have different values; and determine, according to the first correspondence, the modulation parameter corresponding to the image parameter of the first image as the first modulation parameter.
  • Optionally, the spatial light modulator is configured to: acquire a first correspondence between a plurality of modulation parameters (including the first modulation parameter) and a plurality of image parameters, where each modulation parameter is determined based on the distorted image and the target image of the training image having the image parameter corresponding to that modulation parameter; and determine, according to the first correspondence, the modulation parameter corresponding to the image parameter of the first image as the first modulation parameter.
  • the image parameters include at least one of the following parameters: image size, image color, image shape, and image resolution.
  • Optionally, the spatial light modulator is configured to acquire a second correspondence between a plurality of modulation parameters (including the first modulation parameter) and a plurality of position parameters, where a position parameter indicates the relative position between the human eye and the screen when the image is observed, and to determine the modulation parameter corresponding to the position parameter of the first image as the first modulation parameter.
  • In some implementations, the spatial light modulator is configured to acquire a second correspondence between M modulation parameters (including the first modulation parameter) and M position parameters, where a position parameter indicates the positional relationship between the human eye and the screen; the m-th modulation parameter is determined according to the distorted image and the target image of the training image under the m-th position parameter, the m-th modulation parameter corresponds to the m-th position parameter, any two of the M modulation parameters have different values, and any two of the M position parameters have different values; and to determine, according to the second correspondence, the modulation parameter corresponding to the position parameter of the first image as the first modulation parameter.
  • The position parameter includes at least one of the following: the distance between the human eye and the point of incidence of the light beam on the screen, the position of the human eye's projection on the screen in the horizontal direction of the screen, and the position of the human eye's projection on the screen in the vertical direction of the screen.
  • Optionally, the spatial light modulator is configured to: acquire a third correspondence between a plurality of modulation parameters (including the first modulation parameter) and a plurality of screen parameters, where each modulation parameter is determined based on the distorted image and the target image of the training image imaged by a screen having the screen parameter corresponding to that modulation parameter; and determine, according to the third correspondence, the modulation parameter corresponding to the screen parameter of the first screen as the first modulation parameter, where the first screen is the screen used for imaging the first image.
  • In some implementations, the spatial light modulator is configured to acquire a third correspondence between N modulation parameters (including the first modulation parameter) and N screen parameters, where the n-th modulation parameter is determined based on the distorted image and the target image of the training image imaged by a screen having the n-th screen parameter, the n-th modulation parameter corresponds to the n-th screen parameter, any two of the N modulation parameters have different values, and any two of the N screen parameters have different values; and to determine, according to the third correspondence, the modulation parameter corresponding to the screen parameter of the first screen as the first modulation parameter, where the first screen is the screen used for imaging the first image.
  • the screen parameters include at least one of the following parameters: screen shape, screen thickness, screen material, screen refractive index, and screen color.
  • the first modulation parameter includes Zernike coefficients.
  • A vehicle (for example, an automobile) is also provided, including the optical imaging system of the third aspect and its various implementations.
  • A vehicle (for example, an automobile) is also provided, including the optical imaging system of the fourth aspect and its various implementations.
  • A computer program product is also provided, including a computer program (also called code, or instructions) which, when executed, causes a computer to perform the method in the first aspect and any one of its possible implementations.
  • A computer-readable medium is also provided, storing a computer program (also called code, or instructions) which, when run on a computer, causes the computer to perform the method in the first aspect and any one of its possible implementations.
  • A chip system is also provided, including a memory and a processor; the memory is used to store a computer program, and the processor is used to call and run the computer program from the memory, so that a communication device installed with the chip system performs the method in the first aspect and any one of its possible implementations.
  • The chip system may include an input circuit or interface for receiving information or data, and an output circuit or interface for sending information or data.
  • Fig. 1 is a schematic diagram of an example of the imaging process of the AR HUD system.
  • Fig. 2 is a schematic diagram of the optical imaging system of the present application.
  • Fig. 3 is a schematic interaction diagram of an example of the imaging method of the present application.
  • Fig. 4 is a structural diagram of an example of the neural network device of the present application.
  • Fig. 5 is a schematic diagram of an example of the hierarchy of the neural network of the present application.
  • Fig. 6 is a schematic interaction diagram of another example of the imaging method of the present application.
  • FIG. 7 is a schematic diagram of an example of the imaging device of the present application.
  • The solution of the present application may be applied to an optical imaging system, which may include, but is not limited to, a head-up display (HUD) system or an AR HUD system.
  • A HUD can also be called a windshield instrument display or head-up display: important information is projected onto a holographic half-mirror on the windshield through the optical imaging system, so that the driver can see the important information clearly without lowering his head.
  • Augmented Reality (AR) technology ingeniously integrates virtual information with the real world. It uses technical means such as multimedia, three-dimensional modeling, real-time tracking and registration, intelligent interaction, and sensing to simulate computer-generated text, images, 3D models, music, video, and other virtual information and apply it to the real world, where the two kinds of information complement each other, thereby "augmenting" the real world.
  • AR HUD can be understood as the integration of AR technology and HUD technology, that is, reasonably and vividly displaying some driving information in the driver's line-of-sight area and combining it with actual traffic conditions.
  • Fig. 1 shows a schematic diagram of the imaging process of the present application.
  • the beam of the image to be imaged from the optical imaging system is reflected (or refracted) by the windshield glass (or windshield) and enters the human eye, so that the human eye can observe the image.
  • For example, when the human eye and the optical imaging system are located on the same side of the windshield glass, the observed image is a virtual image.
  • Otherwise, the observed image is a real image.
  • Because the windshield glass is arranged off-axis relative to the optical imaging system (or the lens of the optical imaging system), the image observed by the human eye will be distorted.
  • Fig. 2 shows a schematic diagram of an example of the optical imaging system of the present application.
  • As shown in Fig. 2, the optical imaging system 100 includes: a spatial light modulator 110, at least one lens 120, and a processor 130.
  • SLM: Spatial Light Modulator.
  • SLM can change the amplitude or intensity, phase, polarization state, and wavelength of the spatial light distribution under the control of an electric drive signal that changes with time or other signals, or convert incoherent light into coherent light.
  • An SLM can modulate a certain parameter of the light field through liquid crystal molecules, for example modulating the amplitude of the light field, modulating the phase through the refractive index, modulating the polarization state through rotation of the polarization plane, or realizing incoherent-to-coherent light conversion, so as to write information into the light wave and achieve light-wave modulation. It can conveniently load information into a one-dimensional or two-dimensional light field and, exploiting advantages of light such as wide bandwidth and multi-channel parallel processing, quickly process the loaded information.
  • An SLM can include multiple independent units arranged in a one-dimensional or two-dimensional array in space. Each unit can independently receive an optical or electrical control signal and change its own optical properties according to that signal, thereby modulating the light waves illuminating it.
  • In the optical imaging system 100, the spatial light modulator 110 may obtain the image data to be imaged (in other words, the electrical signal) from a processing unit in the electrical domain, for example a graphics processing unit (GPU) or a central processing unit (CPU), and modulate the data onto a light beam, thereby forming the light beam of the image.
  • The light beam emitted from the spatial light modulator 110 is then refracted by the lens 120.
  • some modulation parameters of the spatial light modulator 110 can be adjusted, so that the parameters of the image formed by the light beam emitted from the spatial light modulator 110, such as the shape and size of the image, can be adjusted.
  • The modulation parameters may include, but are not limited to, Zernike coefficients, that is, the coefficients of Zernike polynomials.
  • aberrations refer to imaging defects in the optical system.
  • Geometrical optics divides aberrations (geometric aberrations) into monochromatic aberrations and chromatic aberrations. Monochromatic aberrations include spherical aberration, coma, astigmatism, field curvature, and distortion; chromatic aberrations include longitudinal (positional) chromatic aberration and lateral chromatic aberration (chromatic aberration of magnification). In physical optics, aberration is described as wavefront aberration (wavefront error), that is, the deviation between the wavefront formed by a spherical wave emitted from a point source after passing through the optical system and the ideal spherical wavefront.
  • The wavefront aberration can be expressed by Zernike polynomials or by geometric aberrations such as spherical aberration and coma. Distortion can be understood as the result of different prismatic image shifts of the surrounding points after a square object passes through the optical system.
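  • For reference, the wavefront aberration can be written as a Zernike expansion over the unit pupil. This is the standard textbook form (Noll indexing), not a formula quoted from the application:

```latex
% Wavefront aberration W over the unit pupil, expanded in Zernike polynomials Z_j
% with coefficients c_j (the "Zernike coefficients" used as modulation parameters):
W(\rho,\theta) \;\approx\; \sum_{j=1}^{J} c_j \, Z_j(\rho,\theta), \qquad 0 \le \rho \le 1 .
% Two example terms in Noll indexing:
Z_4(\rho,\theta) = \sqrt{3}\,\bigl(2\rho^2 - 1\bigr) \quad (\text{defocus}), \qquad
Z_7(\rho,\theta) = \sqrt{8}\,\bigl(3\rho^3 - 2\rho\bigr)\sin\theta \quad (\text{vertical coma}).
```

  • Adjusting the coefficients c_j changes which aberration components are compensated, which is why Zernike coefficients can serve as modulation parameters.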
  • the optical main reflector is a component of the spatial optical remote sensor, and its mirror surface accuracy is one of the important factors affecting the resolution of the spatial optical remote sensor.
  • the mirror surface of the reflector will be deformed due to the action of the gravitational field. Therefore, it is necessary to analyze the deformation of the mirror surface when designing the optical reflector.
  • Mirror deformation includes rigid-body displacement and surface deformation. Rigid-body displacement causes the image of the optical system to tilt, shift off-axis, and defocus, while surface deformation affects the wavefront error of the optical system.
  • Rigid body displacement can be eliminated by adjusting the relative position between optical elements, while surface deformation cannot be eliminated. Therefore, the surface deformation in the mirror surface deformation can truly reflect the surface shape accuracy of the optical mirror.
  • In one approach, the surface shape data is obtained through finite element analysis, the deformed surface is fitted accurately with Zernike polynomials, and the rigid-body displacement component is separated out to obtain a surface-deformation contour map, from which the root-mean-square value and the maximum and minimum of the surface deformation are calculated.
  • modulation parameters may also include parameters of the spatial light modulator for controlling the amplitude, intensity, phase, polarization state, and wavelength of the light beam.
  • the distortion of the image observed by the human eye can be reduced by adjusting the above-mentioned modulation parameters.
  • the modulation parameter may be a trained parameter that can reduce the distortion of the image observed by the human eye.
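  • As an illustration of how a trained set of Zernike coefficients could be turned into a modulation pattern, here is a minimal numpy sketch. The function name zernike_phase_map, the choice of low-order terms, their Noll normalization, and the way the pattern would be loaded onto the modulator are all assumptions for illustration; the application does not prescribe this mapping.

```python
import numpy as np

def zernike_phase_map(coeffs, size=512):
    """Build a wavefront/phase map from a few low-order Zernike terms.
    coeffs: dict like {"defocus": c4, "astig": c6, "coma_x": c8}; an
    illustrative subset, where a real SLM would use the full trained vector."""
    y, x = np.mgrid[-1:1:size * 1j, -1:1:size * 1j]
    rho, theta = np.hypot(x, y), np.arctan2(y, x)
    pupil = rho <= 1.0  # Zernike polynomials are defined on the unit pupil

    phase = np.zeros((size, size))
    phase += coeffs.get("defocus", 0.0) * np.sqrt(3) * (2 * rho**2 - 1)
    phase += coeffs.get("astig", 0.0) * np.sqrt(6) * rho**2 * np.cos(2 * theta)
    phase += coeffs.get("coma_x", 0.0) * np.sqrt(8) * (3 * rho**3 - 2 * rho) * np.cos(theta)
    return np.where(pupil, phase, 0.0)

# A pre-distortion pattern that counteracts the measured aberration would be
# loaded onto the spatial light modulator (sign conventions are device-specific).
pattern = zernike_phase_map({"defocus": -0.12, "coma_x": 0.05})
```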
  • The processor 130 may obtain the modulation parameter and control the spatial light modulator 110 to light-modulate the image to be imaged using that modulation parameter.
  • the structure of the optical imaging system 100 shown in FIG. 2 is only an exemplary illustration, and the present application is not limited thereto.
  • For example, the processor 130 may be integrated in the spatial light modulator 110, or the function of the processor 130 may be realized by a device or module in the spatial light modulator 110 that is capable of performing computation.
  • the optical imaging system 100 may further include a camera 140.
  • the imaging device 140 is used to capture a distorted image, and the distorted image is used to determine the modulation parameter, and the process will be described in detail later.
  • The camera device 140 is arranged at (or near) the position of the human eye, so that the captured distorted image is the same as, or approximately the same as, the distorted image observed by the human eye.
  • the camera device 140 may be detachable, so that the camera device 140 can be detached after the distorted image is taken.
  • the optical imaging system 100 may also include other components included in the optical imaging system in the prior art. Here, in order to avoid redundant description, detailed descriptions thereof are omitted.
  • FIG. 3 shows a schematic diagram of an example of the determination process of the modulation parameter.
  • In step a, the spatial light modulator 110 may obtain image #A (that is, the training image) from an electrical-domain graphics processor such as a GPU.
  • In step b, the spatial light modulator 110 may modulate the electrical-domain data based on modulation parameter #A (an example of the original modulation parameter) to generate the light beam of image #A, denoted light beam #A below.
  • The modulation parameter #A may be a default or factory-set parameter, or a parameter configured by the processor 130 for the spatial light modulator 110.
  • In step c, light beam #A is refracted by the lens 120, reflected (or refracted) by screen #A, and enters the camera device 140, so that the camera device 140 captures the distorted image of image #A; to facilitate understanding and distinction, this distorted image is denoted image #B below.
  • In step d, the processor 130 obtains image #B from the camera device 140.
  • The processor 130 may determine the expected image of image #A, denoted image #C below. The expected image of image #A can be understood as the image of image #A observed by the human eye with no distortion, or with a degree of distortion within a preset range.
  • the processor 130 may obtain the expected image of the image #A, or image #C, in the following manner.
  • Because the distortion mapping differs between observation positions in space, a finite set of observation positions must be calibrated, after which interpolation is used to generate the distortion mapping for an arbitrary observation position in space.
  • we hope that the final image observed at any position is a fixed rectangular field of view in space, so we need to select the appropriate imaging area size for each observation position during calibration.
  • For the i-th observation position, record, in the distorted lattice image (for example, image #B): the maximum abscissa of the leftmost column as x_left_i, the minimum abscissa of the rightmost column as x_right_i, the maximum ordinate of the uppermost row as y_up_i, and the minimum ordinate of the lowermost row as y_down_i.
  • The rectangular range Ri = [x_left_i, x_right_i, y_up_i, y_down_i] is then inscribed in the distorted lattice field of view.
  • the rectangular range Ri can be used as an expected image of image #A (ie, image #C).
  • Ri is a range determined from distorted lattice images taken at different observation positions; to find a common rectangular field-of-view range in space, the different Ri need to be transferred to the same observation position. Take two observation positions i and j as an example. Since they lie on the same horizontal line, there is only a horizontal displacement Δx_ij, which is positive if position j is to the right of position i and negative otherwise. The observation result Rj_i corresponding to Rj at observation position i can then be calculated using the pinhole imaging model.
  • The specific calculation formula is as follows:
  • Rj_i = [x_left_j + (f/Z)·Δx_ij·ppi, x_right_j + (f/Z)·Δx_ij·ppi, y_up_j, y_down_j]
  • where f is the focal length of the camera, Z is the distance from the imaging surface to the camera, and ppi is the number of pixels per unit distance on the camera's CCD surface.
  • The common rectangular field-of-view range R*_j at any observation position j can then be restored by the pinhole imaging model:
  • R*_j = [max{x_left_i, x_left_j + (f/Z)·Δx_ij·ppi} − (f/Z)·Δx_ij·ppi, min{x_right_i, x_right_j + (f/Z)·Δx_ij·ppi} − (f/Z)·Δx_ij·ppi, max{y_up_i, y_up_j}, min{y_down_i, y_down_j}]
  • For an observation position between two adjacent calibrated positions, the distortion mapping can be obtained by interpolation with weight d1/(d1+d2), where d1 and d2 are the distances from the observation position to the two calibrated positions, and P_i and P_i+1 are the distortion mapping tables of the two adjacent calibrated positions.
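  • The rectangle arithmetic and interpolation above can be restated in a short Python sketch. Variable names mirror the text; the linear blending rule in interpolate_mapping is an assumption, since the text only gives the weight d1/(d1+d2):

```python
def shift_to_position_i(Rj, dx_ij, f, Z, ppi):
    """Observation result of rectangle Rj as seen from position i (pinhole
    model): only the horizontal coordinates shift by (f/Z) * dx_ij * ppi."""
    x_left_j, x_right_j, y_up_j, y_down_j = Rj
    shift = f / Z * dx_ij * ppi
    return [x_left_j + shift, x_right_j + shift, y_up_j, y_down_j]

def common_fov_at_j(Ri, Rj, dx_ij, f, Z, ppi):
    """R*_j: the shared rectangular field of view, expressed at position j."""
    shift = f / Z * dx_ij * ppi
    Rj_i = shift_to_position_i(Rj, dx_ij, f, Z, ppi)
    return [max(Ri[0], Rj_i[0]) - shift,   # left edge, shifted back to j
            min(Ri[1], Rj_i[1]) - shift,   # right edge, shifted back to j
            max(Ri[2], Rj[2]),             # top edge (vertical is unshifted)
            min(Ri[3], Rj[3])]             # bottom edge

def interpolate_mapping(P_i, P_i1, d1, d2):
    """Distortion mapping at an intermediate observation position, blended
    between two adjacent calibrated tables with weight d1 / (d1 + d2).
    P_i, P_i1: flat sequences of mapped coordinates (illustrative layout)."""
    w = d1 / (d1 + d2)
    return [(1 - w) * p + w * q for p, q in zip(P_i, P_i1)]
```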
  • the processor 130 may train to obtain the modulation parameter #B based on the image #B and the image #C.
  • For example, the processor 130 may determine whether the similarity between image #B and image #C satisfies a preset condition, for example, whether the position deviation of pixels at the same position in image #B and image #C is less than or equal to a preset deviation value.
  • If the condition is satisfied, modulation parameter #A can be determined as modulation parameter #B.
  • If the condition is not satisfied, the processor 130 may adjust modulation parameter #A according to a prescribed adjustment direction and adjustment step size to obtain modulation parameter #C.
  • The spatial light modulator 110 may then modulate the electrical-domain data of image #A based on modulation parameter #C to generate a new light beam of image #A, referred to as light beam #B below.
  • In a manner similar to step c, light beam #B is refracted by the lens 120, reflected (or refracted) by the screen, and enters the camera device 140, so that the camera device 140 captures the distorted image of image #A, denoted image #D below.
  • In a manner similar to step d, the processor 130 obtains image #D from the camera device 140.
  • The processor 130 may determine whether the similarity between image #D and image #C satisfies the preset condition, for example, whether the position deviation of pixels at the same position in image #D and image #C is less than or equal to the preset deviation value.
  • If the condition is satisfied, modulation parameter #C can be determined as modulation parameter #B.
  • Otherwise, the adjustment and measurement are repeated until some modulation parameter #X satisfies the condition, and that modulation parameter #X is determined as modulation parameter #B.
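  • The closed loop just described (modulate, capture, compare, adjust) can be summarized in the following Python sketch. The slm and camera objects, the pixel_deviation metric, and the coordinate-wise adjustment rule are hypothetical stand-ins; the text only prescribes an adjustment direction and step size in the abstract.

```python
import numpy as np

PRESET_DEVIATION = 1.0  # illustrative preset deviation value

def pixel_deviation(img_a, img_b):
    # Illustrative similarity metric: mean deviation of co-located pixels.
    return float(np.mean(np.abs(np.asarray(img_a, float) - np.asarray(img_b, float))))

def calibrate(slm, camera, training_image, target_image, params, step=0.01,
              max_iters=1000):
    """Adjust the modulation parameters until the captured distorted image
    is within the preset deviation of the target image (image #C)."""
    params = list(params)  # start from modulation parameter #A
    for _ in range(max_iters):
        slm.modulate(training_image, params)    # steps a-b: generate the beam
        captured = camera.capture()             # steps c-d: image #B (then #D)
        best = pixel_deviation(captured, target_image)
        if best <= PRESET_DEVIATION:
            return params                       # this is modulation parameter #B
        for i in range(len(params)):            # crude coordinate search
            for delta in (step, -step):
                trial = params[:i] + [params[i] + delta] + params[i + 1:]
                slm.modulate(training_image, trial)
                dev = pixel_deviation(camera.capture(), target_image)
                if dev < best:
                    params, best = trial, dev
    raise RuntimeError("calibration did not converge within max_iters")
```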
  • The above-listed process of obtaining modulation parameter #B by training on image #B and image #C is only an exemplary description, and the application is not limited thereto.
  • For example, modulation parameter #B can also be obtained by training a neural network model whose input data are image #B and image #C and whose output data is modulation parameter #B.
  • a neural network can be composed of neural units.
  • A neural unit can refer to an arithmetic unit that takes inputs x_s and an intercept of 1, and the output of the arithmetic unit can be:
  • h_{W,b}(x) = f(W^T x) = f( Σ_{s=1}^{n} W_s · x_s + b )
  • where s = 1, 2, ..., n, n is a natural number greater than 1, W_s is the weight of x_s, and b is the bias of the neural unit.
  • f is the activation function of the neural unit, which introduces nonlinear characteristics into the neural network to convert the input signal of the neural unit into an output signal.
  • the output signal of the activation function can be used as the input of the next convolutional layer.
  • the activation function can be a sigmoid function.
  • a neural network is a network formed by connecting many of the above-mentioned single neural units together, that is, the output of one neural unit can be the input of another neural unit.
  • the input of each neural unit can be connected with the local receptive field of the previous layer to extract the characteristics of the local receptive field.
  • the local receptive field can be a region composed of several neural units.
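  • In code, the neural unit above is just f(Σ W_s · x_s + b); a minimal Python sketch with a sigmoid activation:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def neural_unit(x, W, b):
    """Output of a single neural unit: f(sum_s W_s * x_s + b)."""
    return sigmoid(np.dot(W, x) + b)

# Three inputs x_s, their weights W_s, and a bias b (illustrative values).
y = neural_unit(np.array([0.5, -1.0, 2.0]), np.array([0.3, 0.8, -0.1]), b=0.2)
print(y)  # a value in (0, 1) due to the sigmoid
```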
  • Convolutional neural network is a deep neural network with convolutional structure.
  • the convolutional neural network contains a feature extractor composed of a convolutional layer and a sub-sampling layer.
  • the feature extractor can be regarded as a filter, and the convolution process can be regarded as convolution using a trainable filter and an input image or convolution feature map.
  • the convolutional layer refers to the neuron layer that performs convolution processing on the input signal in the convolutional neural network.
  • a neuron can be connected to only part of the neighboring neurons.
  • a convolutional layer usually contains several feature planes, and each feature plane can be composed of some rectangularly arranged neural units.
  • Neural units in the same feature plane share weights, and the shared weights are the convolution kernel. Weight sharing can be understood as meaning that the way image information is extracted is independent of position: the statistics of one part of an image are the same as those of other parts, so image information learned in one part can also be used in another part, and the same learned image information can be used at all positions on the image. In the same convolutional layer, multiple convolution kernels can be used to extract different image information; generally, the more convolution kernels there are, the richer the image information reflected by the convolution operation.
  • the convolution kernel can be initialized in the form of a matrix of random size. During the training process of the convolutional neural network, the convolution kernel can obtain reasonable weights through learning. In addition, the direct benefit of sharing weights is to reduce the connections between the layers of the convolutional neural network, and at the same time reduce the risk of overfitting.
  • Convolutional neural networks can use the backpropagation (BP) algorithm to revise the values of the parameters in the initial super-resolution model during training, so that the reconstruction error loss of the super-resolution model becomes smaller and smaller. Specifically, forwarding the input signal to the output produces an error loss, and the parameters in the initial super-resolution model are updated by backpropagating the error-loss information, so that the error loss converges.
  • The backpropagation algorithm is thus an error-loss-driven backward process that aims to obtain the optimal parameters of the super-resolution model, such as its weight matrices.
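  • As a minimal, generic illustration of backpropagation with gradient descent (a single linear layer with a mean-squared-error loss, not the application's model):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(64, 3))             # toy inputs
true_W, true_b = np.array([1.5, -2.0, 0.5]), 0.1
y = X @ true_W + true_b                  # toy targets

W, b, lr = np.zeros(3), 0.0, 0.1
for _ in range(500):
    pred = X @ W + b                     # forward pass
    err = pred - y                       # error signal at the output
    grad_W = X.T @ err / len(X)          # backpropagated gradient of (half) MSE
    grad_b = err.mean()
    W -= lr * grad_W                     # gradient-descent parameter update
    b -= lr * grad_b

print(W, b)  # converges toward true_W and true_b as the error loss shrinks
```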
  • an embodiment of the present application provides a system architecture 300.
  • The data acquisition device 360 (for example, the camera 140 and the processor 130) is used to collect training data.
  • The acquisition process of the training data may be similar to the acquisition process of the above-mentioned image #C and image #B; in other words, image #C and image #B may themselves be used as the training data.
  • The data collection device 360 stores the training data in the database 330, and the training device 320 trains the neural network model 301, that is, the model corresponding to the modulation parameter #B, based on the training data maintained in the database 330.
  • The training data maintained in the database 330 need not all come from the data collection device 360; it may also be received from other devices.
  • Moreover, the training device 320 does not necessarily train the neural network model 301 entirely on the training data maintained in the database 330; it may also obtain training data from the cloud or elsewhere for model training. The above description should not be taken as a limitation on the embodiments of this application.
  • the neural network model 301 trained by the training device 320 can be applied to the execution device 310.
  • the execution device 310 is configured with an I/O interface 312 for data interaction with the client device 340 (for example, the processor 130 or the spatial light modulator 110), and the client device 340 inputs data (for example, the above-mentioned image #C and image #B) to the I/O interface 312.
  • the preprocessing module 313 is configured to perform preprocessing according to the input data received by the I/O interface 312, wherein the process and method of the preprocessing may be similar to the prior art. Here, in order to avoid redundant description, detailed descriptions thereof are omitted. It should be noted that in this application, the input data may not be preprocessed. In this case, the system architecture 300 may not include the preprocessing module 313.
  • the calculation module 311 is configured to perform calculation and other related processing on the input data from the preprocessing module 313 or the I/O interface 312 according to the neural network model 301 described above.
  • the execution device 310 can call data, codes, etc. in the data storage system 350 for corresponding processing, and can also store data, instructions, etc. obtained by corresponding processing in the data storage system 350.
  • the I/O interface 312 returns the processing result, such as the modulation parameter #B obtained above, to the client device 340.
  • the user can manually set input data, and the manual setting can be operated through the interface provided by the I/O interface 312.
  • the client device 340 can automatically send input data to the I/O interface 312. If the client device 340 needs the user's authorization to send input data automatically, the user can set the corresponding permission in the client device 340.
  • the user can view the result output by the execution device 310 on the client device 340, and the specific presentation form may be a specific manner such as display, sound, and action.
  • the client device 340 can also be used as a data collection terminal, collecting the input data fed to the I/O interface 312 and the output results returned by the I/O interface 312, as shown in FIG. 4, as new sample data and storing them in the database 330.
  • Alternatively, the collection may bypass the client device 340, and the I/O interface 312 directly stores the input data fed to the I/O interface 312 and the output results returned by the I/O interface 312, as shown in FIG. 4, in the database 330 as new sample data.
  • FIG. 4 is only a schematic diagram of a system architecture provided by an embodiment of the present application, and the positional relationship between the devices, devices, modules, etc. shown in the figure does not constitute any limitation.
  • in FIG. 4, the data storage system 350 is an external memory relative to the execution device 310; in other cases, the data storage system 350 may also be placed in the execution device 310.
  • the neural network of this application may include, but is not limited to, the convolutional neural network CNN.
  • a convolutional neural network is a deep neural network with a convolutional structure. It is a deep learning architecture.
  • a deep learning architecture refers to performing multiple levels of learning at different levels of abstraction by means of machine learning algorithms.
  • CNN is a feed-forward artificial neural network. Each neuron in the feed-forward artificial neural network can respond to the input image.
  • a convolutional neural network (CNN) 400 may include an input layer 410, a convolutional layer/pooling layer 420 (the pooling layer is optional), and a neural network layer 430.
  • the convolutional layer/pooling layer 420 may include layers 421-426. In one implementation, layer 421 is a convolutional layer, layer 422 is a pooling layer, layer 423 is a convolutional layer, layer 424 is a pooling layer, layer 425 is a convolutional layer, and layer 426 is a pooling layer. In another implementation, layers 421 and 422 are convolutional layers, layer 423 is a pooling layer, layers 424 and 425 are convolutional layers, and layer 426 is a pooling layer. That is, the output of a convolutional layer can be used as the input of a subsequent pooling layer, or as the input of another convolutional layer to continue the convolution operation.
  • the convolution layer 421 can include many convolution operators.
  • the convolution operator is also called a kernel. Its role in image processing is equivalent to a filter that extracts specific information from the input image matrix.
  • the convolution operator is essentially a weight matrix, which is usually predefined. During a convolution operation on an image, the weight matrix is usually processed along the horizontal direction of the input image one pixel after another (or two pixels after two pixels, depending on the value of the stride), so as to extract a specific feature from the image.
  • the size of the weight matrix should be related to the size of the image. It should be noted that the depth dimension of the weight matrix is the same as the depth dimension of the input image, and during the convolution operation the weight matrix extends over the entire depth of the input image. Therefore, convolution with a single weight matrix produces a convolution output with a single depth dimension; in most cases, however, a single weight matrix is not used, and multiple weight matrices of the same size (rows × columns), that is, multiple homogeneous matrices, are applied instead.
  • the output of each weight matrix is stacked to form the depth dimension of the convolutional image, where the dimension can be understood as determined by the "multiple" mentioned above.
  • Different weight matrices can be used to extract different features in the image. For example, one weight matrix is used to extract edge information of the image, another weight matrix is used to extract specific colors of the image, and another weight matrix is used to eliminate unwanted noise in the image.
  • the multiple weight matrices have the same size (rows × columns), the feature maps extracted by the multiple weight matrices of the same size also have the same size, and the extracted feature maps of the same size are then combined to form the output of the convolution operation.
  • in practical applications, the weight values in these weight matrices need to be obtained through a large amount of training. The weight matrices formed by the trained weight values can be used to extract information from the input image, so that the convolutional neural network 400 makes correct predictions.
  • the initial convolutional layer (for example, 421) often extracts more general features, which can also be called low-level features; as the depth of the convolutional neural network increases, the features extracted by the later convolutional layers (for example, 426) become more and more complex, such as high-level semantic features, and features with higher-level semantics are more applicable to the problem to be solved.
  • since the number of training parameters often needs to be reduced, a pooling layer often needs to be introduced periodically after a convolutional layer. In the layers 421-426 illustrated by 420 in FIG. 5, one convolutional layer may be followed by one pooling layer, or multiple convolutional layers may be followed by one or more pooling layers.
  • in image processing, the sole purpose of the pooling layer is to reduce the spatial size of the image.
  • the pooling layer may include an average pooling operator and/or a maximum pooling operator for sampling the input image to obtain an image with a smaller size.
  • the average pooling operator can calculate the pixel values in the image within a specific range to generate an average value as the result of the average pooling.
  • the maximum pooling operator can take the pixel with the largest value within a specific range as the result of the maximum pooling.
  • the operators in the pooling layer should also be related to the image size.
  • the size of the image output after processing by the pooling layer can be smaller than the size of the image input to the pooling layer, and each pixel in the image output by the pooling layer represents the average or maximum value of the corresponding sub-region of the image input to the pooling layer.
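A minimal sketch of the two pooling operators described above (an editorial illustration; the block size and inputs are arbitrary):

```python
import numpy as np

def pool2d(image, size=2, mode="max"):
    """Downsample by taking the max or average of each size x size block;
    each output pixel represents the corresponding sub-region of the input."""
    h, w = image.shape
    cropped = image[:h - h % size, :w - w % size]
    blocks = cropped.reshape(h // size, size, w // size, size)
    if mode == "max":
        return blocks.max(axis=(1, 3))
    return blocks.mean(axis=(1, 3))

img = np.arange(16, dtype=float).reshape(4, 4)
print(pool2d(img, mode="max"))   # 2x2 output: max of each 2x2 sub-region
print(pool2d(img, mode="avg"))   # 2x2 output: mean of each 2x2 sub-region
```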
  • Neural network layer 430
  • after being processed by the convolutional layer/pooling layer 420, the convolutional neural network 400 is not yet sufficient to output the required output information, because, as mentioned above, the convolutional layer/pooling layer 420 only extracts features and reduces the parameters brought by the input image. However, to generate the final output information (the required class information or other related information), the convolutional neural network 400 needs to use the neural network layer 430 to generate the output of one or a group of required classes. Therefore, the neural network layer 430 may include multiple hidden layers (431, 432 to 43n shown in FIG. 5) and an output layer 440. The parameters contained in the multiple hidden layers can be obtained by pre-training based on relevant training data of a specific task type; for example, the task type can include image recognition, image classification, image super-resolution reconstruction, and so on.
  • after the multiple hidden layers in the neural network layer 430, that is, as the final layer of the entire convolutional neural network 400, is the output layer 440.
  • the output layer 440 has a loss function similar to categorical cross entropy, which is specifically used to calculate the prediction error.
  • the convolutional neural network 400 shown in FIG. 5 is only used as an example of a convolutional neural network. In specific applications, the convolutional neural network may also exist in the form of other network models.
  • the processor 130 may set the modulation parameter used by the spatial light modulator 110 as the modulation parameter #B.
  • the spatial light modulator 110 may modulate the electrical domain data based on the modulation parameter #B (that is, an example of the first modulation parameter) to generate the light beam of the image #1.
  • since the modulation parameter #B is trained to compensate for the distortion, it can reduce the distortion of the image observed by the human eye after the light beam #1 is refracted by the lens 120 and reflected by the screen #A.
  • the processor 130 may also save the mapping relationship between the image parameter (denoted as image parameter #A) of the image #A and the modulation parameter #B.
  • in the case where the modulation parameter #B is obtained by training based on multiple training images, the image parameters of the multiple training images may be the same, so that the processor 130 may also store the mapping relationship between that same image parameter (for example, image parameter #A) and the modulation parameter #B.
  • similarly, multiple modulation parameters can be obtained by training based on training images with different image parameters (specifically, based on the distorted images and expected images of the training images). This process is similar to the process of determining the modulation parameter #B, so its detailed description is omitted here.
  • the image parameters may include, but are not limited to, one or more of the following parameters: image size, image color, image shape, image resolution.
  • the processor 130 may generate a one-to-one correspondence between the multiple image parameters and the multiple modulation parameters, denoted as correspondence #A (that is, an example of the first correspondence). Table 1 shows an example of this correspondence.
  • the processor 130 can determine, according to the image parameter of the image #1 (denoted as image parameter #1), the modulation parameter corresponding to the image parameter #1 (denoted as modulation parameter #1) from the correspondence #A, and the processor 130 may set the modulation parameter used by the spatial light modulator 110 to the modulation parameter #1.
  • the spatial light modulator 110 may modulate the electrical domain data based on the modulation parameter #1 (ie, an example of the first modulation parameter) to generate the light beam of the image #1.
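As an editorial sketch of how such a stored correspondence might be applied at run time (the keys, values, and the `slm.set_modulation_parameter` call below are hypothetical, not an API defined by the patent):

```python
# Hypothetical correspondence #A (cf. Table 1): image parameter -> trained
# modulation parameter, here illustrated as short Zernike coefficient lists.
correspondence_a = {
    "1920x1080": [0.00, 0.12, -0.03, 0.41],  # e.g., modulation parameter #B
    "1280x720":  [0.00, 0.09, -0.01, 0.33],
}

image_param_1 = "1920x1080"                   # image parameter of image #1
modulation_param_1 = correspondence_a[image_param_1]
print(modulation_param_1)
# The processor would then configure the spatial light modulator, e.g.:
# slm.set_modulation_parameter(modulation_param_1)   # hypothetical call
```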
  • the processor 130 may also save the mapping relationship between the position parameter (denoted as the position parameter #A) of the image #A and the modulation parameter #B.
  • the position parameter of an image can be understood as a parameter of the position of the human eye relative to the screen when the human eye observes the image.
  • in the case where the modulation parameter #B is obtained by training based on multiple training images, the position parameters of the multiple training images may be the same, so that the processor 130 may also store the mapping relationship between that same position parameter (for example, position parameter #A) and the modulation parameter #B.
  • similarly, multiple modulation parameters can be obtained by training based on training images with different position parameters (specifically, based on the distorted images and expected images of the training images). This process is similar to the process of determining the modulation parameter #B, so its detailed description is omitted here.
  • the position parameters may include, but are not limited to, one or more of the following parameters: the distance between the human eye and the screen, the position of the human eye in the horizontal direction of the screen, and the position of the human eye in the vertical direction of the screen.
  • the processor 130 may generate a one-to-one correspondence between the multiple position parameters and the multiple modulation parameters, denoted as correspondence #B (that is, an example of the second correspondence). Table 2 shows an example of this correspondence.
  • the processor 130 can determine, according to the position parameter of the image #1 (that is, the position parameter when the human eye observes the image #1, denoted as position parameter #1), the modulation parameter corresponding to the position parameter #1 (denoted as modulation parameter #2) from the correspondence #B, and the processor 130 may set the modulation parameter used by the spatial light modulator 110 to the modulation parameter #2.
  • the spatial light modulator 110 may modulate the electrical domain data based on the modulation parameter #2 (ie, an example of the first modulation parameter) to generate the light beam of the image #1.
  • the above manners can also be combined: the processor 130 may obtain multiple modulation parameters by training based on training images with different parameter groups (specifically, based on the distorted images and expected images of the training images), where each parameter group includes a position parameter and an image parameter. The processor 130 may generate a one-to-one correspondence between the multiple parameter groups and the multiple modulation parameters, and Table 3 shows an example of this correspondence.
  • the processor 130 can determine, from this correspondence, the modulation parameter corresponding to the parameter group of the image #1 (denoted as modulation parameter #3), and the processor 130 may set the modulation parameter used by the spatial light modulator 110 to the modulation parameter #3.
  • the spatial light modulator 110 may modulate the electrical domain data based on the modulation parameter #3 (ie, an example of the first modulation parameter) to generate the light beam of the image #1.
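When a parameter group combines several parameters, the lookup key can simply become a tuple. A hedged sketch along the lines of Table 3 (all names and values hypothetical):

```python
# Hypothetical correspondence like Table 3:
# (image parameter, position parameter) -> modulation parameter.
correspondence = {
    ("1920x1080", "eye@(0.8m, center)"): [0.00, 0.12, -0.03, 0.41],
    ("1920x1080", "eye@(1.0m, left)"):   [0.00, 0.10, -0.05, 0.38],
}

group = ("1920x1080", "eye@(1.0m, left)")   # parameter group of image #1
modulation_parameter_3 = correspondence[group]
print(modulation_parameter_3)
```

The same pattern extends to the three-element parameter groups of Tables 5 to 7 by adding the screen parameter to the key tuple.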
  • the processor 130 may also save the mapping relationship between the screen parameter (denoted as screen parameter #A) of the image #A and the modulation parameter #B.
  • the screen parameters of the image can be understood as the parameters of the screen used when the image is imaged.
  • in the case where the modulation parameter #B is obtained by training based on multiple training images, the screen parameters of the multiple training images may be the same, so that the processor 130 may also store the mapping relationship between that same screen parameter (for example, screen parameter #A) and the modulation parameter #B.
  • similarly, multiple modulation parameters can be obtained by training based on training images with different screen parameters (specifically, based on the distorted images and expected images of the training images). This process is similar to the process of determining the modulation parameter #B, so its detailed description is omitted here.
  • the screen parameters may include, but are not limited to, one or more of the following parameters: screen shape, screen thickness, screen material, screen refractive index, screen color.
  • the processor 130 may generate a one-to-one correspondence between the multiple screen parameters and the multiple modulation parameters, denoted as correspondence #C (that is, an example of the third correspondence). Table 4 shows an example of this correspondence.
  • the processor 130 can determine, according to the screen parameter of the image #1 (that is, the screen parameter of the screen used when the image #1 is imaged, denoted as screen parameter #1), the modulation parameter corresponding to the screen parameter #1 (denoted as modulation parameter #4) from the correspondence #C, and the processor 130 may set the modulation parameter used by the spatial light modulator 110 to the modulation parameter #4.
  • the spatial light modulator 110 may modulate the electrical domain data based on the modulation parameter #4 (ie, an example of the first modulation parameter) to generate the light beam of the image #1.
  • the processor 130 may obtain multiple modulation parameters by training based on training images with different parameter groups (specifically, based on the distorted images and expected images of the training images), where each parameter group includes an image parameter and a screen parameter. The processor 130 may generate a one-to-one correspondence between the multiple parameter groups and the multiple modulation parameters, and Table 5 shows an example of this correspondence.
  • the processor 130 can determine, from this correspondence, the modulation parameter corresponding to the parameter group of the image #1 (denoted as modulation parameter #5), and the processor 130 may set the modulation parameter used by the spatial light modulator 110 to the modulation parameter #5.
  • the spatial light modulator 110 may modulate the electrical domain data based on the modulation parameter #5 (ie, an example of the first modulation parameter) to generate the light beam of the image #1.
  • the processor 130 may obtain multiple modulation parameters by training based on training images with different parameter groups (specifically, based on the distorted images and expected images of the training images), where each parameter group includes a position parameter and a screen parameter. The processor 130 may generate a one-to-one correspondence between the multiple parameter groups and the multiple modulation parameters, and Table 6 shows an example of this correspondence.
  • the processor 130 can determine, from this correspondence, the modulation parameter corresponding to the parameter group of the image #1 (denoted as modulation parameter #6), and the processor 130 may set the modulation parameter used by the spatial light modulator 110 to the modulation parameter #6.
  • the spatial light modulator 110 may modulate the electrical domain data based on the modulation parameter #6 (ie, an example of the first modulation parameter) to generate the light beam of the image #1.
  • the processor 130 may obtain multiple modulation parameters by training based on training images with different parameter groups (specifically, based on the distorted images and expected images of the training images), where each parameter group includes a position parameter, an image parameter, and a screen parameter. The processor 130 may generate a one-to-one correspondence between the multiple parameter groups and the multiple modulation parameters, and Table 7 shows an example of this correspondence.
  • the processor 130 can determine, from this correspondence, the modulation parameter corresponding to the parameter group of the image #1 (denoted as modulation parameter #7), and the processor 130 may set the modulation parameter used by the spatial light modulator 110 to the modulation parameter #7.
  • the spatial light modulator 110 may modulate the electrical domain data based on the modulation parameter #7 (ie, an example of the first modulation parameter) to generate the light beam of the image #1.
  • FIG. 6 shows a schematic diagram of an example of the process of determining the modulation parameter. The difference from the process shown in FIG. 3 is that the processor 130 may send the training data to the server after acquiring the training data.
  • the training data may include distortion images and expected images of one or more training images, for example, image #B and image #C.
  • the training data may include the distortion image of the training image.
  • the server may determine the expected image of the training image according to the distortion image of the training image.
  • the server can train (or determine) to obtain one or more modulation parameters (for example, the aforementioned modulation parameter #B) according to the training data.
  • the training process may be similar to the process performed by the processor 130 in the above method 200. Here, in order to avoid redundant description, detailed descriptions thereof are omitted.
  • the server sends the trained modulation parameters to the processor 130.
  • the processor 130 can determine the modulation parameter that the spatial light modulator 110 needs to use according to the modulation parameter fed back by the server when processing the image #1.
  • the above-listed solutions for the processor 130 to obtain modulation parameters are merely illustrative; the above training process may also be pre-configured, before delivery from the factory, in the processor 130 (or in a memory that the processor 130 can access) through experiments or training.
  • the training process may be similar to the process performed by the processor 130 in the above method 200; to avoid repetitive description, the detailed description is omitted here.
  • an embodiment of the present application also provides an imaging device 500.
  • the imaging device 500 includes a processor 510 coupled with a memory 520. The memory 520 is used to store computer programs, instructions, and/or data, and the processor 510 is used to execute the computer programs, instructions, and/or data stored in the memory 520, so that the method in the above method embodiments (specifically, the method executed by the processor 130) is performed.
  • the imaging device 500 includes one or more processors 510.
  • the imaging device 500 may further include a memory 520.
  • the memory 520 may be one or more.
  • the memory 520 may be integrated with the processor 510 or provided separately.
  • the imaging device 500 may further include a transceiver 530, and the transceiver 530 is used for receiving and/or transmitting signals.
  • the processor 510 is configured to control the transceiver 530 to receive and/or send signals.
  • the imaging device 500 is used to implement the operations performed by the processor 130 in the above method embodiments.
  • the embodiments of this application do not specifically limit the specific structure of the execution body of the methods provided in the embodiments of this application, as long as it can run a program recording the code of the methods provided in the embodiments of this application and perform processing according to those methods.
  • computer-readable media may include, but are not limited to: magnetic storage devices (for example, hard disks, floppy disks, or tapes), optical discs (for example, compact discs (CDs) and digital versatile discs (DVDs)), smart cards, and flash memory devices (for example, erasable programmable read-only memory (EPROM), cards, sticks, or key drives).
  • the various storage media described herein may represent one or more devices and/or other machine-readable media for storing information.
  • the term "machine-readable medium” may include, but is not limited to: wireless channels and various other media capable of storing, containing, and/or carrying instructions and/or data.
  • the processors mentioned in the embodiments of this application may be a central processing unit (CPU), or may be another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like.
  • the general-purpose processor may be a microprocessor or any conventional processor or the like.
  • the memory mentioned in the embodiments of the present application may be a volatile memory or a non-volatile memory, or may include both volatile and non-volatile memory.
  • the non-volatile memory can be read-only memory (ROM), programmable read-only memory (PROM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), or flash memory.
  • the volatile memory may be random access memory (RAM).
  • RAM can be used as an external cache.
  • by way of example and not limitation, RAM can include the following forms: static random access memory (static RAM, SRAM), dynamic random access memory (dynamic RAM, DRAM), synchronous dynamic random access memory (synchronous DRAM, SDRAM), double data rate synchronous dynamic random access memory (double data rate SDRAM, DDR SDRAM), enhanced synchronous dynamic random access memory (enhanced SDRAM, ESDRAM), synchlink dynamic random access memory (synchlink DRAM, SLDRAM), and direct rambus random access memory (direct rambus RAM, DR RAM).
  • the processor when the processor is a general-purpose processor, DSP, ASIC, FPGA or other programmable logic device, discrete gate or transistor logic device, or discrete hardware component, the memory (storage module) can be integrated in the processor. It should also be noted that the memories described herein are intended to include, but are not limited to, these and any other suitable types of memories.
  • the disclosed system, device, and method may be implemented in other ways.
  • the device embodiments described above are only illustrative.
  • the division of the units is only a logical function division, and there may be other divisions in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
  • the displayed or discussed mutual coupling or direct coupling or communication connection may be indirect coupling or communication connection through some interfaces, devices or units, and may be in electrical, mechanical or other forms.
  • the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, they may be located in one place, or they may be distributed on multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
  • the functional units in the various embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
  • if the functions are implemented in the form of a software functional unit and sold or used as an independent product, they can be stored in a computer-readable storage medium.
  • based on this understanding, the technical solutions of the embodiments of the present application essentially, or the part contributing to the existing technology, or a part of the technical solutions, can be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for enabling a computer device (which may be a personal computer, a server, or a network device, etc.) to perform all or some of the steps of the methods described in the various embodiments of the present application.
  • the aforementioned storage media include: USB flash drives, removable hard disks, read-only memory (Read-Only Memory, ROM), random access memory (Random Access Memory, RAM), magnetic disks, optical discs, and other media that can store program code.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • General Physics & Mathematics (AREA)
  • Signal Processing (AREA)
  • Optics & Photonics (AREA)
  • Theoretical Computer Science (AREA)
  • Mechanical Engineering (AREA)
  • Software Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)
  • Controls And Circuits For Display Device (AREA)

Abstract

This application provides an imaging method and an optical imaging system, applied to the AR HUD field. The system includes a spatial light modulator, which is used to modulate an electrical signal of an image to generate an imaging light beam of the image; the imaging light beam passes through a lens and a screen and enters the human eye. The method includes: obtaining a first modulation parameter, where the first modulation parameter is determined according to a first distorted image and a first target image, the first distorted image is the image presented after a training image undergoes the imaging processing of the optical imaging system and the screen, the first target image is an image of the training image, and the distortion of the first target image is within a preset range; and controlling the spatial light modulator to modulate the electrical signal of a first image to be imaged based on the first modulation parameter. This can reduce the distortion of the image while avoiding the imaging delay caused by compensating the image in the electrical domain, and can reduce the processing cost added by compensating the image in the electrical domain.

Description

Imaging method, imaging device, optical imaging system, and vehicle

This application claims priority to Chinese Patent Application No. 202010410182.5, filed with the China National Intellectual Property Administration on May 15, 2020 and entitled "Imaging method, imaging device, optical imaging system, and vehicle", which is incorporated herein by reference in its entirety.
Technical Field

This application relates to the field of image processing, and more specifically, to an imaging method, an imaging device, an optical imaging system, and a vehicle.
Background

At present, a known optical imaging system processes an image to be imaged in the electrical domain through a component such as an image processing unit to generate an electrical signal; a light modulator processes the electrical signal to generate a light beam of the image, which exits through one or more lenses. The light beam is reflected by a screen into the human eye, so that the human eye observes the image.

This technology is widely used in, for example, heads-up displays (HUD) or augmented reality (AR) HUDs, in which the windshield of a vehicle, aircraft, or other means of transport serves as the above screen, reflecting the image to the driver's eyes.

However, in the prior art, if the screen (for example, the above windshield) is arranged off-axis with respect to the optical imaging system, the image reflected by the screen into the human eye is distorted, which greatly limits the display quality of the image.

A technique is currently known that performs compensation processing on the image in the electrical domain to reduce the distortion of the image observed by the human eye.

However, this prior art needs to perform compensation processing on every image. When, for example, the image pixel count is high or the video frame rate is high, the delay of the compensation processing, and hence the imaging delay, increases greatly; moreover, the performance requirements on the image processing unit and the like are high, which increases the processing cost.
Summary

This application provides an imaging method, an imaging device, an optical imaging system, and a vehicle, which can reduce imaging delay and processing cost.

According to a first aspect, an imaging method is provided, applied to an optical imaging system. The system includes a spatial light modulator and at least one lens. The spatial light modulator is used to modulate an electrical signal of an image to generate an imaging light beam of the image, and the imaging light beam passes through the lens and a screen and enters the human eye, where the screen is arranged off-axis with respect to the optical imaging system. The method includes:

obtaining a first modulation parameter, where the first modulation parameter is determined according to a first distorted image and a first target image, the first distorted image is the image presented after a training image undergoes the imaging processing of the optical imaging system and the screen, the first target image is an image of the training image, and the distortion of the first target image is within a preset range; and controlling the spatial light modulator to modulate the electrical signal of a first image to be imaged based on the first modulation parameter.
Specifically, the first distorted image is the image presented after the electrical signal of the training image undergoes modulation based on an original modulation parameter and reflection by the screen.

The first target image is the image that the human eye is expected to observe when the light beam of the image is imaged through the screen.

In one implementation, the image of the first target image is determined according to the image of the first distorted image; for example, the image of the first target image includes the pixel region of the image of the first distorted image whose degree of distortion is within a preset range.

According to the solution of this application, the first modulation parameter of the spatial light modulator is determined based on the distorted image and the expected image, and the electrical signal of the image to be presented is modulated according to the first modulation parameter. This can reduce the distortion of the image to be presented while avoiding the large imaging delay caused by compensating the image in the electrical domain, and can reduce the processing cost added by compensating the image in the electrical domain.
The screen is arranged off-axis with respect to the optical imaging system (for example, the at least one lens).

Since the distortion is large when the screen is arranged off-axis with respect to the optical imaging system, the solution of this application is applicable to scenarios in which the screen is arranged off-axis with respect to the optical imaging system.

In one implementation, the method is executed by a processor in the optical imaging system.

In another implementation, the method is executed by the spatial light modulator in the optical imaging system.

In another implementation, the method is executed by a control module in the spatial light modulator.
Optionally, the obtaining a first modulation parameter includes: controlling the spatial light modulator to modulate the electrical signal of the training image based on the original modulation parameter, to obtain the first distorted image; adjusting the original modulation parameter so that the deviation between the first distorted image and the first target image is within a preset range; and determining the adjusted original modulation parameter as the first modulation parameter.

Completing the determination of the first modulation parameter by a module in the optical imaging system enables online processing, so that applications in various environments can be handled, improving practicality and reliability.

Optionally, the obtaining a first modulation parameter includes: sending the image of the first distorted image to a server; and obtaining the first modulation parameter from the server.

Optionally, the method further includes sending the training image to the server.

Completing the determination of the first modulation parameter by a third-party server can reduce the device burden caused by determining the first modulation parameter, and can reduce the imaging delay caused by determining the first modulation parameter online.
Optionally, the method further includes: obtaining a first correspondence between K modulation parameters, including the first modulation parameter, and K image parameters, where the k-th modulation parameter is determined according to the distorted image and target image of a training image having the k-th image parameter, the k-th modulation parameter corresponds to the k-th image parameter, K ≥ 2, and k ∈ [1, K]; and the obtaining a first modulation parameter includes: determining, according to the first correspondence, the modulation parameter corresponding to the image parameter of the first image as the first modulation parameter.

Any two of the K modulation parameters have different values, and any two of the K image parameters have different values.

In other words, the method further includes: obtaining a first correspondence between multiple modulation parameters, including the first modulation parameter, and multiple image parameters, where each modulation parameter is determined according to the distorted image and target image of a training image having the image parameter corresponding to that modulation parameter; and the obtaining a first modulation parameter includes: determining, according to the first correspondence, the modulation parameter corresponding to the image parameter of the first image as the first modulation parameter.

Specifically, for example, when the modulation parameters include Zernike coefficients, any two of the K modulation parameters having different values can be understood as the values of the Zernike coefficients being different.

When each modulation parameter is a parameter group including multiple kinds of parameters, any two of the K modulation parameters having different values can be understood as at least one kind of parameter in the two parameter groups having different values.

For example, when the image parameters include image size, the values of the image size in any two of the K image parameters are different.

When each image parameter is a parameter group including multiple kinds of parameters, any two of the K image parameters having different values can be understood as at least one kind of parameter in the two parameter groups having different values.

Since images with different image parameters may produce different distortions, this application can flexibly handle the imaging requirements of images with different image parameters, further improving the practicality of this application.
Optionally, the image parameters include at least one of the following parameters: image size, image color, image shape, image resolution.
Optionally, the method further includes: obtaining a second correspondence between M modulation parameters, including the first modulation parameter, and M position parameters, where the position parameter is used to indicate the positional relationship between the human eye and the screen, the m-th modulation parameter is determined according to the distorted image and target image of the training image under the m-th position parameter, the m-th modulation parameter corresponds to the m-th position parameter, any two of the M modulation parameters have different values, and any two of the M position parameters have different values; and the obtaining a first modulation parameter includes: determining, according to the second correspondence, the modulation parameter corresponding to the position parameter of the first image as the first modulation parameter.

In other words, the method further includes: obtaining a second correspondence between multiple modulation parameters, including the first modulation parameter, and multiple position parameters, where the position parameter is used to indicate the relative position between the human eye and the screen when the image is observed, and each modulation parameter is determined according to the distorted image and target image of a training image having the position parameter corresponding to that modulation parameter; and the obtaining a first modulation parameter includes: determining, according to the second correspondence, the modulation parameter corresponding to the position parameter of the first image as the first modulation parameter.

Since the distortion observed may differ when the image is observed at different positions, this application can flexibly handle the requirements of images observed at different positions, further improving the practicality of this application.

Optionally, the position parameters include at least one of the following parameters: the distance between the human eye and the incidence point of the light beam on the screen, the position of the projection of the human eye on the screen in the horizontal direction of the screen, and the position of the projection of the human eye on the screen in the vertical direction of the screen.

For example, when the position parameters include the distance between the human eye and the incidence point of the light beam on the screen, the values of this distance in any two of the M position parameters are different.

When each position parameter is a parameter group including multiple kinds of parameters, any two of the M position parameters having different values can be understood as at least one kind of parameter in the two parameter groups having different values.
Optionally, the method further includes: obtaining a third correspondence between N modulation parameters, including the first modulation parameter, and N screen parameters, where the n-th modulation parameter is determined according to the distorted image and target image of the training image imaged through a screen having the n-th screen parameter, the n-th modulation parameter corresponds to the n-th screen parameter, any two of the N modulation parameters have different values, and any two of the N screen parameters have different values; and the obtaining a first modulation parameter includes: determining, according to the third correspondence, the modulation parameter corresponding to the screen parameter of a first screen as the first modulation parameter, where the first screen is the screen used for imaging the first image.

In other words, the method further includes: obtaining a third correspondence between multiple modulation parameters, including the first modulation parameter, and multiple screen parameters, where each modulation parameter is determined according to the distorted image and target image imaged through a screen having the screen parameter corresponding to that modulation parameter; and the obtaining a first modulation parameter includes: determining, according to the third correspondence, the modulation parameter corresponding to the screen parameter of the first screen as the first modulation parameter, where the first screen is the screen used for imaging the first image.

Since screens with different screen parameters may produce different distortions, this application can be flexibly applied in scenarios using screens with different screen parameters, further improving the practicality of this application.

Optionally, the screen parameters include at least one of the following parameters: screen shape, screen thickness, screen material, screen refractive index, screen color.

For example, when the screen parameters include the screen shape, the screen shapes (or the index values corresponding to the screen shapes) in any two of the N screen parameters are different.

When each screen parameter is a parameter group including multiple kinds of parameters, any two of the N screen parameters having different values can be understood as at least one kind of parameter in the two parameter groups having different values.
Optionally, the optical imaging system is arranged in a vehicle, and the screen includes the windshield of the vehicle.

In addition to vehicles, the optical imaging system may be arranged in other means of transport having a windshield or another device that can serve as a screen, for example, trains, aircraft, or ships.

Moreover, in addition to the windshield, the screen may be a device having a reflection or refraction function, such as a vehicle window.

Optionally, the first modulation parameter includes Zernike coefficients.
According to a second aspect, an imaging device is provided, including a processor coupled with a memory. The memory is used to store computer programs or instructions, and the processor is used to execute the computer programs or instructions in the memory, so that the processor obtains a first modulation parameter, where the first modulation parameter is determined according to a first distorted image and a first target image, the first distorted image is the image presented after the light beam generated by modulating a training image by a spatial light modulator is refracted by at least one lens and reflected by a screen, the first target image is an image of the training image, and the distortion of the first target image is within a preset range; and so that the processor controls the spatial light modulator to modulate, based on the first modulation parameter, the electrical signal of a first image to be imaged.

Specifically, the first distorted image is the image presented after the electrical signal of the training image undergoes modulation based on an original modulation parameter and reflection by the screen.

According to the solution of this application, the first modulation parameter of the spatial light modulator is determined based on the distorted image and the expected image, and the electrical signal of the image to be presented is modulated according to the first modulation parameter. This can reduce the distortion of the image to be presented while avoiding the large imaging delay caused by compensating the image in the electrical domain, and can reduce the processing cost added by compensating the image in the electrical domain.

The screen is arranged off-axis with respect to the optical imaging system (for example, the at least one lens).

Since the distortion is large when the screen is arranged off-axis with respect to the optical imaging system, the solution of this application is applicable to scenarios in which the screen is arranged off-axis with respect to the optical imaging system.

In one implementation, the processor is configured independently of the spatial light modulator in the optical imaging system.

In another implementation, the processor is arranged in the spatial light modulator.

Optionally, the processor is further configured to: control the spatial light modulator to modulate the electrical signal of the training image based on the original modulation parameter, to obtain the first distorted image; adjust the original modulation parameter so that the deviation between the first distorted image and the first target image is within a preset range; and determine the adjusted original modulation parameter as the first modulation parameter.

This enables online processing, so that applications in various environments can be handled, improving practicality and reliability.

Optionally, the device further includes: a transceiver, configured to send the training image and the image of the first distorted image to a server, and to obtain the first modulation parameter from the server.

Completing the determination of the first modulation parameter by a third-party server can reduce the device burden caused by determining the first modulation parameter, and can reduce the imaging delay caused by determining the first modulation parameter online.
Optionally, the processor is further configured to obtain a first correspondence between multiple modulation parameters, including the first modulation parameter, and multiple image parameters, where each modulation parameter is determined according to the distorted image and target image of a training image having the image parameter corresponding to that modulation parameter, and to determine, according to the first correspondence, the modulation parameter corresponding to the image parameter of the first image as the first modulation parameter.

In other words, the processor obtains a first correspondence between K modulation parameters, including the first modulation parameter, and K image parameters, where the k-th modulation parameter is determined according to the distorted image and target image of a training image having the k-th image parameter, the k-th modulation parameter corresponds to the k-th image parameter, K ≥ 2, k ∈ [1, K], any two of the K modulation parameters have different values, and any two of the K image parameters have different values.

Since images with different image parameters may produce different distortions, this application can flexibly handle the imaging requirements of images with different image parameters, further improving the practicality of this application.

Optionally, the image parameters include at least one of the following parameters: image size, image color, image shape, image resolution.

Optionally, the processor is further configured to obtain a second correspondence between multiple modulation parameters, including the first modulation parameter, and multiple position parameters, where the position parameter is used to indicate the relative position between the human eye and the screen when the image is observed, and each modulation parameter is determined according to the distorted image and target image of a training image having the position parameter corresponding to that modulation parameter, and to determine, according to the second correspondence, the modulation parameter corresponding to the position parameter of the first image as the first modulation parameter.

In other words, the processor obtains a second correspondence between M modulation parameters, including the first modulation parameter, and M position parameters, where the position parameter is used to indicate the positional relationship between the human eye and the screen, the m-th modulation parameter is determined according to the distorted image and target image of the training image under the m-th position parameter, the m-th modulation parameter corresponds to the m-th position parameter, any two of the M modulation parameters have different values, and any two of the M position parameters have different values.

Since the distortion observed may differ when the image is observed at different positions, this application can flexibly handle the requirements of images observed at different positions, further improving the practicality of this application.

Optionally, the position parameters include at least one of the following parameters: the distance between the human eye and the incidence point of the light beam on the screen, the position of the projection of the human eye on the screen in the horizontal direction of the screen, and the position of the projection of the human eye on the screen in the vertical direction of the screen.

Optionally, the processor is further configured to obtain a third correspondence between multiple modulation parameters, including the first modulation parameter, and multiple screen parameters, where each modulation parameter is determined according to the distorted image and target image imaged through a screen having the screen parameter corresponding to that modulation parameter, and to determine, according to the third correspondence, the modulation parameter corresponding to the screen parameter of the first screen as the first modulation parameter, where the first screen is the screen used for imaging the first image.

In other words, the processor obtains a third correspondence between N modulation parameters, including the first modulation parameter, and N screen parameters, where the n-th modulation parameter is determined according to the distorted image and target image of the training image imaged through a screen having the n-th screen parameter, the n-th modulation parameter corresponds to the n-th screen parameter, any two of the N modulation parameters have different values, and any two of the N screen parameters have different values.

Since screens with different screen parameters may produce different distortions, this application can be flexibly applied in scenarios using screens with different screen parameters, further improving the practicality of this application.

Optionally, the screen parameters include at least one of the following parameters: screen shape, screen thickness, screen material, screen refractive index, screen color.

Optionally, the first modulation parameter includes Zernike coefficients.
According to a third aspect, an optical imaging system is provided, including the processor of the second aspect and its various implementations, a spatial light modulator, and at least one lens.

According to a fourth aspect, an optical imaging system is provided, including: a spatial light modulator, configured to modulate an electrical signal of a first image to be imaged based on a first modulation parameter to generate an imaging light beam of the first image, where the first modulation parameter is determined according to a first distorted image and a first target image, the first distorted image is the image presented after a training image undergoes the imaging processing of the spatial light modulator and a screen, the first target image is an image of the training image, and the distortion of the first target image is within a preset range; and at least one lens, configured to refract the imaging light beam of the first image.

Optionally, the spatial light modulator is specifically configured to modulate the electrical signal of the training image based on an original modulation parameter, to obtain the first distorted image. The optical imaging system further includes: a camera device, configured to photograph the first distorted image. The spatial light modulator is further configured to obtain the first distorted image from the camera device, adjust the original modulation parameter so that the deviation between the first distorted image and the first target image is within a preset range, and determine the adjusted original modulation parameter as the first modulation parameter.

Optionally, the optical imaging system further includes: a camera device, configured to obtain the image of the first distorted image; and a transceiver, configured to send the image of the first distorted image to a server and to receive the first modulation parameter from the server.

Optionally, the transceiver is further configured to send the training image to the server.

Optionally, the spatial light modulator is configured to obtain a first correspondence between K modulation parameters, including the first modulation parameter, and K image parameters, where the k-th modulation parameter is determined according to the distorted image and target image of a training image having the k-th image parameter, the k-th modulation parameter corresponds to the k-th image parameter, K ≥ 2, k ∈ [1, K], any two of the K modulation parameters have different values, and any two of the K image parameters have different values, and to determine, according to the first correspondence, the modulation parameter corresponding to the image parameter of the first image as the first modulation parameter.

In other words, the spatial light modulator is configured to obtain a first correspondence between multiple modulation parameters, including the first modulation parameter, and multiple image parameters, where each modulation parameter is determined according to the distorted image and target image of a training image having the image parameter corresponding to that modulation parameter, and to determine, according to the first correspondence, the modulation parameter corresponding to the image parameter of the first image as the first modulation parameter.

Optionally, the image parameters include at least one of the following parameters: image size, image color, image shape, image resolution.

Optionally, the spatial light modulator is configured to obtain a second correspondence between multiple modulation parameters, including the first modulation parameter, and multiple position parameters, where the position parameter is used to indicate the relative position between the human eye and the screen when the image is observed, and each modulation parameter is determined according to the distorted image and target image of a training image having the position parameter corresponding to that modulation parameter, and to determine, according to the second correspondence, the modulation parameter corresponding to the position parameter of the first image as the first modulation parameter.

In other words, the spatial light modulator is configured to obtain a second correspondence between M modulation parameters, including the first modulation parameter, and M position parameters, where the position parameter is used to indicate the positional relationship between the human eye and the screen, the m-th modulation parameter is determined according to the distorted image and target image of the training image under the m-th position parameter, the m-th modulation parameter corresponds to the m-th position parameter, any two of the M modulation parameters have different values, and any two of the M position parameters have different values, and to determine, according to the second correspondence, the modulation parameter corresponding to the position parameter of the first image as the first modulation parameter.

Optionally, the position parameters include at least one of the following parameters: the distance between the human eye and the incidence point of the light beam on the screen, the position of the projection of the human eye on the screen in the horizontal direction of the screen, and the position of the projection of the human eye on the screen in the vertical direction of the screen.

Optionally, the spatial light modulator is configured to obtain a third correspondence between multiple modulation parameters, including the first modulation parameter, and multiple screen parameters, where each modulation parameter is determined according to the distorted image and target image imaged through a screen having the screen parameter corresponding to that modulation parameter, and to determine, according to the third correspondence, the modulation parameter corresponding to the screen parameter of the first screen as the first modulation parameter, where the first screen is the screen used for imaging the first image.

In other words, the spatial light modulator is configured to obtain a third correspondence between N modulation parameters, including the first modulation parameter, and N screen parameters, where the n-th modulation parameter is determined according to the distorted image and target image of the training image imaged through a screen having the n-th screen parameter, the n-th modulation parameter corresponds to the n-th screen parameter, any two of the N modulation parameters have different values, and any two of the N screen parameters have different values, and to determine, according to the third correspondence, the modulation parameter corresponding to the screen parameter of the first screen as the first modulation parameter, where the first screen is the screen used for imaging the first image.

Optionally, the screen parameters include at least one of the following parameters: screen shape, screen thickness, screen material, screen refractive index, screen color.

Optionally, the first modulation parameter includes Zernike coefficients.
According to a fifth aspect, a means of transport (for example, a vehicle) is provided, including the optical imaging system of the third aspect and its various implementations.

According to a sixth aspect, a means of transport (for example, a vehicle) is provided, including the optical imaging system of the fourth aspect and its various implementations.

According to a seventh aspect, a computer program product is provided. The computer program product includes a computer program (which may also be called code or instructions); when the computer program is run, a computer is caused to perform the method in the first aspect and any of its possible implementations.

According to an eighth aspect, a computer-readable medium is provided. The computer-readable medium stores a computer program (which may also be called code or instructions); when it runs on a computer, the computer is caused to perform the method in the first aspect and any of its possible implementations.

According to a ninth aspect, a chip system is provided, including a memory and a processor. The memory is used to store a computer program, and the processor is used to call and run the computer program from the memory, so that a communication device in which the chip system is installed performs the method in the first aspect and any of its possible implementations.

The chip system may include an input circuit or interface for sending information or data, and an output circuit or interface for receiving information or data.
Brief Description of Drawings

FIG. 1 is a schematic diagram of an example of the imaging process of an AR HUD system.

FIG. 2 is a schematic diagram of the optical imaging system of this application.

FIG. 3 is a schematic interaction diagram of an example of the imaging method of this application.

FIG. 4 is a structural diagram of an example of the neural network device of this application.

FIG. 5 is a schematic diagram of an example of the layers of the neural network of this application.

FIG. 6 is a schematic interaction diagram of another example of the imaging method of this application.

FIG. 7 is a schematic diagram of an example of the imaging device of this application.
Detailed Description

The technical solutions of this application are described below with reference to the accompanying drawings.

The solutions of this application can be applied to an optical imaging system, which may include, but is not limited to, a heads-up display (HUD) system or an AR HUD system.

Specifically, a HUD may also be called a windshield instrument display or head-up display; that is, the optical imaging system projects important information onto a holographic half-mirror on the windshield, so that the driver can see the important information clearly without lowering his or her head.

Augmented reality (AR) technology is a technology that skillfully merges virtual information with the real world. It makes extensive use of multimedia, three-dimensional modeling, real-time tracking and registration, intelligent interaction, sensing, and other technical means to simulate computer-generated virtual information such as text, images, three-dimensional models, music, and video and apply it to the real world; the two kinds of information complement each other, thereby "augmenting" the real world.

AR HUD can be understood as the fusion of AR technology and HUD technology, that is, superimposing some driving information reasonably and vividly in the driver's line of sight and combining it with actual traffic conditions.

FIG. 1 shows a schematic diagram of the imaging process of this application. As shown in FIG. 1, a light beam of the image to be imaged is emitted from the optical imaging system, is reflected (or refracted) by the windshield, and enters the human eye, so that the human eye observes the image. For example, when the human eye and the optical imaging system are on the same side of the windshield, the observed image is a virtual image; when the human eye and the optical imaging system are on opposite sides of the windshield, the observed image is a real image.

Usually, the windshield is arranged off-axis with respect to the optical imaging system (that is, the lens of the optical imaging system), so the image observed by the human eye is distorted.
FIG. 2 shows a schematic diagram of an example of the optical imaging system of this application. As shown in FIG. 2, the optical imaging system 100 includes:

a spatial light modulator 110;

at least one lens 120;

a processor 130.
A spatial light modulator (SLM) is a class of devices that can load information onto a one-dimensional or two-dimensional optical data field in order to effectively utilize the inherent speed, parallelism, and interconnection capability of light, and can modulate the light intensity at each point of a two-dimensional space.

Under the control of an electrical drive signal or another signal that varies with time, an SLM can change the amplitude or intensity, phase, polarization state, and wavelength of the spatial light distribution, or convert incoherent light into coherent light.

Under active control, an SLM can modulate a certain parameter of the light field through liquid crystal molecules, for example, modulating the amplitude of the light field, modulating the phase through the refractive index, modulating the polarization state through the rotation of the polarization plane, or realizing the conversion of incoherent light into coherent light, thereby writing certain information into the light wave to achieve light wave modulation. It can conveniently load information into a one-dimensional or two-dimensional light field, and use advantages of light such as wide bandwidth and multi-channel parallel processing to quickly process the loaded information.

An SLM can include multiple independent units arranged spatially in a one-dimensional or two-dimensional array. Each unit can independently receive control by an optical or electrical signal and change its own optical properties according to this signal, thereby modulating the light wave illuminating it.

In this application, the spatial light modulator 110 can obtain the data (that is, the electrical signal) of the image to be imaged from a processing unit in the electrical domain, for example, a graphics processing unit (GPU) or a central processing unit (CPU), and modulate the data onto a light beam, thereby forming the light beam of the image.

The light beam generated by the spatial light modulator 110 exits after being refracted by the lens 120.

In this application, certain modulation parameters of the spatial light modulator 110 can be adjusted, so that the parameters of the image formed by the light beam emitted from the spatial light modulator 110, such as the shape and size of the image, can be adjusted.
By way of example and not limitation, the modulation parameters may include, but are not limited to, Zernike coefficients, that is, the coefficients in Zernike polynomials.

Specifically, aberration refers to an imaging defect in an optical system. In geometrical optics, aberrations (geometric aberrations) are divided into monochromatic aberrations and chromatic aberrations. Monochromatic aberrations include spherical aberration, coma, astigmatism, field curvature, and distortion; chromatic aberrations include axial chromatic aberration and lateral chromatic aberration. In physical optics, aberration is called wavefront aberration, that is, the distance between the waveform formed by a spherical wave emitted from a point light source after passing through the optical system and an ideal spherical wave. Wavefront aberration can be expressed by Zernike polynomials or by geometric aberrations such as spherical aberration and coma. Distortion can be understood as being caused by the different prismatic image shifts produced at the peripheral points of a square object after passing through the optical system.

The optical primary mirror is a component of a space optical remote sensor, and the surface-shape accuracy of its mirror surface is one of the important factors affecting the resolution of the space optical remote sensor. During ground alignment, under the action of the gravitational field, the mirror surface deforms both when the optical axis is horizontal and when it is vertical. Therefore, mirror-surface deformation analysis needs to be performed when designing an optical mirror, to check whether the designed optical mirror meets the surface-shape accuracy requirements. Mirror deformation includes rigid-body displacement and surface deformation: rigid-body displacement causes image tilt, off-axis shift, and defocus of the optical system, while surface deformation affects the wavefront error of the optical system. Rigid-body displacement can be eliminated by adjusting the relative positions of optical elements, while surface deformation cannot; therefore, the surface-deformation component of mirror deformation truly reflects the surface-shape accuracy of the optical mirror. Taking a free-form mirror surface as an example, surface-shape data is obtained through finite element analysis, Zernike polynomials are used to accurately fit the deformed surface shape, the rigid-body displacement component is separated out to obtain a surface-deformation contour map, and the root mean square of the surface deformation and the difference between its maximum and minimum values are calculated.

It should be understood that the specific examples of modulation parameters listed above are merely illustrative, and this application is not limited thereto. Other parameters that can change the shape, size, and the like of the image formed by the light beam emitted from the spatial light modulator all fall within the protection scope of this application. For example, the modulation parameters may also include parameters of the spatial light modulator for controlling the amplitude, intensity, phase, polarization state, wavelength, and the like of the light beam.
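To illustrate what using Zernike coefficients as modulation parameters can look like numerically, the following editorial sketch evaluates a wavefront as a weighted sum of the first few Zernike polynomials over the unit pupil; the basis size and coefficient values are arbitrary assumptions, and adjusting the coefficients reshapes the modulated wavefront:

```python
import numpy as np

def zernike_wavefront(coeffs, rho, theta):
    """Wavefront = sum of Zernike terms weighted by their coefficients.
    Uses a small fixed basis for illustration: piston, x/y tilt,
    defocus, and one astigmatism term."""
    basis = [
        np.ones_like(rho),                        # piston
        2 * rho * np.cos(theta),                  # x tilt
        2 * rho * np.sin(theta),                  # y tilt
        np.sqrt(3) * (2 * rho**2 - 1),            # defocus
        np.sqrt(6) * rho**2 * np.cos(2 * theta),  # astigmatism
    ]
    return sum(c * z for c, z in zip(coeffs, basis))

# Arbitrary example coefficients (one candidate "modulation parameter").
coeffs = [0.0, 0.05, -0.02, 0.30, 0.10]
rho, theta = np.meshgrid(np.linspace(0, 1, 64),
                         np.linspace(0, 2 * np.pi, 64))
w = zernike_wavefront(coeffs, rho, theta)
print(w.shape)  # (64, 64) phase map over the pupil
```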
Thus, in this application, the distortion of the image observed by the human eye can be reduced by adjusting the above modulation parameters.
Specifically, the modulation parameter may be a trained parameter that can reduce the distortion of the image observed by the human eye. The processor 130 can obtain this modulation parameter and control the above spatial light modulator 110 to use it to perform light modulation on the image to be imaged.

It should be noted that the structure of the optical imaging system 100 shown in FIG. 2 is merely illustrative, and this application is not limited thereto. For example, the processor 130 may be integrated in the spatial light modulator 110, or the functions of the processor 130 may be implemented by a device or module in the spatial light modulator 110 capable of performing computation.

Optionally, the optical imaging system 100 may further include a camera device 140.

The camera device 140 is used to photograph the distorted image, which is used for determining the modulation parameter; this process is described in detail later.

The camera device 140 is arranged at a position corresponding to the position of the human eye, so that the photographed distorted image is the same as or approximately the same as the distorted image observed by the human eye.

The camera device 140 may be detachable, so that after the photographing of the distorted image is completed, the camera device 140 can be removed.

The optical imaging system 100 may further include other components included in optical imaging systems in the prior art; to avoid redundancy, detailed description thereof is omitted here.

The process of determining (or adjusting) the modulation parameter is described in detail below.
FIG. 3 shows a schematic diagram of an example of the process of determining the modulation parameter. As shown in FIG. 3, in S210, the spatial light modulator 110 can obtain the electrical-domain data (or electrical-domain signal) of image #A (that is, an example of the training image) from an electrical-domain graphics processor such as a GPU.

In S220, the spatial light modulator 110 can modulate the electrical-domain data based on modulation parameter #A (that is, an example of the original modulation parameter) to generate the light beam of image #A, hereinafter referred to as light beam #A for ease of understanding.

The modulation parameter #A may be a default parameter or a factory setting, or the modulation parameter #A may be a parameter configured by the processor 130 for the spatial light modulator 110.

The light beam #A is refracted by the lens 120 and reflected (or refracted) by screen #A, and then enters the camera device 140, so that the camera device 140 can photograph the distorted image of image #A, specifically, the image of the distorted image, hereinafter denoted as image #B for ease of understanding and distinction.

In S230, the processor 130 obtains the image #B from the camera device 140.

In S240, the processor 130 can determine the expected image of image #A, specifically, the image of the expected image, hereinafter denoted as image #C for ease of understanding and distinction. The expected image of image #A can be understood as an image of image #A observed by the human eye that is not distorted or whose degree of distortion is within a preset range.
By way of example and not limitation, the processor 130 can obtain the expected image of image #A, that is, image #C, in the following manner.

Specifically, the distortion mapping relationship differs at different observation positions in space, so a limited number of different observation positions need to be calibrated accordingly, and interpolation is then used to generate the distortion mapping relationship for an arbitrary observation position in space. At the same time, we want the final image observed at any position to be a fixed rectangular field of view in space, so an appropriate imaging-region size needs to be selected for each observation position during calibration.

We set the calibration observation positions on the same horizontal line in space. At a particular observation position, according to the distorted image corresponding to the projected calibration dot array, the maximum rectangular field of view R at that position can be determined by selecting a rectangle inscribed in the distorted dot array. Suppose the number of calibration observation positions is N. For the i-th position, in the distorted dot-array image (for example, image #B), the maximum abscissa of the leftmost column is x_left_i, the minimum abscissa of the rightmost column is x_right_i, the maximum ordinate of the top row is y_up_i, and the minimum ordinate of the bottom row is y_down_i. The rectangular range Ri=[x_left_i, x_right_i, y_up_i, y_down_i] is then inscribed in the distorted dot-array field of view.

Thus, the rectangular range Ri can serve as the expected image of image #A (that is, image #C).

In addition, since the amount of distortion differs at different observation positions, the actual spatial positions corresponding to different Ri in the photographed pictures (for example, image #B) are not the same, so a common field-of-view region corresponding to the same spatial range needs to be found inside the different Ri. Because each Ri is a range determined from a distorted dot-array image photographed at a different observation position, to find the common rectangular field of view in space, the different Ri need to be moved to the same observation position. Taking two observation positions i and j as an example, since they are on the same horizontal line, there is only a horizontal displacement Δx_ij; Δx_ij is positive if position j is to the right of position i, and negative otherwise. The observation result Rj_i corresponding to Rj at observation position i can then be calculated from the pinhole imaging model. The specific calculation formula is as follows:

Rj_i=[x_left_j+f/Z*Δx_ij*ppi, x_right_j+f/Z*Δx_ij*ppi, y_up_j, y_down_j]

where f is the camera focal length, Z is the distance from the imaging plane to the camera, and ppi is the number of pixels per unit distance on the camera CCD plane.

The common rectangular field-of-view region selected at position i is R*=[max{x_left_i, x_left_j+f/Z*Δx_ij*ppi}, min{x_right_i, x_right_j+f/Z*Δx_ij*ppi}, max{y_up_i, y_up_j}, min{y_down_i, y_down_j}].

When there are more than two observation positions, R* can be obtained by similarly moving all observation positions to the same observation position and taking the common region as above.

After R* is obtained, the rectangular field-of-view range R*_j at an arbitrary observation position j can be restored by the pinhole imaging model:

R*_j=[max{x_left_i, x_left_j+f/Z*Δx_ij*ppi}-f/Z*Δx_ij*ppi, min{x_right_i, x_right_j+f/Z*Δx_ij*ppi}-f/Z*Δx_ij*ppi, max{y_up_i, y_up_j}, min{y_down_i, y_down_j}].

After the mapping relationships of multiple observation points on one line are calibrated, the distortion mapping relationship at an arbitrary position is obtained by interpolation:

For an arbitrary observation point k in space, first obtain the interval of calibrated observation points in which the projection of the observation position lies. The distortion mapping tables of the two nearest calibrated observation points are used for linear interpolation to generate the distortion mapping table P_k at position k. The interpolation formula is as follows:

P_k=(1-α)*P_i+α*P_{i+1}

where α=d1/(d1+d2), and P_i and P_{i+1} are the distortion mapping tables of the two adjacent calibrated positions.
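The rectangle and interpolation formulas above translate directly into code. The following editorial sketch, under assumed units (pixel coordinates for the rectangles; f and Z in meters, ppi in pixels per meter), implements Rj shifted to position i, the common field of view R*, its restoration R*_j, and the interpolated mapping table P_k:

```python
import numpy as np

def shift_rect(rect_j, f, Z, dx_ij, ppi):
    """Rj observed from position i via the pinhole model:
    the horizontal bounds shift by f/Z * dx_ij * ppi pixels."""
    xl, xr, yu, yd = rect_j
    s = f / Z * dx_ij * ppi
    return [xl + s, xr + s, yu, yd]

def common_rect(rect_i, rect_j_at_i):
    """R*: the common region of the two rectangles at the same position."""
    return [max(rect_i[0], rect_j_at_i[0]), min(rect_i[1], rect_j_at_i[1]),
            max(rect_i[2], rect_j_at_i[2]), min(rect_i[3], rect_j_at_i[3])]

def restore_rect(r_star, f, Z, dx_ij, ppi):
    """R*_j: move R* back to observation position j."""
    s = f / Z * dx_ij * ppi
    return [r_star[0] - s, r_star[1] - s, r_star[2], r_star[3]]

def interp_mapping(P_i, P_i1, d1, d2):
    """P_k = (1 - alpha) * P_i + alpha * P_{i+1}, alpha = d1 / (d1 + d2)."""
    alpha = d1 / (d1 + d2)
    return (1 - alpha) * P_i + alpha * P_i1

# Example with assumed numbers: f = 0.008 m, Z = 2.0 m, ppi = 200000 px/m.
Ri, Rj = [120, 900, 80, 600], [100, 880, 82, 598]
Rj_i = shift_rect(Rj, 0.008, 2.0, 0.05, 200000)      # shift = 40 px
R_star = common_rect(Ri, Rj_i)
print(R_star, restore_rect(R_star, 0.008, 2.0, 0.05, 200000))

P_k = interp_mapping(np.zeros((2, 2)), np.ones((2, 2)), d1=1.0, d2=3.0)
print(P_k)  # all 0.25: one quarter of the way from P_i to P_{i+1}
```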
In S250, the processor 130 can train based on the image #B and the image #C to obtain modulation parameter #B.

For example, the processor 130 can determine whether the similarity between image #B and image #C satisfies a preset condition, for example, whether the deviation between the positions of a pixel at the same location in image #B and in image #C is less than or equal to a preset deviation value.

If the determination is yes, the modulation parameter #A can be determined as the modulation parameter #B.

If the determination is no, the following operations can be performed:

In step a, the processor 130 can adjust the modulation parameter #A based on a specified adjustment direction and adjustment step, to obtain modulation parameter #C.

In step b, the spatial light modulator 110 can modulate the electrical-domain data of image #A based on the modulation parameter #C to generate the light beam of image #A, hereinafter referred to as light beam #B for ease of understanding.

In step c, the light beam #B is refracted by the lens 120 and reflected (or refracted) by the screen, and then enters the camera device 140, so that the camera device 140 can photograph the distorted image of image #A, specifically, the image of the distorted image, hereinafter denoted as image #D for ease of understanding and distinction.

In step d, the processor 130 obtains the image #D from the camera device 140.

In step e, the processor 130 can determine whether the similarity between image #D and image #C satisfies the preset condition, for example, whether the deviation between the positions of a pixel at the same location in image #D and in image #C is less than or equal to the preset deviation value.

If the determination is yes, the modulation parameter #C can be determined as the modulation parameter #B.

If the determination is no, the above steps a to e are repeated until the similarity between the image of the distorted image (denoted as image #X) formed by the light beam generated based on the adjusted modulation parameter (denoted as modulation parameter #X) and the image #C satisfies the preset condition, whereupon the modulation parameter #X can be determined as the modulation parameter #B.
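Steps a to e form a closed-loop search over the modulation parameter. The following editorial sketch shows one possible realization; `render` stands in for the modulate-project-photograph chain of FIG. 3, the fixed-direction coordinate search is only one admissible adjustment rule, and the tolerance check is simplified to an intensity deviation:

```python
import numpy as np

def within_tolerance(captured, expected, max_deviation=0.05):
    """Step e, simplified: the captured image must not deviate from the
    expected image by more than the preset value."""
    return np.max(np.abs(captured - expected)) <= max_deviation

def tune_modulation_parameter(params, expected, render, step=0.05,
                              max_iters=1000):
    """Steps a-e: adjust the coefficients (a numpy array) along a fixed
    direction and step until the rendered distorted image matches the
    expected image; returns the adjusted parameter (modulation parameter #B)."""
    for _ in range(max_iters):
        captured = render(params)                   # steps b-d
        if within_tolerance(captured, expected):    # step e
            return params
        errors = []
        for idx in range(len(params)):              # step a: trial adjustments
            trial = params.copy()
            trial[idx] += step
            errors.append(np.sum((render(trial) - expected) ** 2))
        params[np.argmin(errors)] += step           # keep the best adjustment
    raise RuntimeError("did not converge within max_iters")

# Tiny synthetic demo: "rendering" offsets the expected image by the params.
expected = np.zeros((4, 4))
demo_render = lambda p: expected + (0.5 - p.sum())  # distortion shrinks as sum -> 0.5
result = tune_modulation_parameter(np.zeros(2), expected, demo_render)
print(round(float(result.sum()), 2))                # ~0.5
```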
It should be understood that the above process of training the modulation parameter #B based on the image #B and the image #C is merely illustrative, and this application is not limited thereto. For example, the image #B and the image #C may also be used as input data of a neural network model whose output data is the modulation parameter #B.

The training process of this neural network model is described in detail below.

For ease of understanding, the relevant terms and concepts such as neural networks involved in the embodiments of this application are first introduced below.

1. Neural network
A neural network can be composed of neural units. A neural unit can be an operation unit taking x_s and an intercept 1 as input, and the output of the operation unit can be:

f(W_1*x_1 + W_2*x_2 + ... + W_n*x_n + b), that is, f(∑_s W_s*x_s + b)

where s = 1, 2, ..., n, n is a natural number greater than 1, W_s is the weight of x_s, and b is the bias of the neural unit. f is the activation function of the neural unit, used to introduce nonlinear characteristics into the neural network to convert the input signal of the neural unit into an output signal. The output signal of the activation function can be used as the input of the next convolutional layer. The activation function can be a sigmoid function. A neural network is a network formed by connecting many of the above single neural units, that is, the output of one neural unit can be the input of another neural unit. The input of each neural unit can be connected to the local receptive field of the previous layer to extract features of the local receptive field; the local receptive field can be a region composed of several neural units.
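A short sketch of the single neural unit just defined, using the sigmoid activation the text mentions (the input and weight values are arbitrary):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def neural_unit(x, W, b):
    """Output of one unit: f(sum_s W_s * x_s + b), with f = sigmoid."""
    return sigmoid(np.dot(W, x) + b)

x = np.array([0.5, -1.2, 3.0])   # inputs x_s
W = np.array([0.4, 0.1, -0.6])   # weights W_s
print(neural_unit(x, W, b=1.0))  # bias b (the intercept term)
```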
2. Convolutional neural network

A convolutional neural network is a deep neural network with a convolutional structure. A convolutional neural network contains a feature extractor composed of a convolutional layer and a subsampling layer. The feature extractor can be regarded as a filter, and the convolution process can be regarded as convolving a trainable filter with an input image or a convolutional feature map. The convolutional layer refers to the neuron layer in a convolutional neural network that performs convolution processing on the input signal. In a convolutional layer of a convolutional neural network, a neuron may be connected to only some of the neurons in the adjacent layer. A convolutional layer usually contains several feature planes, and each feature plane can be composed of some rectangularly arranged neural units. Neural units in the same feature plane share weights, and the shared weights here are the convolution kernel. Weight sharing can be understood as meaning that the way image information is extracted is independent of position. The underlying principle is that the statistical information of one part of the image is the same as that of the other parts, which means that image information learned in one part can also be used in another part. Therefore, the same learned image information can be used for all positions on the image. In the same convolutional layer, multiple convolution kernels can be used to extract different image information; generally, the greater the number of convolution kernels, the richer the image information reflected by the convolution operation.

A convolution kernel can be initialized in the form of a matrix of random size, and during the training of the convolutional neural network the convolution kernel can obtain reasonable weights through learning. In addition, a direct benefit of sharing weights is to reduce the connections between the layers of the convolutional neural network while reducing the risk of overfitting.

3. Backpropagation algorithm

A convolutional neural network can use the error backpropagation (BP) algorithm to correct the values of the parameters in the initial super-resolution model during training, so that the reconstruction error loss of the super-resolution model becomes smaller and smaller. Specifically, forward propagation of the input signal to the output produces an error loss, and the parameters in the initial super-resolution model are updated by backpropagating the error-loss information, so that the error loss converges. The backpropagation algorithm is a backpropagation movement dominated by the error loss, aiming to obtain the optimal parameters of the super-resolution model, such as the weight matrix.
The training process of the neural network model is described below.

First, the system architecture for training the neural network model provided by the embodiments of this application is introduced. Referring to FIG. 4, an embodiment of this application provides a system architecture 300. As shown in the system architecture 300, the data collection device 360 (for example, the camera device 140 and the processor 130) is used to collect training data. The acquisition process of the training data can be similar to the acquisition process of the above image #C and image #B; in other words, the image #C and image #B can themselves serve as training data.

The data collection device 360 stores the training data in the database 330, and the training device 320 trains the neural network model 301 based on the training data maintained in the database 330, that is, the neural network model 301 corresponding to the modulation parameter #B.

It should be noted that, in practical applications, the training data maintained in the database 330 does not necessarily all come from the collection of the data collection device 360; it may also be received from other devices. It should also be noted that the training device 320 does not necessarily train the neural network model 301 entirely based on the training data maintained by the database 330; it may also obtain training data from the cloud or elsewhere for model training. The above description should not be construed as a limitation on the embodiments of this application.

The neural network model 301 trained by the training device 320 can be applied to the execution device 310.

In FIG. 4, the execution device 310 is configured with an I/O interface 312 for data interaction with the client device 340 (for example, the processor 130 or the spatial light modulator 110), and the client device 340 inputs data (for example, the above image #C and image #B) to the I/O interface 312.

The preprocessing module 313 is configured to perform preprocessing according to the input data received by the I/O interface 312; the process and method of the preprocessing can be similar to the prior art, and to avoid redundancy, detailed description is omitted here. It should be noted that in this application the input data may also not be preprocessed; in this case, the system architecture 300 may not include the preprocessing module 313.

The calculation module 311 is configured to perform calculation and other related processing on the input data from the preprocessing module 313 or the I/O interface 312 according to the above neural network model 301.

It should be noted that the execution device 310 can call data, code, and the like in the data storage system 350 for corresponding processing, and can also store data, instructions, and the like obtained by the corresponding processing into the data storage system 350.

Finally, the I/O interface 312 returns the processing result, such as the modulation parameter #B obtained above, to the client device 340.

In the case shown in FIG. 4, the user can manually give the input data, and this manual giving can be operated through the interface provided by the I/O interface 312. In another case, the client device 340 can automatically send input data to the I/O interface 312; if the client device 340 needs the user's authorization to automatically send the input data, the user can set the corresponding permission in the client device 340. The user can view the result output by the execution device 310 on the client device 340, and the specific presentation form can be display, sound, action, or another specific manner. The client device 340 can also serve as a data collection terminal, collecting the input data fed to the I/O interface 312 and the output results returned by the I/O interface 312, as shown in FIG. 4, as new sample data and storing them in the database 330. Of course, the collection may also bypass the client device 340, and the I/O interface 312 directly stores the input data fed to the I/O interface 312 and the output results returned by the I/O interface 312, as shown in FIG. 4, in the database 330 as new sample data.

It is worth noting that FIG. 4 is merely a schematic diagram of a system architecture provided by an embodiment of this application, and the positional relationships between the devices, components, modules, and the like shown in the figure do not constitute any limitation. For example, in FIG. 4, the data storage system 350 is an external memory relative to the execution device 310; in other cases, the data storage system 350 may also be placed in the execution device 310.

The neural network of this application may include, but is not limited to, a convolutional neural network (CNN). A convolutional neural network is a deep neural network with a convolutional structure and is a deep learning architecture. A deep learning architecture refers to performing multiple levels of learning at different levels of abstraction by means of machine learning algorithms. As a deep learning architecture, a CNN is a feed-forward artificial neural network in which each neuron can respond to the image input into it.
As shown in FIG. 5, a convolutional neural network (CNN) 400 may include an input layer 410, a convolutional layer/pooling layer 420 (where the pooling layer is optional), and a neural network layer 430.

Convolutional layer/pooling layer 420:

Convolutional layer:

As shown in FIG. 5, the convolutional layer/pooling layer 420 may include layers 421-426 as examples. In one implementation, layer 421 is a convolutional layer, layer 422 is a pooling layer, layer 423 is a convolutional layer, layer 424 is a pooling layer, layer 425 is a convolutional layer, and layer 426 is a pooling layer; in another implementation, layers 421 and 422 are convolutional layers, layer 423 is a pooling layer, layers 424 and 425 are convolutional layers, and layer 426 is a pooling layer. That is, the output of a convolutional layer can be used as the input of a subsequent pooling layer, or as the input of another convolutional layer to continue the convolution operation.

The internal working principle of a convolutional layer is introduced below, taking the convolutional layer 421 as an example.

The convolutional layer 421 can include many convolution operators. A convolution operator is also called a kernel, and its role in image processing is equivalent to a filter that extracts specific information from the input image matrix. A convolution operator can essentially be a weight matrix, which is usually predefined. During a convolution operation on an image, the weight matrix is usually processed along the horizontal direction of the input image one pixel after another (or two pixels after two pixels, depending on the value of the stride), so as to extract a specific feature from the image. The size of the weight matrix should be related to the size of the image. It should be noted that the depth dimension of the weight matrix is the same as the depth dimension of the input image, and during the convolution operation the weight matrix extends over the entire depth of the input image. Therefore, convolution with a single weight matrix produces a convolution output with a single depth dimension, but in most cases a single weight matrix is not used, and multiple weight matrices of the same size (rows × columns), that is, multiple homogeneous matrices, are applied instead. The outputs of the weight matrices are stacked to form the depth dimension of the convolutional image, where the dimension can be understood as determined by the "multiple" mentioned above. Different weight matrices can be used to extract different features in the image; for example, one weight matrix is used to extract image edge information, another weight matrix is used to extract a specific color of the image, and yet another weight matrix is used to blur unwanted noise in the image. The multiple weight matrices have the same size (rows × columns), the feature maps extracted by the multiple weight matrices of the same size also have the same size, and the extracted feature maps of the same size are then combined to form the output of the convolution operation.

In practical applications, the weight values in these weight matrices need to be obtained through a large amount of training. The weight matrices formed by the trained weight values can be used to extract information from the input image, so that the convolutional neural network 400 makes correct predictions.

When the convolutional neural network 400 has multiple convolutional layers, the initial convolutional layer (for example, 421) often extracts more general features, which can also be called low-level features; as the depth of the convolutional neural network 400 increases, the features extracted by the later convolutional layers (for example, 426) become more and more complex, such as high-level semantic features, and features with higher-level semantics are more applicable to the problem to be solved.

Pooling layer:

Since the number of training parameters often needs to be reduced, a pooling layer often needs to be introduced periodically after a convolutional layer. In the layers 421-426 illustrated by 420 in FIG. 5, one convolutional layer may be followed by one pooling layer, or multiple convolutional layers may be followed by one or more pooling layers. In image processing, the sole purpose of the pooling layer is to reduce the spatial size of the image. The pooling layer can include an average pooling operator and/or a maximum pooling operator for sampling the input image to obtain an image of smaller size. The average pooling operator can compute the pixel values in the image within a specific range to produce an average value as the result of average pooling. The maximum pooling operator can take the pixel with the largest value within a specific range as the result of maximum pooling. In addition, just as the size of the weight matrix in the convolutional layer should be related to the image size, the operators in the pooling layer should also be related to the image size. The size of the image output after processing by the pooling layer can be smaller than the size of the image input to the pooling layer, and each pixel in the image output by the pooling layer represents the average or maximum value of the corresponding sub-region of the image input to the pooling layer.

Neural network layer 430:

After being processed by the convolutional layer/pooling layer 420, the convolutional neural network 400 is not yet sufficient to output the required output information, because, as mentioned above, the convolutional layer/pooling layer 420 only extracts features and reduces the parameters brought by the input image. However, to generate the final output information (the required class information or other related information), the convolutional neural network 400 needs to use the neural network layer 430 to generate the output of one or a group of required classes. Therefore, the neural network layer 430 may include multiple hidden layers (431, 432 to 43n shown in FIG. 5) and an output layer 440. The parameters contained in the multiple hidden layers can be obtained by pre-training based on relevant training data of a specific task type; for example, the task type can include image recognition, image classification, image super-resolution reconstruction, and so on.

After the multiple hidden layers in the neural network layer 430, that is, as the final layer of the entire convolutional neural network 400, is the output layer 440. The output layer 440 has a loss function similar to categorical cross entropy, specifically used to calculate the prediction error. Once the forward propagation of the entire convolutional neural network 400 (propagation in the direction from 410 to 440 in FIG. 5) is completed, backpropagation (propagation in the direction from 440 to 410 in FIG. 5) starts to update the weight values and biases of the aforementioned layers, so as to reduce the loss of the convolutional neural network 400 and the error between the result output by the convolutional neural network 400 through the output layer and the ideal result.

It should be noted that the convolutional neural network 400 shown in FIG. 5 is merely an example of a convolutional neural network; in specific applications, the convolutional neural network may also exist in the form of other network models.
In S260, the processor 130 can set the modulation parameter used by the spatial light modulator 110 to the modulation parameter #B.

Thus, when the spatial light modulator 110 obtains the electrical-domain data (or electrical-domain signal) of a new image (that is, an example of the first image, denoted as image #1) from an electrical-domain graphics processor such as a GPU, the spatial light modulator 110 can modulate the electrical-domain data based on the modulation parameter #B (that is, an example of the first modulation parameter) to generate the light beam of the image #1, hereinafter referred to as light beam #1 for ease of understanding.

Since the above modulation parameter #B has been trained and can compensate for the distortion, the distortion of the image observed by the human eye after the light beam #1 is refracted by the lens 120 and reflected by the screen #A can be reduced.

It should be understood that the above processing of image #1 is merely illustrative, and this application is not limited thereto; for example, one or more of the following manners may also be used.
Manner a

Specifically, the processor 130 may also save the mapping relationship between the image parameter of the image #A (denoted as image parameter #A) and the modulation parameter #B.

In other words, in the case where the modulation parameter #B is obtained by training based on multiple training images, the image parameters of the multiple training images may be the same, so that the processor 130 may also store the mapping relationship between that same image parameter (for example, image parameter #A) and the modulation parameter #B.

Similarly, multiple modulation parameters can be obtained by training based on training images with different image parameters (specifically, based on the distorted images and expected images of the training images). This process is similar to the process of determining the modulation parameter #B; to avoid redundancy, detailed description is omitted here.

By way of example and not limitation, the image parameters may include, but are not limited to, one or more of the following parameters:

image size, image color, image shape, image resolution.

Thus, the processor 130 can generate a one-to-one correspondence between the multiple image parameters and the multiple modulation parameters, denoted as correspondence #A (that is, an example of the first correspondence). Table 1 below shows an example of this correspondence.

Table 1

Image parameter #A    Modulation parameter #B
……    ……
Image parameter #M    Modulation parameter #N

Thus, the processor 130 can determine, according to the image parameter of the image #1 (denoted as image parameter #1), the modulation parameter corresponding to the image parameter #1 (denoted as modulation parameter #1) from the correspondence #A, and the processor 130 can set the modulation parameter used by the spatial light modulator 110 to the modulation parameter #1.

Then, the spatial light modulator 110 can modulate the electrical-domain data based on the modulation parameter #1 (that is, an example of the first modulation parameter) to generate the light beam of the image #1.
Manner b

Specifically, the processor 130 may also save the mapping relationship between the position parameter of the image #A (denoted as position parameter #A) and the modulation parameter #B.

The position parameter of an image can be understood as a parameter of the position of the human eye relative to the screen when the human eye observes the image.

In other words, in the case where the modulation parameter #B is obtained by training based on multiple training images, the position parameters of the multiple training images may be the same, so that the processor 130 may also store the mapping relationship between that same position parameter (for example, position parameter #A) and the modulation parameter #B.

Similarly, multiple modulation parameters can be obtained by training based on training images with different position parameters (specifically, based on the distorted images and expected images of the training images). This process is similar to the process of determining the modulation parameter #B; to avoid redundancy, detailed description is omitted here.

By way of example and not limitation, the position parameters may include, but are not limited to, one or more of the following parameters:

the distance between the human eye and the screen, the position of the human eye in the horizontal direction of the screen, and the position of the human eye in the vertical direction of the screen.

Thus, the processor 130 can generate a one-to-one correspondence between the multiple position parameters and the multiple modulation parameters, denoted as correspondence #B (that is, an example of the second correspondence). Table 2 below shows an example of this correspondence.

Table 2

Position parameter #A    Modulation parameter #B
……    ……
Position parameter #M    Modulation parameter #N

Thus, the processor 130 can determine, according to the position parameter of the image #1 (that is, the position parameter when the human eye observes the image #1, denoted as position parameter #1), the modulation parameter corresponding to the position parameter #1 (denoted as modulation parameter #2) from the correspondence #B, and the processor 130 can set the modulation parameter used by the spatial light modulator 110 to the modulation parameter #2.

Then, the spatial light modulator 110 can modulate the electrical-domain data based on the modulation parameter #2 (that is, an example of the first modulation parameter) to generate the light beam of the image #1.
It should be understood that the above manner a and manner b can be used separately or in combination.

For example, the processor 130 can obtain multiple modulation parameters by training based on training images with different parameter groups (specifically, based on the distorted images and expected images of the training images), where each parameter group includes one position parameter and one image parameter.

Then, the processor 130 can generate a one-to-one correspondence between the multiple parameter groups and the multiple modulation parameters. Table 3 below shows an example of this correspondence.

Table 3

Image parameter #A    Position parameter #A    Modulation parameter #B
……    ……    ……
Image parameter #M    Position parameter #M    Modulation parameter #N

Thus, the processor 130 can determine, according to the parameter group of the image #1 (that is, the position parameter and image parameter of the image #1), the modulation parameter corresponding to the parameter group of the image #1 (denoted as modulation parameter #3) from this correspondence, and the processor 130 can set the modulation parameter used by the spatial light modulator 110 to the modulation parameter #3.

Then, the spatial light modulator 110 can modulate the electrical-domain data based on the modulation parameter #3 (that is, an example of the first modulation parameter) to generate the light beam of the image #1.
Manner c

Specifically, the processor 130 may also save the mapping relationship between the screen parameter of the image #A (denoted as screen parameter #A) and the modulation parameter #B.

The screen parameter of an image can be understood as a parameter of the screen used when the image is imaged.

In other words, in the case where the modulation parameter #B is obtained by training based on multiple training images, the screen parameters of the multiple training images may be the same, so that the processor 130 may also store the mapping relationship between that same screen parameter (for example, screen parameter #A) and the modulation parameter #B.

Similarly, multiple modulation parameters can be obtained by training based on training images with different screen parameters (specifically, based on the distorted images and expected images of the training images). This process is similar to the process of determining the modulation parameter #B; to avoid redundancy, detailed description is omitted here.

By way of example and not limitation, the screen parameters may include, but are not limited to, one or more of the following parameters:

screen shape, screen thickness, screen material, screen refractive index, screen color.

Thus, the processor 130 can generate a one-to-one correspondence between the multiple screen parameters and the multiple modulation parameters, denoted as correspondence #C (that is, an example of the third correspondence). Table 4 below shows an example of this correspondence.

Table 4

Screen parameter #A    Modulation parameter #B
……    ……
Screen parameter #M    Modulation parameter #N

Thus, the processor 130 can determine, according to the screen parameter of the image #1 (that is, the screen parameter of the screen used when the image #1 is imaged, denoted as screen parameter #1), the modulation parameter corresponding to the screen parameter #1 (denoted as modulation parameter #4) from the correspondence #C, and the processor 130 can set the modulation parameter used by the spatial light modulator 110 to the modulation parameter #4.

Then, the spatial light modulator 110 can modulate the electrical-domain data based on the modulation parameter #4 (that is, an example of the first modulation parameter) to generate the light beam of the image #1.
It should be understood that the above manner a and manner c can be used separately or in combination, manner b and manner c can be used separately or in combination, or manners a, b, and c can be used in combination.

For example, the processor 130 can obtain multiple modulation parameters by training based on training images with different parameter groups (specifically, based on the distorted images and expected images of the training images), where each parameter group includes one image parameter and one screen parameter.

Then, the processor 130 can generate a one-to-one correspondence between the multiple parameter groups and the multiple modulation parameters. Table 5 below shows an example of this correspondence.

Table 5

Image parameter #A    Screen parameter #A    Modulation parameter #B
……    ……    ……
Image parameter #M    Screen parameter #M    Modulation parameter #N

Thus, the processor 130 can determine, according to the parameter group of the image #1 (that is, the image parameter and screen parameter of the image #1), the modulation parameter corresponding to the parameter group of the image #1 (denoted as modulation parameter #5) from this correspondence, and the processor 130 can set the modulation parameter used by the spatial light modulator 110 to the modulation parameter #5.

Then, the spatial light modulator 110 can modulate the electrical-domain data based on the modulation parameter #5 (that is, an example of the first modulation parameter) to generate the light beam of the image #1.
For another example, the processor 130 can obtain multiple modulation parameters by training based on training images with different parameter groups (specifically, based on the distorted images and expected images of the training images), where each parameter group includes one position parameter and one screen parameter.

Then, the processor 130 can generate a one-to-one correspondence between the multiple parameter groups and the multiple modulation parameters. Table 6 below shows an example of this correspondence.

Table 6

Position parameter #A    Screen parameter #A    Modulation parameter #B
……    ……    ……
Position parameter #M    Screen parameter #M    Modulation parameter #N

Thus, the processor 130 can determine, according to the parameter group of the image #1 (that is, the position parameter and screen parameter of the image #1), the modulation parameter corresponding to the parameter group of the image #1 (denoted as modulation parameter #6) from this correspondence, and the processor 130 can set the modulation parameter used by the spatial light modulator 110 to the modulation parameter #6.

Then, the spatial light modulator 110 can modulate the electrical-domain data based on the modulation parameter #6 (that is, an example of the first modulation parameter) to generate the light beam of the image #1.
For another example, the processor 130 can obtain multiple modulation parameters by training based on training images with different parameter groups (specifically, based on the distorted images and expected images of the training images), where each parameter group includes one position parameter, one image parameter, and one screen parameter.

Then, the processor 130 can generate a one-to-one correspondence between the multiple parameter groups and the multiple modulation parameters. Table 7 below shows an example of this correspondence.

Table 7

Image parameter #A    Position parameter #A    Screen parameter #A    Modulation parameter #B
……    ……    ……    ……
Image parameter #M    Position parameter #M    Screen parameter #M    Modulation parameter #N

Thus, the processor 130 can determine, according to the parameter group of the image #1 (that is, the image parameter, position parameter, and screen parameter of the image #1), the modulation parameter corresponding to the parameter group of the image #1 (denoted as modulation parameter #7) from this correspondence, and the processor 130 can set the modulation parameter used by the spatial light modulator 110 to the modulation parameter #7.

Then, the spatial light modulator 110 can modulate the electrical-domain data based on the modulation parameter #7 (that is, an example of the first modulation parameter) to generate the light beam of the image #1.
FIG. 6 shows a schematic diagram of another example of the process of determining the modulation parameter. Different from the process shown in FIG. 3, after acquiring the training data, the processor 130 can send the training data to a server.

The training data can include the distorted images and expected images of one or more training images, for example, image #B and image #C.

Alternatively, the training data can include the distorted image of the training image; in this case, the server can determine the expected image of the training image according to the distorted image of the training image.

Thus, the server can train (or determine) one or more modulation parameters (for example, the above modulation parameter #B) according to the training data. The training process can be similar to the process performed by the processor 130 in the above method 200; to avoid redundancy, detailed description is omitted here.

Thereafter, the server sends the trained modulation parameters to the processor 130.

Thus, when processing the image #1, the processor 130 can determine the modulation parameter that the spatial light modulator 110 needs to use according to the modulation parameters fed back by the server.

It should be understood that the above-listed solutions by which the processor 130 obtains the modulation parameters are merely illustrative. The above training process may also be pre-configured, before delivery from the factory, in the processor 130 (or in a memory that the processor 130 can access) through experiments or training, and this training process can be similar to the process performed by the processor 130 in the above method 200; to avoid redundancy, detailed description is omitted here.
As shown in FIG. 7, an embodiment of this application further provides an imaging device 500. The imaging device 500 includes a processor 510 coupled with a memory 520. The memory 520 is used to store computer programs, instructions, and/or data, and the processor 510 is used to execute the computer programs, instructions, and/or data stored in the memory 520, so that the method in the above method embodiments (specifically, the method executed by the processor 130) is performed.

Optionally, the imaging device 500 includes one or more processors 510.

Optionally, as shown in FIG. 7, the imaging device 500 may further include a memory 520. Specifically, there may be one or more memories 520. Optionally, the memory 520 may be integrated with the processor 510 or provided separately.

As shown in FIG. 7, the imaging device 500 may further include a transceiver 530, and the transceiver 530 is used for receiving and/or sending signals. For example, the processor 510 is configured to control the transceiver 530 to receive and/or send signals.

The imaging device 500 is used to implement the operations performed by the processor 130 in the above method embodiments.

For explanations and beneficial effects of the related content in any of the imaging devices 500 provided above, refer to the corresponding method embodiments provided above; details are not repeated here.
The embodiments of this application do not specifically limit the structure of the execution body of the methods provided in the embodiments of this application, provided that a program recording the code of the methods provided in the embodiments of this application can be run to perform processing according to those methods.
Various aspects or features of the embodiments of this application may be implemented as a method, an apparatus, or an article of manufacture using standard programming and/or engineering techniques. The term "article of manufacture" used in this specification may cover a computer program accessible from any computer-readable device, carrier, or medium. For example, the computer-readable medium may include, but is not limited to: a magnetic storage device (for example, a hard disk, a floppy disk, or a magnetic tape), an optical disc (for example, a compact disc (CD) or a digital versatile disc (DVD)), a smart card, and a flash memory device (for example, an erasable programmable read-only memory (EPROM), a card, a stick, or a key drive).
Various storage media described in this specification may represent one or more devices and/or other machine-readable media for storing information. The term "machine-readable media" may include, but is not limited to, wireless channels and various other media capable of storing, containing, and/or carrying instructions and/or data.
It should be understood that the processor mentioned in the embodiments of this application may be a central processing unit (CPU), or may be another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field programmable gate array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. The general-purpose processor may be a microprocessor, or may be any conventional processor, or the like.
It should further be understood that the memory mentioned in the embodiments of this application may be a volatile memory or a non-volatile memory, or may include both. The non-volatile memory may be a read-only memory (ROM), a programmable read-only memory (PROM), an erasable programmable read-only memory (EPROM), an electrically erasable programmable read-only memory (EEPROM), or a flash memory. The volatile memory may be a random access memory (RAM). For example, the RAM may be used as an external cache. By way of example and not limitation, the RAM may include the following forms: a static random access memory (SRAM), a dynamic random access memory (DRAM), a synchronous dynamic random access memory (SDRAM), a double data rate synchronous dynamic random access memory (DDR SDRAM), an enhanced synchronous dynamic random access memory (ESDRAM), a synchlink dynamic random access memory (SLDRAM), and a direct rambus random access memory (DR RAM).
It should be noted that when the processor is a general-purpose processor, a DSP, an ASIC, an FPGA or another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component, the memory (storage module) may be integrated into the processor. It should further be noted that the memories described in this specification are intended to include, but are not limited to, these and any other suitable types of memory.
A person of ordinary skill in the art may be aware that the units and algorithm steps of the examples described with reference to the embodiments disclosed in this specification can be implemented by electronic hardware or a combination of computer software and electronic hardware. Whether the functions are performed by hardware or software depends on the particular application and design constraints of the technical solution. A person skilled in the art may use different methods to implement the described functions for each particular application, but such implementation should not be considered beyond the scope of the embodiments of this application.
It may be clearly understood by a person skilled in the art that, for the purpose of convenient and brief description, for detailed working processes of the foregoing system, apparatus, and unit, reference may be made to the corresponding processes in the foregoing method embodiments. Details are not described herein again.
In the several embodiments provided in this application, it should be understood that the disclosed system, apparatus, and method may be implemented in other manners. For example, the described apparatus embodiments are merely examples. For example, the division into units is merely a logical function division; in actual implementation, there may be other division manners. For example, a plurality of units or components may be combined or integrated into another system, or some features may be ignored or not performed. In addition, the displayed or discussed mutual couplings, direct couplings, or communication connections may be implemented through some interfaces, and indirect couplings or communication connections between apparatuses or units may be in electrical, mechanical, or other forms.
The units described as separate components may or may not be physically separate, and components displayed as units may or may not be physical units; they may be located in one place or distributed on a plurality of network units. Some or all of the units may be selected based on actual requirements to achieve the objectives of the solutions of the embodiments.
In addition, the functional units in the embodiments of this application may be integrated into one processing unit, each of the units may exist alone physically, or two or more units may be integrated into one unit.
When the functions are implemented in the form of a software functional unit and sold or used as an independent product, they may be stored in a computer-readable storage medium. Based on such an understanding, the technical solutions of the embodiments of this application essentially, or the part contributing to the prior art, or a part of the technical solutions may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for instructing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or some of the steps of the methods described in the embodiments of this application. The foregoing storage medium includes various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
The foregoing descriptions are merely specific implementations of this application, but the protection scope of this application is not limited thereto. Any variation or replacement readily conceivable by a person skilled in the art within the technical scope disclosed in the embodiments of this application shall fall within the protection scope of this application. Therefore, the protection scope of this application shall be subject to the protection scope of the claims.

Claims (24)

  1. An imaging method, applied to an optical imaging system, wherein the system comprises a spatial light modulator and at least one lens, the spatial light modulator is configured to modulate an electrical signal of an image to be formed to generate an imaging light beam of the image, and the imaging light beam passes through the lens and a screen and enters a human eye, and the method comprises:
    obtaining a first modulation parameter, wherein the first modulation parameter is determined based on a first distorted image and a first target image, the first distorted image is an image presented after a training image undergoes imaging processing by the optical imaging system and the screen, the first target image is an image of the training image, and distortion of the first target image is within a preset range; and
    controlling the spatial light modulator to modulate, based on the first modulation parameter, an electrical signal of a first image to be formed.
  2. The method according to claim 1, wherein the obtaining a first modulation parameter comprises:
    controlling the spatial light modulator to modulate the electrical signal of the training image based on an original modulation parameter, to obtain the first distorted image;
    adjusting the original modulation parameter, so that a deviation between the first distorted image and the first target image is within a preset range; and
    determining the adjusted original modulation parameter as the first modulation parameter.
  3. The method according to claim 1, wherein the obtaining a first modulation parameter comprises:
    sending an image of the first distorted image to a server; and
    obtaining the first modulation parameter from the server.
  4. The method according to any one of claims 1 to 3, wherein the method further comprises:
    obtaining a first correspondence between K modulation parameters, including the first modulation parameter, and K image parameters, wherein a k-th modulation parameter is determined based on a distorted image and a target image of a training image having a k-th image parameter, the k-th modulation parameter corresponds to the k-th image parameter, K≥2, k∈[1,K], any two of the K modulation parameters have different values, and any two of the K image parameters have different values; and
    the obtaining a first modulation parameter comprises:
    determining, according to the first correspondence, the modulation parameter corresponding to an image parameter of the first image as the first modulation parameter.
  5. The method according to claim 4, wherein the image parameter comprises at least one of the following parameters:
    image size, image color, image shape, and image resolution.
  6. The method according to any one of claims 1 to 5, wherein the method further comprises:
    obtaining a second correspondence between M modulation parameters, including the first modulation parameter, and M position parameters, wherein the position parameter indicates a positional relationship between a human eye and the screen, an m-th modulation parameter is determined based on a distorted image and a target image of the training image under an m-th position parameter, the m-th modulation parameter corresponds to the m-th position parameter, any two of the M modulation parameters have different values, and any two of the M position parameters have different values; and
    the obtaining a first modulation parameter comprises:
    determining, according to the second correspondence, the modulation parameter corresponding to a position parameter of the first image as the first modulation parameter.
  7. The method according to claim 6, wherein the position parameter comprises at least one of the following parameters:
    a distance between the human eye and an incidence point of the light beam on the screen, a position, in a horizontal direction of the screen, of a projection of the human eye on the screen, and a position, in a vertical direction of the screen, of the projection of the human eye on the screen.
  8. The method according to any one of claims 1 to 7, wherein the method further comprises:
    obtaining a third correspondence between N modulation parameters, including the first modulation parameter, and N screen parameters, wherein an n-th modulation parameter is determined based on a distorted image and a target image formed by imaging the training image through a screen having an n-th screen parameter, the n-th modulation parameter corresponds to the n-th screen parameter, any two of the N modulation parameters have different values, and any two of the N screen parameters have different values; and
    the obtaining a first modulation parameter comprises:
    determining, according to the third correspondence, the modulation parameter corresponding to a screen parameter of a first screen as the first modulation parameter, wherein the first screen is a screen used for imaging the first image.
  9. The method according to claim 8, wherein the screen parameter comprises at least one of the following parameters:
    screen shape, screen thickness, screen material, screen refractive index, and screen color.
  10. The method according to any one of claims 1 to 9, wherein the optical imaging system is configured in a vehicle, and the screen comprises a windshield of the vehicle.
  11. The method according to any one of claims 1 to 10, wherein the first modulation parameter comprises Zernike coefficients.
  12. An imaging apparatus, comprising a processor, wherein the processor is coupled to a memory, the memory is configured to store computer programs or instructions, and the processor is configured to execute the computer programs or instructions in the memory, so that
    the method according to any one of claims 1 to 11 is performed.
  13. An optical imaging system, comprising:
    the imaging apparatus according to claim 12;
    a spatial light modulator; and
    at least one lens.
  14. An optical imaging system, comprising:
    a spatial light modulator, configured to modulate, based on a first modulation parameter, an electrical signal of a first image to be formed, to generate an imaging light beam of the first image, wherein the first modulation parameter is determined based on a first distorted image and a first target image, the first distorted image is an image presented after a training image undergoes imaging processing by the spatial light modulator and a screen, the first target image is an image of the training image, and distortion of the first target image is within a preset range; and
    at least one lens, configured to refract the imaging light beam of the first image.
  15. The system according to claim 14, wherein the spatial light modulator is specifically configured to modulate the electrical signal of the training image based on an original modulation parameter, to obtain the first distorted image;
    the optical imaging system further comprises:
    a camera device, configured to obtain the first distorted image; and
    the spatial light modulator is further configured to obtain the first distorted image from the camera device, adjust the original modulation parameter so that a deviation between the first distorted image and the first target image is within a preset range, and determine the adjusted original modulation parameter as the first modulation parameter.
  16. The system according to claim 14, wherein the optical imaging system further comprises:
    a camera device, configured to capture an image of the first distorted image; and
    a transceiver, configured to send the image of the first distorted image to a server, and to receive the first modulation parameter from the server.
  17. The system according to any one of claims 14 to 16, wherein the spatial light modulator is configured to: obtain a first correspondence between K modulation parameters, including the first modulation parameter, and K image parameters, wherein a k-th modulation parameter is determined based on a distorted image and a target image of a training image having a k-th image parameter, the k-th modulation parameter corresponds to the k-th image parameter, K≥2, k∈[1,K], any two of the K modulation parameters have different values, and any two of the K image parameters have different values; and determine, according to the first correspondence, the modulation parameter corresponding to an image parameter of the first image as the first modulation parameter.
  18. The system according to claim 17, wherein the image parameter comprises at least one of the following parameters: image size, image color, image shape, and image resolution.
  19. The system according to any one of claims 14 to 18, wherein the spatial light modulator is configured to: obtain a second correspondence between M modulation parameters, including the first modulation parameter, and M position parameters, wherein the position parameter indicates a positional relationship between a human eye and the screen, an m-th modulation parameter is determined based on a distorted image and a target image of the training image under an m-th position parameter, the m-th modulation parameter corresponds to the m-th position parameter, any two of the M modulation parameters have different values, and any two of the M position parameters have different values; and determine, according to the second correspondence, the modulation parameter corresponding to a position parameter of the first image as the first modulation parameter.
  20. The system according to claim 19, wherein the position parameter comprises at least one of the following parameters:
    a distance between the human eye and an incidence point of the light beam on the screen, a position, in a horizontal direction of the screen, of a projection of the human eye on the screen, and a position, in a vertical direction of the screen, of the projection of the human eye on the screen.
  21. The system according to any one of claims 14 to 20, wherein the spatial light modulator is configured to: obtain a third correspondence between N modulation parameters, including the first modulation parameter, and N screen parameters, wherein an n-th modulation parameter is determined based on a distorted image and a target image formed by imaging the training image through a screen having an n-th screen parameter, the n-th modulation parameter corresponds to the n-th screen parameter, any two of the N modulation parameters have different values, and any two of the N screen parameters have different values; and determine, according to the third correspondence, the modulation parameter corresponding to a screen parameter of a first screen as the first modulation parameter, wherein the first screen is a screen used for imaging the first image.
  22. The system according to claim 21, wherein the screen parameter comprises at least one of the following parameters:
    screen shape, screen thickness, screen material, screen refractive index, and screen color.
  23. The system according to any one of claims 14 to 22, wherein the first modulation parameter comprises Zernike coefficients.
  24. A vehicle, comprising:
    the optical imaging system according to claim 13; or
    the optical imaging system according to any one of claims 14 to 23.
PCT/CN2021/092925 2020-05-15 2021-05-11 Imaging method, imaging apparatus, optical imaging system and vehicle WO2021228058A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP21802918.9A EP4142275A4 (en) 2020-05-15 2021-05-11 IMAGING METHOD, IMAGING APPARATUS, OPTICAL IMAGING SYSTEM AND VEHICLE
US17/986,483 US20230085082A1 (en) 2020-05-15 2022-11-14 Imaging method, imaging apparatus, optical imaging system, and vehicle

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010410182.5A 2020-05-15 2020-05-15 Imaging method, imaging apparatus, optical imaging system and vehicle
CN202010410182.5 2020-05-15

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/986,483 Continuation US20230085082A1 (en) 2020-05-15 2022-11-14 Imaging method, imaging apparatus, optical imaging system, and vehicle

Publications (1)

Publication Number Publication Date
WO2021228058A1 true WO2021228058A1 (zh) 2021-11-18

Family

ID=78525272

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/092925 WO2021228058A1 (zh) Imaging method, imaging apparatus, optical imaging system and vehicle 2020-05-15 2021-05-11

Country Status (4)

Country Link
US (1) US20230085082A1 (zh)
EP (1) EP4142275A4 (zh)
CN (2) CN114415369A (zh)
WO (1) WO2021228058A1 (zh)

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20070049109A (ko) * 2006-12-04 2007-05-10 Silicon Optix Inc. Panoramic vision system and method
CN109993713B (zh) * 2019-04-04 2021-11-05 Apollo Intelligent Connectivity (Beijing) Technology Co., Ltd. Image distortion correction method and apparatus for a vehicle-mounted head-up display system
CN110726381B (zh) * 2019-11-22 2021-10-15 Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences Full-band aberration detection system and method for an optical freeform surface

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100165429A1 (en) * 2007-03-30 2010-07-01 Light Blue Optics Ltd. Optical systems
JP2014199385A (ja) * 2013-03-15 2014-10-23 Nippon Seiki Co., Ltd. Display device and display method thereof
CN106415696A (zh) * 2013-07-12 2017-02-15 Valeo Comfort and Driving Assistance Control system and method for a device generating scanned images, image generating device, and display including such a system
CN103792674A (zh) * 2014-01-21 2014-05-14 Zhejiang University Device and method for measuring and correcting distortion of a virtual reality display
TW201925856A (zh) * 2017-12-07 2019-07-01 SeeReal Technologies S.A. (Luxembourg) Head-up display
CN108061968A (zh) * 2018-01-05 2018-05-22 BOE Technology Group Co., Ltd. Head-up display device and display image correction method
CN109873997A (zh) * 2019-04-03 2019-06-11 Gui'an New Area Xinte Electric Vehicle Industry Co., Ltd. Projection picture correction method and apparatus

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114710615A (zh) * 2022-06-08 2022-07-05 National University of Defense Technology Efficient single-pixel imaging method and system
CN115018711A (zh) * 2022-07-15 2022-09-06 Chengdu Yunlizhi Technology Co., Ltd. Image super-resolution reconstruction method for warehouse scheduling
CN115018711B (zh) * 2022-07-15 2022-10-25 Chengdu Yunlizhi Technology Co., Ltd. Image super-resolution reconstruction method for warehouse scheduling

Also Published As

Publication number Publication date
CN113746999B (zh) 2023-01-03
EP4142275A1 (en) 2023-03-01
EP4142275A4 (en) 2023-11-01
CN114415369A (zh) 2022-04-29
US20230085082A1 (en) 2023-03-16
CN113746999A (zh) 2021-12-03

Similar Documents

Publication Publication Date Title
US20230085082A1 (en) Imaging method, imaging apparatus, optical imaging system, and vehicle
CA2105446C (en) Stereoscopic display method and apparatus
EP4105877A1 (en) Image enhancement method and image enhancement apparatus
JP5033802B2 Task-based imaging system
CN112154379B Head-up display
US20200099836A1 (en) Elecronic apparatus, and light field imaging system and method with optical metasurface
US8760516B2 (en) Task-based imaging systems
US20140327771A1 (en) System, method, and computer program product for displaying a scene as a light field
CN108061968A Head-up display device and display image correction method
JP2007513427A System and method for optimizing the design of optical and digital systems
CN111366557A Phase imaging method based on a thin scattering medium
Zhdanov et al. Discomfort of visual perception in virtual and mixed reality systems
US20230298145A1 (en) Massively parallel amplitude-only optical processing system and methods for machine learning
US20220270215A1 (en) Method for applying bokeh effect to video image and recording medium
US20230171385A1 (en) Methods, systems, and computer readable media for hardware-in-the-loop phase retrieval for holographic near eye displays
KR19990068071A Image data generation method and apparatus
CN112863453A Holographic display method and holographic display system
Evdokimova et al. Meta-Learning Approach in Diffractive Lens Computational Imaging
US20240144431A1 (en) Generating Super Resolution Images using Duo-Camera Artificial Reality Device
US20240184242A1 (en) Data-efficient Photorealistic 3D Holography
CN113938668B Three-dimensional light field display and model training method, apparatus, and storage medium
CN116107083A Display method and display apparatus
Dong et al. A study on spatial resolution enhancement using Real-ESRGAN in digital holographic microscopy (DHM)
Schneider et al. Camera behavior model and test-bench setups for image-based driver assistance systems
CN116152044A Image generation method and apparatus therefor

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21802918

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2021802918

Country of ref document: EP

Effective date: 20221122

NENP Non-entry into the national phase

Ref country code: DE