WO2022242416A1 - Method and apparatus for generating point cloud data - Google Patents

Method and apparatus for generating point cloud data

Info

Publication number
WO2022242416A1
WO2022242416A1, PCT/CN2022/088312, CN2022088312W
Authority
WO
WIPO (PCT)
Prior art keywords
point cloud
cloud set
pseudo
real
coordinate information
Prior art date
Application number
PCT/CN2022/088312
Other languages
English (en)
French (fr)
Inventor
鞠波
叶晓青
谭啸
孙昊
Original Assignee
北京百度网讯科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 北京百度网讯科技有限公司
Priority to KR1020237008339A (published as KR20230042383A)
Priority to JP2022561443A (published as JP2023529527A)
Publication of WO2022242416A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T 19/00 Manipulating 3D models or images for computer graphics
    • G06T 19/20 Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06T 2200/00 Indexing scheme for image data processing or generation, in general
    • G06T 2200/08 Indexing scheme involving all processing steps from image acquisition to 3D model generation
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10028 Range image; Depth image; 3D point clouds
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20212 Image combination
    • G06T 2207/20221 Image fusion; Image merging
    • G06T 2219/00 Indexing scheme for manipulating 3D models or images for computer graphics
    • G06T 2219/20 Indexing scheme for editing of 3D models
    • G06T 2219/2016 Rotation, translation, scaling

Definitions

  • The present disclosure relates to the field of artificial intelligence, in particular to computer vision and deep learning technology, and can be applied to autonomous driving and intelligent transportation scenarios.
  • Deep learning technology has achieved great success in the fields of computer vision and natural language processing in recent years.
  • As a classic subtask of computer vision, the point cloud 3D object detection task has also become a hot topic among deep learning researchers in recent years.
  • The data collected by lidar is usually displayed and processed in the form of a point cloud.
  • The present disclosure provides a method, an apparatus, an electronic device, and a storage medium for generating point cloud data.
  • According to one aspect, a method for generating point cloud data is provided: collect a real point cloud set of a target object based on lidar; capture images of the target object and generate a pseudo point cloud set based on the captured images; and fuse the real point cloud set and the pseudo point cloud set to generate a target point cloud set for model training.
  • The present application makes the near and far point clouds in the target point cloud set used for model training more balanced, which better satisfies the training requirements, improves the training accuracy of the model, and facilitates the detection of both near and far targets.
  • According to another aspect of the present disclosure, an apparatus for generating point cloud data is provided.
  • According to another aspect of the present disclosure, an electronic device is provided.
  • According to another aspect of the present disclosure, a non-transitory computer-readable storage medium is provided.
  • According to another aspect of the present disclosure, a computer program product is provided.
  • An embodiment of the first aspect of the present disclosure proposes a method for generating point cloud data, including: collecting a real point cloud set of a target object based on lidar; capturing images of the target object and generating a pseudo point cloud set based on the captured images; and fusing the real point cloud set and the pseudo point cloud set to generate a target point cloud set for model training.
  • An embodiment of the second aspect of the present disclosure proposes an apparatus for generating point cloud data, including: a real point cloud set acquisition module, configured to collect the real point cloud set of a target object based on lidar; a pseudo point cloud set acquisition module, configured to capture images of the target object and generate a pseudo point cloud set based on the captured images; and a point cloud set fusion module, configured to fuse the real point cloud set and the pseudo point cloud set to generate a target point cloud set for model training.
  • An embodiment of the third aspect of the present disclosure provides an electronic device, including a memory and a processor.
  • The memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to implement the method for generating point cloud data according to the embodiment of the first aspect of the present disclosure.
  • An embodiment of the fourth aspect of the present disclosure provides a non-transitory computer-readable storage medium storing computer instructions, wherein the computer instructions are used to implement the method for generating point cloud data described in the embodiment of the first aspect of the present disclosure.
  • An embodiment of the fifth aspect of the present disclosure proposes a computer program product, including a computer program which, when executed by a processor, implements the method for generating point cloud data described in the embodiment of the first aspect of the present disclosure.
  • FIG. 1 is a flowchart of a method for generating point cloud data according to an embodiment of the present disclosure;
  • FIG. 2 is an RGB image returned by the forward-facing camera of an autonomous driving system according to an embodiment of the present disclosure;
  • FIG. 3 shows the sparse lidar point cloud data corresponding to the RGB image according to an embodiment of the present disclosure;
  • FIG. 4 is an RGB image returned by the forward-facing camera of an autonomous driving system according to an embodiment of the present disclosure;
  • FIG. 5 shows the dense pseudo-lidar point cloud data corresponding to the RGB image according to an embodiment of the present disclosure;
  • FIG. 6 is a schematic diagram of a method for obtaining the first point clouds according to an embodiment of the present disclosure;
  • FIG. 7 is a schematic diagram of generating a target point cloud set according to an embodiment of the present disclosure;
  • FIG. 8 is a schematic diagram of generating a target point cloud set according to an embodiment of the present disclosure;
  • FIG. 9 is a schematic diagram of obtaining the Euclidean distance from a first point cloud to the real point cloud set according to an embodiment of the present disclosure;
  • FIG. 10 is a flowchart of a method for generating point cloud data according to an embodiment of the present disclosure;
  • FIG. 11 is a schematic diagram of an apparatus for generating point cloud data according to an embodiment of the present disclosure;
  • FIG. 12 is a schematic diagram of an electronic device according to an embodiment of the present disclosure.
  • Image processing is a technology that uses a computer to analyze an image in order to obtain a desired result; it is also known as picture processing. Image processing generally refers to digital image processing. A digital image is a large two-dimensional array obtained by shooting with industrial cameras, video cameras, scanners, and similar equipment; the elements of this array are called pixels, and their values are called gray values. Image processing technology generally includes three parts: image compression; enhancement and restoration; and matching, description, and recognition.
  • Deep Learning (DL) is a new research direction in the field of Machine Learning (ML); it was introduced into machine learning to bring the field closer to its original goal, artificial intelligence. Deep learning learns the internal laws and representation levels of sample data, and the information obtained during this learning process is of great help in interpreting data such as text, images, and sound. Its ultimate goal is to give machines the same analytical learning ability as humans, enabling them to recognize data such as text, images, and sound. Deep learning is a complex machine learning approach whose results in speech and image recognition far exceed those of earlier related techniques.
  • Computer Vision is the science of how to make machines "see": using cameras and computers instead of human eyes to identify, track, and measure targets, and further performing graphics processing so that the result becomes an image better suited to human observation or to transmission to instruments for inspection.
  • As a scientific discipline, computer vision studies related theories and technologies, attempting to build artificial intelligence systems that can obtain 'information' from images or multidimensional data.
  • The information referred to here is information, in Shannon's sense, that can be used to help make a "decision". Because perception can be regarded as extracting information from sensory signals, computer vision can also be regarded as the science of how to make artificial systems "perceive" from images or multidimensional data.
  • Artificial Intelligence (AI) is the discipline that studies how to make computers simulate certain human thought processes and intelligent behaviors (such as learning, reasoning, thinking, and planning); it covers technologies at both the hardware and the software level.
  • Artificial intelligence hardware technologies generally include several major areas such as computer vision technology, speech recognition technology, natural language processing technology, machine learning/deep learning, big data processing technology, and knowledge graph technology.
  • FIG. 1 is a flowchart of a method for generating point cloud data according to an embodiment of the present disclosure. As shown in FIG. 1, the method comprises the following steps:
  • S101: Collect a real point cloud set of the target object based on lidar.
  • A light detection and ranging (LiDAR) system, also known as laser radar, consists of a transmitting system, a receiving system, information processing, and other parts. A lidar can produce points at the level of hundreds of thousands, millions, or even tens of millions per second, which are collectively called a point cloud.
  • A point cloud is, simply put, a number of points scattered in space. Each point contains three-dimensional coordinates (XYZ) and laser reflection intensity (Intensity) or color information (Red Green Blue, RGB). The points are obtained by the lidar emitting laser signals toward objects or the ground and collecting the reflected laser signals; through joint solution and deviation correction, the accurate spatial information of these points can be computed.
  • The point cloud data obtained by lidar can be used in systems such as digital elevation model production, 3D modeling, agricultural and forestry censuses, earthwork calculation, geological disaster monitoring, and autonomous driving.
  • Taking the application of lidar to an autonomous driving system as an example, the lidar installed on an autonomous vehicle can collect the point clouds of objects and the ground within the vehicle's forward field of view as the real point cloud set.
  • An object in front of the vehicle, such as a vehicle, a pedestrian, or a tree, may serve as the target object.
  • As an example, FIG. 2 is the RGB image returned by the forward-facing camera of the autonomous driving system, and FIG. 3 is the sparse lidar point cloud data corresponding to that RGB image.
  • The forward-facing camera may include a forward-facing monocular RGB camera or a forward-facing binocular RGB camera.
  • S102: Capture images of the target object, and generate a pseudo point cloud set based on the captured images.
  • In embodiments of the present application, dense pseudo point cloud data can be obtained to supplement the point cloud data that the lidar collects for the target object.
  • In some embodiments, pseudo point cloud data can be obtained from a depth image captured by a depth-image acquisition device.
  • In some embodiments, the pixel depths of the captured depth image are back-projected into a 3D point cloud to obtain the pseudo point cloud data.
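As a rough illustration of this back-projection step, the sketch below converts a depth image into pseudo point cloud coordinates with a standard pinhole-camera model. The function name and the intrinsics fx, fy, cx, cy are illustrative assumptions, not values taken from the disclosure:

```python
import numpy as np

def depth_to_pseudo_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth image (H x W, in metres) into an N x 3 point cloud.

    A minimal pinhole-camera sketch: fx, fy, cx, cy are assumed known from
    calibration; pixels with invalid depth (<= 0) are dropped.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    z = depth
    x = (u - cx) * z / fx                 # X = (u - cx) * Z / fx
    y = (v - cy) * z / fy                 # Y = (v - cy) * Z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]       # keep only valid-depth pixels
```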
  • In some embodiments, images of the target object can be captured based on binocular vision: using the parallax principle, an imaging device acquires two images of the measured object from different positions, and the pseudo point cloud data is obtained by computing the positional deviation between corresponding points of the two images.
  • In some embodiments, images of the target object can be captured based on monocular vision: the rotation and translation between the captured images are computed, and the pseudo point cloud data is obtained by triangulating matched points.
  • Taking the autonomous driving application as an example, a forward-facing monocular RGB camera or a forward-facing binocular RGB camera can be used to collect the point clouds of objects and the ground in front of the autonomous vehicle's field of view as the pseudo point cloud set.
  • As an example, FIG. 4 is the RGB image returned by the forward-facing camera of the autonomous driving system, and FIG. 5 is the dense pseudo-lidar point cloud data corresponding to that RGB image.
  • The forward-facing camera may include a forward-facing monocular RGB camera or a forward-facing binocular RGB camera.
  • S103: Fuse the real point cloud set and the pseudo point cloud set to generate a target point cloud set for model training.
  • In the point cloud data acquired by lidar, points closer to the sensor are denser and points farther away are sparser, so detection works well nearby but degrades severely with distance. To avoid this problem, the acquired real point cloud set and pseudo point cloud set are fused to obtain the target point cloud set. Because the pseudo point cloud set contains a large amount of data, the dense pseudo point cloud set can be used to supplement the real point cloud set with additional points, so that the near and far point clouds in the target point cloud set used for model training are more balanced. This better satisfies the training requirements, improves the training accuracy of the model, and facilitates the detection of both near and far targets.
  • An embodiment of the present application provides a method for generating point cloud data: collect the real point cloud set of a target object based on lidar; capture images of the target object and generate a pseudo point cloud set based on the captured images; and fuse the real point cloud set and the pseudo point cloud set to generate the target point cloud set for model training.
  • The present application makes the near and far point clouds in the target point cloud set used for model training more balanced, which better satisfies the training requirements, improves the training accuracy of the model, and facilitates the detection of both near and far targets.
  • Because the pseudo point cloud data is dense, fusing too many pseudo points would increase the computational load of model training and affect the model's accuracy; therefore, before the real point cloud set and the pseudo point cloud set are fused, the first point clouds in the pseudo point cloud set are filtered. FIG. 6 is a flowchart of a method for generating point cloud data according to an embodiment of the present disclosure. As shown in FIG. 6, before the real point cloud set and the pseudo point cloud set are fused to generate the target point cloud set for model training, the method includes the following steps:
  • S601: Based on the coordinate information of each first point cloud in the pseudo point cloud set, obtain the ground distance between the first point cloud and the ground equation.
  • The ground equation is computed from all of the point cloud data in the pseudo point cloud set.
  • In some embodiments, the ground equation may be obtained by singular value decomposition (SVD). After the ground equation is obtained, each point cloud in the pseudo point cloud set is taken as a first point cloud, and the ground distance between each first point cloud and the ground equation is obtained from the coordinate information of that first point cloud.
  • S602: Remove from the pseudo point cloud set the first point clouds whose ground distance is smaller than a set distance threshold.
  • The pseudo point cloud set contains a large amount of ground point cloud data and points close to the ground; such data is useless for the training and detection of the target detection system and only increases the computational load. Therefore, a distance threshold is set, and if the ground distance between a first point cloud and the ground equation is smaller than the set distance threshold, that first point cloud is removed from the pseudo point cloud set.
  • Taking a distance threshold of 10 as an example, the first point clouds whose ground distance to the ground equation is less than 10 are removed from the pseudo point cloud set.
  • In this way, the ground point clouds are removed from the pseudo point cloud set, eliminating a large amount of invalid point cloud data, which reduces the computational load of the target detection model and increases its robustness and accuracy.
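The disclosure names SVD as one way to obtain the ground equation. The following is a minimal sketch of that idea: fit a plane to the pseudo points via SVD, then drop the points whose distance to the plane falls below a threshold. The helper names and the default threshold are illustrative assumptions (the text's example threshold of 10 depends on the coordinate units), and in practice a robust fit such as RANSAC may be preferred when many non-ground points are present:

```python
import numpy as np

def fit_ground_plane(points):
    """Fit a plane n . x + d = 0 to an N x 3 point set via SVD.

    The unit normal is the right-singular vector associated with the
    smallest singular value of the mean-centred coordinates.
    """
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid, full_matrices=False)
    normal = vt[-1]                      # direction of least variance
    d = -normal @ centroid
    return normal, d

def remove_ground(pseudo_points, distance_threshold=0.2):
    """Remove pseudo points closer to the fitted ground plane than the
    threshold (threshold value here is illustrative)."""
    normal, d = fit_ground_plane(pseudo_points)
    dist = np.abs(pseudo_points @ normal + d)   # |n . x + d|, n is unit-norm
    return pseudo_points[dist >= distance_threshold]
```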
  • FIG. 7 is a flowchart of a method for generating point cloud data according to an embodiment of the present disclosure.
  • As shown in FIG. 7, fusing the real point cloud set and the pseudo point cloud set to generate the target point cloud set for model training further includes the following steps:
  • S701: Splice the real point cloud set and the pseudo point cloud set to generate a candidate point cloud set.
  • To obtain a more accurate target detection model, the real point cloud set and the pseudo point cloud set are spliced, and the spliced set serves as the candidate point cloud set. The splicing of point clouds can be understood as the process of computing an ideal coordinate transformation that integrates point cloud data from different viewing angles into a specified coordinate system through rigid transformations such as rotation and translation.
  • As one possible implementation, a method based on local feature description can be used to splice the real point cloud set and the pseudo point cloud set: extract the neighborhood geometric features of each point cloud in the two sets, quickly determine the point-pair correspondences between them from those geometric features, and then compute the transformation matrix from the correspondences.
  • There are many kinds of geometric features for point clouds; among the more common are Fast Point Feature Histograms (FPFH).
  • As another possible implementation, an accurate registration method can be used to splice the real point cloud set and the pseudo point cloud set: accurate registration starts from a known initial transformation matrix and obtains a more precise solution through computations such as the iterative closest point (ICP) algorithm.
  • The ICP algorithm computes the distances between corresponding points in the real point cloud set and the pseudo point cloud set, constructs a rotation-translation matrix, transforms the real point cloud set by this matrix, and computes the mean squared error of the transformed point cloud set.
  • If the mean squared error satisfies the threshold condition, the algorithm ends; otherwise the iterations continue until the error satisfies the threshold condition or the iteration limit is reached.
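Below is a compact sketch of the ICP-style accurate registration described above: a nearest-neighbor match followed by a closed-form (SVD/Kabsch) rigid-transform solve per iteration. It is a generic ICP outline under assumed convergence settings, not the disclosure's exact procedure; libraries such as Open3D also provide FPFH-based coarse registration and ICP refinement out of the box:

```python
import numpy as np
from scipy.spatial import cKDTree

def icp(source, target, iters=30, tol=1e-6):
    """Generic ICP sketch: align `source` (N x 3) onto `target` (M x 3).

    Each iteration matches every source point to its nearest target point,
    solves the optimal rigid transform in closed form, and stops when the
    mean squared error converges or the iteration limit is reached.
    """
    src = source.copy()
    R_total, t_total = np.eye(3), np.zeros(3)
    prev_err = np.inf
    tree = cKDTree(target)
    for _ in range(iters):
        dist, idx = tree.query(src)          # nearest-neighbor correspondences
        matched = target[idx]
        mse = np.mean(dist ** 2)
        if abs(prev_err - mse) < tol:        # error satisfies threshold
            break
        prev_err = mse
        # closed-form rigid alignment (Kabsch) of src onto matched
        mu_s, mu_m = src.mean(axis=0), matched.mean(axis=0)
        H = (src - mu_s).T @ (matched - mu_m)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:             # guard against reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = mu_m - R @ mu_s
        src = src @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total, prev_err
```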
  • S702: Based on the coordinate information of each first point cloud in the pseudo point cloud set and the coordinate information of each second point cloud in the real point cloud set, obtain the Euclidean distance from each first point cloud to the real point cloud set.
  • Each point cloud in the real point cloud set collected by the lidar is taken as a second point cloud, and the center point coordinates of the real point cloud set can be determined from the coordinate information of all the second point clouds. The Euclidean distance between the coordinate information of each first point cloud in the pseudo point cloud set and the determined center point coordinates of the real point cloud set is then computed, yielding the Euclidean distance from each first point cloud to the center of the real point cloud set.
  • S703: Based on the Euclidean distances of the first point clouds, select point clouds from the candidate point cloud set to generate the target point cloud set.
  • Because the candidate point cloud set generated by splicing the real and pseudo point cloud sets contains a large number of points, the computational load would be high. To reduce it, part of the point cloud data in the candidate point cloud set can be removed according to each first point cloud's Euclidean distance to the center point coordinates of the real point cloud set, and the point cloud set remaining after this removal is used as the target point cloud set.
  • In some embodiments, a downsampling method may be used to remove part of the point cloud data in the candidate point cloud set.
  • In this embodiment, splicing the real point cloud set and the pseudo point cloud set increases the accuracy of the target detection model, and selecting point clouds from the candidate point cloud set as the target point cloud set, rather than using all of the point cloud data, reduces the computational load.
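Step S702 reduces to a centroid computation and a per-point distance. A minimal numpy sketch, assuming the averaging variant for the center point (the hypothetical function name is ours):

```python
import numpy as np

def distances_to_real_centroid(pseudo_points, real_points):
    """Euclidean distance of every first (pseudo) point to the center of
    the real point cloud set, taken as the mean of all second (real)
    point coordinates."""
    center = real_points.mean(axis=0)
    return np.linalg.norm(pseudo_points - center, axis=1)
```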
  • As a possible implementation, FIG. 8 is an exemplary schematic diagram of selecting point clouds from the candidate point cloud set based on the Euclidean distances of the first point clouds to generate the target point cloud set. As shown in FIG. 8, this includes the following steps:
  • S801: Based on the Euclidean distance of each first point cloud, generate a retention probability for that first point cloud.
  • Taking autonomous driving as an example, to reduce the computational load, a retention probability can be configured for each first point cloud in the pseudo point cloud set according to its Euclidean distance to the real point cloud set.
  • When configuring these retention probabilities, note that in forward target detection for autonomous driving, the detection result is affected most noticeably by distant objects in the scene. To improve the detection of distant objects, first point clouds with a larger Euclidean distance to the real point cloud set are configured with a larger retention probability, and first point clouds with a smaller Euclidean distance are configured with a smaller retention probability.
  • For example, the first point cloud with the largest Euclidean distance to the real point cloud set may be configured with a retention probability of 0.98, and the first point cloud with the smallest Euclidean distance may be configured with a retention probability of 0.22.
  • S802: Obtain the pre-configured retention probability of each second point cloud.
  • To reduce the computational load, a retention probability may also be pre-configured for each second point cloud in the real point cloud set collected by the lidar.
  • In some embodiments, because the second point clouds in the real point cloud set are sparser than the first point clouds in the pseudo point cloud set, the second point clouds may uniformly be pre-configured with a retention probability close or equal to 1. For example, a retention probability of 0.95 may be uniformly pre-configured for the second point clouds in the real point cloud set.
  • S803: Randomly downsample the candidate point cloud set to obtain the target point cloud set, where the probability used for the random downsampling is the retention probability.
  • Because the candidate point cloud set generated by splicing the real and pseudo point cloud sets contains many points, part of the point cloud data can be removed according to the retention probability of each first point cloud and each second point cloud, and the point cloud set remaining after this removal is used as the target point cloud set.
  • In some embodiments, a random downsampling method may be used to remove part of the point cloud data, where the probability used for the random downsampling is the retention probability. Randomly downsampling by retention probability keeps the effective points that represent the target object while pruning redundant points that cluster at the same location with the same meaning, so that both the near and the far portions of the target point cloud set contain a moderate amount of data that effectively represents the target object.
  • In this embodiment, randomly downsampling the candidate point cloud set according to the retention probabilities of the first and second point clouds reduces the computational load, and at the same time makes the near and far point clouds in the target point cloud set used for model training more balanced, which better satisfies the training requirements.
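Putting S801-S803 together, the small sketch below maps each first point cloud's Euclidean distance to a retention probability with a linear ramp between the example end points 0.22 and 0.98, then keeps each point independently with its probability. The linear ramp, the uniform 0.95 for second point clouds, and the function names are illustrative assumptions; the text only requires a mapping that grows with distance:

```python
import numpy as np

def retention_probabilities(dist, p_min=0.22, p_max=0.98):
    """Map each pseudo point's distance to a retention probability.

    Linear ramp from p_min (nearest) to p_max (farthest); any monotone
    increasing mapping would serve equally well.
    """
    lo, hi = dist.min(), dist.max()
    return p_min + (dist - lo) / max(hi - lo, 1e-9) * (p_max - p_min)

def random_downsample(real_points, pseudo_points, dist, p_real=0.95, seed=0):
    """Keep each point independently with its retention probability and
    concatenate the survivors into the target point cloud set."""
    rng = np.random.default_rng(seed)
    p_pseudo = retention_probabilities(dist)
    keep_real = rng.random(len(real_points)) < p_real      # pre-configured
    keep_pseudo = rng.random(len(pseudo_points)) < p_pseudo
    return np.vstack([real_points[keep_real], pseudo_points[keep_pseudo]])
```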
  • FIG. 9 is a flowchart of a method for generating point cloud data according to an embodiment of the present disclosure. As shown in FIG. 9, obtaining the Euclidean distance from a first point cloud to the real point cloud set, based on the coordinate information of each first point cloud in the pseudo point cloud set and of each second point cloud in the real point cloud set, includes the following steps:
  • S901: Obtain the coordinate information of the second point clouds, and obtain the center point coordinate information of the real point cloud set.
  • The coordinate information of each second point cloud in the real point cloud set is obtained, and the center point coordinate information of the real point cloud set is determined from the coordinate information of all the second point clouds.
  • In some embodiments, when obtaining the center point coordinates of the real point cloud set, the coordinate information of all the second point clouds can be averaged, and the resulting average coordinates are used as the center point coordinate information of the real point cloud set.
  • In some embodiments, when obtaining the center point coordinates of the real point cloud set, the centroid coordinates of the real point cloud set can be computed, and these centroid coordinates are used as the center point coordinate information of the real point cloud set.
  • S902: Determine the Euclidean distance based on the coordinate information of the first point cloud and the center point coordinate information.
  • In this embodiment, determining the Euclidean distance from each first point cloud to the center point coordinates lays the foundation for configuring the retention probabilities of the first point clouds, simplifies the computation, and reduces the computational load.
  • FIG. 10 is a flowchart of a method for generating point cloud data according to an embodiment of the present disclosure. As shown in FIG. 10, the method includes the following steps:
  • S1001: Collect the real point cloud set of the target object based on lidar.
  • S1002: Capture images of the target object, and generate a pseudo point cloud set based on the captured images.
  • S1003: Based on the coordinate information of each first point cloud in the pseudo point cloud set, obtain the ground distance between the first point cloud and the ground equation.
  • S1004: Remove from the pseudo point cloud set the first point clouds whose ground distance is smaller than the set distance threshold.
  • S1005: Splice the real point cloud set and the pseudo point cloud set to generate a candidate point cloud set.
  • S1006: Obtain the coordinate information of the second point clouds, and obtain the center point coordinate information of the real point cloud set.
  • S1007: Determine the Euclidean distances based on the coordinate information of the first point clouds and the center point coordinate information.
  • S1008: Based on the Euclidean distance of each first point cloud, generate the retention probability of that first point cloud.
  • S1009: Obtain the pre-configured retention probability of the second point clouds.
  • S1010: Randomly downsample the candidate point cloud set to obtain the target point cloud set, where the probability used for the random downsampling is the retention probability.
  • S1011: Use the target point cloud set to train the constructed 3D target detection model, so as to generate a trained 3D target detection model.
  • An embodiment of the present application provides a method for generating point cloud data: collect the real point cloud set of a target object based on lidar; capture images of the target object with an image acquisition device and generate a pseudo point cloud set based on the captured images; and fuse the real point cloud set and the pseudo point cloud set to generate the target point cloud set for model training.
  • The present application makes the near and far point clouds in the target point cloud set used for model training more balanced, which better satisfies the training requirements, improves the training accuracy of the model, and facilitates the detection of both near and far targets.
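For orientation, the helper sketches above can be chained into one end-to-end pipeline that mirrors the flow of FIG. 10. One simplification to note: this sketch aligns the pseudo set onto the real set with ICP, whereas the text describes transforming the real set; thresholds and probabilities remain illustrative:

```python
import numpy as np

def generate_target_point_cloud(real_points, pseudo_points,
                                ground_threshold=0.2):
    """End-to-end sketch chaining remove_ground, icp,
    distances_to_real_centroid and random_downsample from the earlier
    sketches. Parameter values are illustrative, not fixed by the
    disclosure."""
    pseudo = remove_ground(pseudo_points, ground_threshold)   # S1003-S1004
    R, t, _ = icp(pseudo, real_points)                        # splicing, S1005
    pseudo = pseudo @ R.T + t
    dist = distances_to_real_centroid(pseudo, real_points)    # S1006-S1007
    return random_downsample(real_points, pseudo, dist)       # S1008-S1010
```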
  • FIG. 11 is a structural diagram of an apparatus 1100 for generating point cloud data according to an embodiment of the present disclosure.
  • As shown in FIG. 11, the apparatus 1100 for generating point cloud data comprises:
  • a real point cloud set acquisition module 1101, configured to collect the real point cloud set of a target object based on lidar;
  • a pseudo point cloud set acquisition module 1102, configured to capture images of the target object and generate a pseudo point cloud set based on the captured images; and
  • a point cloud set fusion module 1103, configured to fuse the real point cloud set and the pseudo point cloud set to generate a target point cloud set for model training.
  • An embodiment of the present application provides an apparatus for generating point cloud data, which collects the real point cloud set of a target object based on lidar; captures images of the target object with an image acquisition device and generates a pseudo point cloud set based on the captured images; and fuses the real point cloud set and the pseudo point cloud set to generate the target point cloud set for model training.
  • The present application makes the near and far point clouds in the target point cloud set used for model training more balanced, which better satisfies the training requirements, improves the training accuracy of the model, and facilitates the detection of both near and far targets.
  • Further, in a possible implementation of this embodiment, the point cloud set fusion module 1103 is specifically configured to: obtain, based on the coordinate information of each first point cloud in the pseudo point cloud set, the ground distance between the first point cloud and the ground equation; and remove from the pseudo point cloud set the first point clouds whose ground distance is smaller than the set distance threshold.
  • Further, in a possible implementation, the point cloud set fusion module 1103 is further configured to: splice the real point cloud set and the pseudo point cloud set to generate a candidate point cloud set; obtain, based on the coordinate information of each first point cloud in the pseudo point cloud set and of each second point cloud in the real point cloud set, the Euclidean distance from each first point cloud to the real point cloud set; and select point clouds from the candidate point cloud set based on those Euclidean distances to generate the target point cloud set.
  • Further, in a possible implementation, the point cloud set fusion module 1103 is further configured to: generate the retention probability of each first point cloud based on its Euclidean distance; obtain the pre-configured retention probability of the second point clouds; and randomly downsample the candidate point cloud set to obtain the target point cloud set, where the probability used for the random downsampling is the retention probability.
  • Further, in a possible implementation, the point cloud set fusion module 1103 is further configured to: obtain the coordinate information of the second point clouds and the center point coordinate information of the real point cloud set; and determine the Euclidean distance based on the coordinate information of the first point cloud and the center point coordinate information.
  • Further, in a possible implementation, the apparatus 1100 for generating point cloud data further includes a model training module 1104, configured to use the target point cloud set to train the constructed 3D target detection model, so as to generate a trained 3D target detection model.
  • In accordance with embodiments of the present disclosure, the present disclosure also provides an electronic device, a readable storage medium, and a computer program product.
  • FIG. 12 shows a schematic block diagram of an example electronic device 1200 that may be used to implement embodiments of the present disclosure.
  • Electronic devices are intended to represent various forms of digital computers, such as laptop computers, desktop computers, workstations, personal digital assistants, servers, blade servers, mainframe computers, and other suitable computers.
  • Electronic devices may also represent various forms of mobile devices, such as personal digital assistants, cellular telephones, smart phones, wearable devices, and other similar computing devices.
  • the components shown herein, their connections and relationships, and their functions, are by way of example only, and are not intended to limit implementations of the disclosure described and/or claimed herein.
  • The device 1200 includes a computing unit 1201, which can perform various appropriate actions and processes according to a computer program stored in a read-only memory (ROM) 1202 or a computer program loaded from a storage unit 1208 into a random access memory (RAM) 1203. The RAM 1203 can also store various programs and data necessary for the operation of the device 1200.
  • The computing unit 1201, the ROM 1202, and the RAM 1203 are connected to each other through a bus 1204.
  • An input/output (I/O) interface 1205 is also connected to the bus 1204.
  • Multiple components in the device 1200 are connected to the I/O interface 1205, including: an input unit 1206, such as a keyboard or a mouse; an output unit 1207, such as various types of displays and speakers; a storage unit 1208, such as a magnetic disk or an optical disc; and a communication unit 1209, such as a network card, a modem, or a wireless communication transceiver.
  • The communication unit 1209 allows the device 1200 to exchange information/data with other devices through a computer network such as the Internet and/or various telecommunication networks.
  • The computing unit 1201 may be any of various general-purpose and/or special-purpose processing components having processing and computing capabilities. Some examples of the computing unit 1201 include, but are not limited to, central processing units (CPUs), graphics processing units (GPUs), various dedicated artificial intelligence (AI) computing chips, various computing units that run machine learning model algorithms, digital signal processors (DSPs), and any suitable processors, controllers, microcontrollers, and so on.
  • The computing unit 1201 executes the various methods and processes described above, such as the method for generating point cloud data.
  • For example, in some embodiments, the method for generating point cloud data may be implemented as a computer software program tangibly embodied in a machine-readable medium, such as the storage unit 1208.
  • In some embodiments, part or all of the computer program may be loaded and/or installed on the device 1200 via the ROM 1202 and/or the communication unit 1209.
  • When the computer program is loaded into the RAM 1203 and executed by the computing unit 1201, one or more steps of the method for generating point cloud data described above can be performed.
  • Alternatively, in other embodiments, the computing unit 1201 may be configured in any other appropriate way (for example, by means of firmware) to execute the method for generating point cloud data.
  • Various implementations of the systems and techniques described above can be realized in digital electronic circuit systems, integrated circuit systems, field programmable gate arrays (FPGAs), application specific integrated circuits (ASICs), application specific standard products (ASSPs), systems on chip (SOCs), complex programmable logic devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof.
  • These various implementations may include implementation in one or more computer programs executable and/or interpretable on a programmable system including at least one programmable processor, which may be a special-purpose or general-purpose programmable processor that can receive data and instructions from a storage system, at least one input device, and at least one output device, and transmit data and instructions to the storage system, the at least one input device, and the at least one output device.
  • Program code for implementing the methods of the present disclosure may be written in any combination of one or more programming languages. The program code may be provided to a processor or controller of a general-purpose computer, a special-purpose computer, or another programmable data processing device, so that when the program code is executed by the processor or controller, the functions/operations specified in the flowcharts and/or block diagrams are implemented.
  • The program code may execute entirely on the machine, partly on the machine, partly on the machine and partly on a remote machine as a stand-alone software package, or entirely on the remote machine or server.
  • In the context of the present disclosure, a machine-readable medium may be a tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device.
  • A machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium.
  • A machine-readable medium may include, but is not limited to, electronic, magnetic, optical, electromagnetic, infrared, or semiconductor systems, apparatuses, or devices, or any suitable combination of the foregoing.
  • More specific examples of machine-readable storage media would include an electrical connection based on one or more wires, a portable computer disk, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, compact disc read-only memory (CD-ROM), optical storage, magnetic storage, or any suitable combination of the foregoing.
  • To provide interaction with a user, the systems and techniques described herein can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to the user; and a keyboard and pointing device (e.g., a mouse or a trackball) through which the user can provide input to the computer.
  • Other kinds of devices can also be used to provide interaction with the user; for example, the feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback), and input from the user can be received in any form (including acoustic input, speech input, or tactile input).
  • The systems and techniques described herein can be implemented in a computing system that includes back-end components (e.g., as a data server), or a computing system that includes middleware components (e.g., an application server), or a computing system that includes front-end components (e.g., a user computer having a graphical user interface or web browser through which a user can interact with implementations of the systems and techniques described herein), or a computing system that includes any combination of such back-end, middleware, or front-end components.
  • The components of the system can be interconnected by any form or medium of digital data communication, e.g., a communication network. Examples of communication networks include local area networks (LANs), wide area networks (WANs), the Internet, and blockchain networks.
  • a computer system may include clients and servers.
  • Clients and servers are generally remote from each other and typically interact through a communication network.
  • the relationship of client and server arises by computer programs running on the respective computers and having a client-server relationship to each other.
  • The server can be a cloud server, also known as a cloud computing server or cloud host; it is a host product in the cloud computing service system that remedies the defects of high management difficulty and weak business scalability found in traditional physical hosts and VPS ("Virtual Private Server") services.
  • The server can also be a server of a distributed system, or a server combined with a blockchain.
  • It should be understood that steps may be reordered, added, or deleted using the various forms of flow shown above.
  • For example, the steps described in the present disclosure may be executed in parallel, sequentially, or in a different order, as long as the desired result of the technical solution disclosed in the present disclosure can be achieved; no limitation is imposed herein.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Graphics (AREA)
  • Software Systems (AREA)
  • Computer Hardware Design (AREA)
  • Architecture (AREA)
  • Geometry (AREA)
  • General Engineering & Computer Science (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)
  • Length Measuring Devices By Optical Means (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Electromagnetism (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Optical Radar Systems And Details Thereof (AREA)

Abstract

The present disclosure discloses a method and apparatus for generating point cloud data, relating to the field of artificial intelligence, in particular to computer vision and deep learning technology, and applicable to autonomous driving and intelligent transportation scenarios. The specific implementation scheme is: collect a real point cloud set of a target object based on lidar; capture images of the target object and generate a pseudo point cloud set based on the captured images; and fuse the real point cloud set and the pseudo point cloud set to generate a target point cloud set for model training.

Description

Method and Apparatus for Generating Point Cloud Data

CROSS-REFERENCE TO RELATED APPLICATIONS

The present disclosure claims priority to Chinese Patent Application No. 202110556351.0, filed on May 21, 2021, the entire contents of which are incorporated herein by reference.

TECHNICAL FIELD

The present disclosure relates to the field of artificial intelligence, in particular to computer vision and deep learning technology, and can be applied to autonomous driving and intelligent transportation scenarios.

BACKGROUND

Deep learning technology has achieved great success in the fields of computer vision and natural language processing in recent years. As a classic subtask of computer vision, the point cloud 3D object detection task has also become a hot topic among deep learning researchers in recent years; the data collected by lidar is usually displayed and processed in the form of a point cloud.

SUMMARY

The present disclosure provides a method, an apparatus, an electronic device, and a storage medium for generating point cloud data.

According to one aspect of the present disclosure, a method for generating point cloud data is provided: collect a real point cloud set of a target object based on lidar; capture images of the target object and generate a pseudo point cloud set based on the captured images; and fuse the real point cloud set and the pseudo point cloud set to generate a target point cloud set for model training. The present application makes the near and far point clouds in the target point cloud set used for model training more balanced, which better satisfies the training requirements, improves the training accuracy of the model, and facilitates the detection of both near and far targets.

According to another aspect of the present disclosure, an apparatus for generating point cloud data is provided.

According to another aspect of the present disclosure, an electronic device is provided.

According to another aspect of the present disclosure, a non-transitory computer-readable storage medium is provided.

According to another aspect of the present disclosure, a computer program product is provided.

To achieve the above objectives, an embodiment of the first aspect of the present disclosure proposes a method for generating point cloud data, including: collecting a real point cloud set of a target object based on lidar; capturing images of the target object and generating a pseudo point cloud set based on the captured images; and fusing the real point cloud set and the pseudo point cloud set to generate a target point cloud set for model training.

To achieve the above objectives, an embodiment of the second aspect of the present disclosure proposes an apparatus for generating point cloud data, including: a real point cloud set acquisition module, configured to collect the real point cloud set of a target object based on lidar; a pseudo point cloud set acquisition module, configured to capture images of the target object and generate a pseudo point cloud set based on the captured images; and a point cloud set fusion module, configured to fuse the real point cloud set and the pseudo point cloud set to generate a target point cloud set for model training.

To achieve the above objectives, an embodiment of the third aspect of the present disclosure proposes an electronic device, including a memory and a processor. The memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to implement the method for generating point cloud data described in the embodiment of the first aspect of the present disclosure.

To achieve the above objectives, an embodiment of the fourth aspect of the present disclosure proposes a non-transitory computer-readable storage medium storing computer instructions, wherein the computer instructions are used to implement the method for generating point cloud data described in the embodiment of the first aspect of the present disclosure.

To achieve the above objectives, an embodiment of the fifth aspect of the present disclosure proposes a computer program product, including a computer program which, when executed by a processor, implements the method for generating point cloud data described in the embodiment of the first aspect of the present disclosure.

It should be understood that the content described in this section is not intended to identify key or important features of the embodiments of the present disclosure, nor is it intended to limit the scope of the present disclosure. Other features of the present disclosure will become easy to understand through the following description.
BRIEF DESCRIPTION OF THE DRAWINGS

The accompanying drawings are used for a better understanding of the present solution and do not constitute a limitation of the present disclosure, wherein:

FIG. 1 is a flowchart of a method for generating point cloud data according to an embodiment of the present disclosure;

FIG. 2 is an RGB image returned by the forward-facing camera of an autonomous driving system according to an embodiment of the present disclosure;

FIG. 3 shows the sparse lidar point cloud data corresponding to the RGB image according to an embodiment of the present disclosure;

FIG. 4 is an RGB image returned by the forward-facing camera of an autonomous driving system according to an embodiment of the present disclosure;

FIG. 5 shows the dense pseudo-lidar point cloud data corresponding to the RGB image according to an embodiment of the present disclosure;

FIG. 6 is a schematic diagram of a method for obtaining the first point clouds according to an embodiment of the present disclosure;

FIG. 7 is a schematic diagram of generating a target point cloud set according to an embodiment of the present disclosure;

FIG. 8 is a schematic diagram of generating a target point cloud set according to an embodiment of the present disclosure;

FIG. 9 is a schematic diagram of obtaining the Euclidean distance from a first point cloud to the real point cloud set according to an embodiment of the present disclosure;

FIG. 10 is a flowchart of a method for generating point cloud data according to an embodiment of the present disclosure;

FIG. 11 is a schematic diagram of an apparatus for generating point cloud data according to an embodiment of the present disclosure;

FIG. 12 is a schematic diagram of an electronic device according to an embodiment of the present disclosure.
DETAILED DESCRIPTION

Exemplary embodiments of the present disclosure are described below with reference to the accompanying drawings, including various details of the embodiments to facilitate understanding; they should be regarded as merely exemplary. Therefore, those of ordinary skill in the art should recognize that various changes and modifications can be made to the embodiments described herein without departing from the scope and spirit of the present disclosure. Likewise, for clarity and conciseness, descriptions of well-known functions and structures are omitted in the following description.

Image processing is a technology that uses a computer to analyze an image in order to obtain a desired result; it is also known as picture processing. Image processing generally refers to digital image processing. A digital image is a large two-dimensional array obtained by shooting with industrial cameras, video cameras, scanners, and similar equipment; the elements of this array are called pixels, and their values are called gray values. Image processing technology generally includes three parts: image compression; enhancement and restoration; and matching, description, and recognition.

Deep Learning (DL) is a new research direction in the field of Machine Learning (ML); it was introduced into machine learning to bring the field closer to its original goal, artificial intelligence. Deep learning learns the internal laws and representation levels of sample data, and the information obtained during this learning process is of great help in interpreting data such as text, images, and sound. Its ultimate goal is to give machines the same analytical learning ability as humans, enabling them to recognize data such as text, images, and sound. Deep learning is a complex machine learning approach whose results in speech and image recognition far exceed those of earlier related techniques.

Computer Vision is the science of how to make machines "see": using cameras and computers instead of human eyes to identify, track, and measure targets, and further performing graphics processing so that the result becomes an image better suited to human observation or to transmission to instruments for inspection. As a scientific discipline, computer vision studies related theories and technologies, attempting to build artificial intelligence systems that can obtain 'information' from images or multidimensional data. The information referred to here is information, in Shannon's sense, that can be used to help make a "decision". Because perception can be regarded as extracting information from sensory signals, computer vision can also be regarded as the science of how to make artificial systems "perceive" from images or multidimensional data.

Artificial Intelligence (AI) is the discipline that studies how to make computers simulate certain human thought processes and intelligent behaviors (such as learning, reasoning, thinking, and planning); it covers technologies at both the hardware and the software level. Artificial intelligence hardware technologies generally include several major areas such as computer vision technology, speech recognition technology, natural language processing technology, machine learning/deep learning, big data processing technology, and knowledge graph technology.
FIG. 1 is a flowchart of a method for generating point cloud data according to an embodiment of the present disclosure. As shown in FIG. 1, the method comprises the following steps:

S101: Collect a real point cloud set of the target object based on lidar.

A light detection and ranging (LiDAR) system, also known as laser radar, consists of a transmitting system, a receiving system, information processing, and other parts. A lidar can produce points at the level of hundreds of thousands, millions, or even tens of millions per second, which are collectively called a point cloud. A point cloud is, simply put, a number of points scattered in space; each point contains three-dimensional coordinates (XYZ) and laser reflection intensity (Intensity) or color information (Red Green Blue, RGB). The points are obtained by the lidar emitting laser signals toward objects or the ground and collecting the reflected laser signals; through joint solution and deviation correction, the accurate spatial information of these points can be computed. The point cloud data obtained by lidar can be used in systems such as digital elevation model production, 3D modeling, agricultural and forestry censuses, earthwork calculation, geological disaster monitoring, and autonomous driving.

In some embodiments, taking the application of lidar to an autonomous driving system as an example, the lidar installed on an autonomous vehicle can collect the point clouds of objects and the ground within the vehicle's forward field of view as the real point cloud set, where an object in front, such as a vehicle, a pedestrian, or a tree, may serve as the target object. As an example, FIG. 2 is the RGB image returned by the forward-facing camera of the autonomous driving system, and FIG. 3 is the sparse lidar point cloud data corresponding to that RGB image. In some embodiments, the forward-facing camera may include a forward-facing monocular RGB camera or a forward-facing binocular RGB camera.
S102: Capture images of the target object, and generate a pseudo point cloud set based on the captured images.

In embodiments of the present application, dense pseudo point cloud data can be obtained to supplement the point cloud data that the lidar collects for the target object.

In some embodiments, pseudo point cloud data can be obtained from a depth image captured by a depth-image acquisition device; in some embodiments, the pixel depths of the captured depth image are back-projected into a 3D point cloud to obtain the pseudo point cloud data.

In some embodiments, images of the target object can be captured based on binocular vision: using the parallax principle, an imaging device acquires two images of the measured object from different positions, and the pseudo point cloud data is obtained by computing the positional deviation between corresponding points of the two images.

In some embodiments, images of the target object can be captured based on monocular vision: the rotation and translation between the captured images are computed, and the pseudo point cloud data is obtained by triangulating matched points.

In some embodiments, taking the autonomous driving application as an example, a forward-facing monocular RGB camera or a forward-facing binocular RGB camera can be used to collect the point clouds of objects and the ground in front of the autonomous vehicle's field of view as the pseudo point cloud set. As an example, FIG. 4 is the RGB image returned by the forward-facing camera of the autonomous driving system, and FIG. 5 is the dense pseudo-lidar point cloud data corresponding to that RGB image. In some embodiments, the forward-facing camera may include a forward-facing monocular RGB camera or a forward-facing binocular RGB camera.

S103: Fuse the real point cloud set and the pseudo point cloud set to generate a target point cloud set for model training.

In the point cloud data acquired by lidar, points closer to the sensor are denser and points farther away are sparser, so detection works well nearby but degrades severely with distance. To avoid this problem, the acquired real point cloud set and pseudo point cloud set are fused to obtain the target point cloud set. Because the pseudo point cloud set contains a large amount of data, the dense pseudo point cloud set can be used to supplement the real point cloud set with additional points, so that the near and far point clouds in the target point cloud set used for model training are more balanced. This better satisfies the training requirements, improves the training accuracy of the model, and facilitates the detection of both near and far targets.

An embodiment of the present application provides a method for generating point cloud data: collect the real point cloud set of a target object based on lidar; capture images of the target object and generate a pseudo point cloud set based on the captured images; and fuse the real point cloud set and the pseudo point cloud set to generate the target point cloud set for model training. The present application makes the near and far point clouds in the target point cloud set used for model training more balanced, which better satisfies the training requirements, improves the training accuracy of the model, and facilitates the detection of both near and far targets.
On the basis of the above embodiments, because the pseudo point cloud data is dense, fusing too many pseudo point clouds would increase the computational load of model training and affect the model's accuracy; therefore, before the real point cloud set and the pseudo point cloud set are fused, the first point clouds in the pseudo point cloud set need to be filtered. FIG. 6 is a flowchart of a method for generating point cloud data according to an embodiment of the present disclosure. As shown in FIG. 6, before the real point cloud set and the pseudo point cloud set are fused to generate the target point cloud set for model training, the method includes the following steps:

S601: Based on the coordinate information of each first point cloud in the pseudo point cloud set, obtain the ground distance between the first point cloud and the ground equation.

The ground equation is computed from all of the point cloud data in the pseudo point cloud set. In some embodiments, the ground equation may be obtained by singular value decomposition (SVD). After the ground equation is obtained, each point cloud in the pseudo point cloud set is taken as a first point cloud, and the ground distance between each first point cloud and the ground equation is obtained from the coordinate information of that first point cloud.

S602: Remove from the pseudo point cloud set the first point clouds whose ground distance is smaller than a set distance threshold.

The pseudo point cloud set contains a large amount of ground point cloud data and points close to the ground; such data is useless for the training and detection of the target detection system and only increases the computational load. Therefore, a distance threshold is set, and if the ground distance between a first point cloud and the ground equation is smaller than the set distance threshold, that first point cloud is removed from the pseudo point cloud set. Taking a distance threshold of 10 as an example, the first point clouds whose ground distance to the ground equation is less than 10 are removed from the pseudo point cloud set.

In this embodiment of the present application, the ground point clouds are removed from the pseudo point cloud set, eliminating a large amount of invalid point cloud data, which reduces the computational load of the target detection model and increases its robustness and accuracy.
FIG. 7 is a flowchart of a method for generating point cloud data according to an embodiment of the present disclosure. On the basis of the above embodiments, as shown in FIG. 7, fusing the real point cloud set and the pseudo point cloud set to generate the target point cloud set for model training further includes the following steps:

S701: Splice the real point cloud set and the pseudo point cloud set to generate a candidate point cloud set.

To obtain a more accurate target detection model, the real point cloud set and the pseudo point cloud set need to be spliced, and the spliced point cloud set serves as the candidate point cloud set. The splicing of point clouds can be understood as the process of computing an ideal coordinate transformation that integrates point cloud data from different viewing angles into a specified coordinate system through rigid transformations such as rotation and translation.

As one possible implementation, a method based on local feature description can be used to splice the real point cloud set and the pseudo point cloud set: extract the neighborhood geometric features of each point cloud in the two sets, quickly determine the point-pair correspondences between them from those geometric features, and then compute the transformation matrix from the correspondences. There are many kinds of geometric features for point clouds; among the more common are Fast Point Feature Histograms (FPFH).

As another possible implementation, an accurate registration method can be used to splice the real point cloud set and the pseudo point cloud set: accurate registration starts from a known initial transformation matrix and obtains a more precise solution through computations such as the iterative closest point (ICP) algorithm. The ICP algorithm computes the distances between corresponding points in the real point cloud set and the pseudo point cloud set, constructs a rotation-translation matrix, transforms the real point cloud set by this matrix, and computes the mean squared error of the transformed point cloud set. If the mean squared error satisfies the threshold condition, the algorithm ends; otherwise the iterations continue until the error satisfies the threshold condition or the iteration limit is reached.

S702: Based on the coordinate information of each first point cloud in the pseudo point cloud set and the coordinate information of each second point cloud in the real point cloud set, obtain the Euclidean distance from each first point cloud to the real point cloud set.

Each point cloud in the real point cloud set collected by the lidar is taken as a second point cloud, and the center point coordinates of the real point cloud set can be determined from the coordinate information of all the second point clouds. The Euclidean distance between the coordinate information of each first point cloud in the pseudo point cloud set and the determined center point coordinates of the real point cloud set is then computed, yielding the Euclidean distance from each first point cloud to the center of the real point cloud set.

S703: Based on the Euclidean distances of the first point clouds, select point clouds from the candidate point cloud set to generate the target point cloud set.

Because the candidate point cloud set generated by splicing the real and pseudo point cloud sets contains a large number of points, the computational load would be high. To reduce it, part of the point cloud data in the candidate point cloud set can be removed according to each first point cloud's Euclidean distance to the center point coordinates of the real point cloud set, and the point cloud set remaining after this removal is used as the target point cloud set. In some embodiments, a downsampling method may be used to remove part of the point cloud data in the candidate point cloud set.

In this embodiment of the present application, splicing the real point cloud set and the pseudo point cloud set increases the accuracy of the target detection model, and selecting point clouds from the candidate point cloud set as the target point cloud set, rather than using all of the point cloud data, reduces the computational load.
As a possible implementation, FIG. 8 is an exemplary schematic diagram of selecting point clouds from the candidate point cloud set based on the Euclidean distances of the first point clouds to generate the target point cloud set. As shown in FIG. 8, this includes the following steps:

S801: Based on the Euclidean distance of each first point cloud, generate a retention probability for that first point cloud.

Taking autonomous driving as an example, to reduce the computational load, a retention probability can be configured for each first point cloud in the pseudo point cloud set according to its Euclidean distance to the real point cloud set. When configuring these retention probabilities, note that in forward target detection for autonomous driving, the detection result is affected most noticeably by distant objects in the scene. To improve the detection of distant objects, first point clouds with a larger Euclidean distance to the real point cloud set are configured with a larger retention probability, and first point clouds with a smaller Euclidean distance are configured with a smaller retention probability. For example, the first point cloud with the largest Euclidean distance to the real point cloud set may be configured with a retention probability of 0.98, and the first point cloud with the smallest Euclidean distance may be configured with a retention probability of 0.22.

S802: Obtain the pre-configured retention probability of each second point cloud.

To reduce the computational load, a retention probability may be pre-configured for each second point cloud in the real point cloud set collected by the lidar.

In some embodiments, because the second point clouds in the real point cloud set are sparser than the first point clouds in the pseudo point cloud set, the second point clouds in the real point cloud set may uniformly be pre-configured with a retention probability close or equal to 1. For example, a retention probability of 0.95 may be uniformly pre-configured for the second point clouds in the real point cloud set.

S803: Randomly downsample the candidate point cloud set to obtain the target point cloud set, where the probability used for the random downsampling is the retention probability.

Because the candidate point cloud set generated by splicing the real and pseudo point cloud sets contains many points, the computational load would be high. To reduce it, part of the point cloud data in the candidate point cloud set can be removed according to the retention probability of each first point cloud and each second point cloud, and the point cloud set remaining after this removal is used as the target point cloud set.

In some embodiments, a random downsampling method may be used to remove part of the point cloud data in the candidate point cloud set generated by splicing the real and pseudo point cloud sets, where the probability used for the random downsampling is the retention probability. Randomly downsampling the candidate point cloud set by retention probability keeps the effective points that represent the target object while pruning, to the greatest extent, redundant points that cluster at the same location with the same meaning, so that both the near and the far portions of the target point cloud set contain a moderate amount of data that effectively represents the target object.

In this embodiment of the present application, randomly downsampling the candidate point cloud set according to the retention probabilities of the first and second point clouds reduces the computational load, and at the same time makes the near and far point clouds in the target point cloud set used for model training more balanced, which better satisfies the training requirements.
On the basis of the above embodiments, FIG. 9 is a flowchart of a method for generating point cloud data according to an embodiment of the present disclosure. As shown in FIG. 9, obtaining the Euclidean distance from a first point cloud to the real point cloud set, based on the coordinate information of each first point cloud in the pseudo point cloud set and of each second point cloud in the real point cloud set, includes the following steps:

S901: Obtain the coordinate information of the second point clouds, and obtain the center point coordinate information of the real point cloud set.

The coordinate information of each second point cloud in the real point cloud set is obtained, and the center point coordinate information of the real point cloud set is determined from the coordinate information of all the second point clouds.

In some embodiments, when obtaining the center point coordinates of the real point cloud set, the coordinate information of all the second point clouds can be averaged, and the resulting average coordinates are used as the center point coordinate information of the real point cloud set.

In some embodiments, when obtaining the center point coordinates of the real point cloud set, the centroid coordinates of the real point cloud set can be computed, and these centroid coordinates are used as the center point coordinate information of the real point cloud set.

S902: Determine the Euclidean distance based on the coordinate information of the first point cloud and the center point coordinate information.

From the center point coordinate information of the real point cloud set determined above, the Euclidean distance from each first point cloud in the pseudo point cloud set to the center point coordinates is computed.

In this embodiment of the present application, determining the Euclidean distance from each first point cloud to the center point coordinates, based on the first point cloud's coordinate information and the center point coordinate information, lays the foundation for configuring the retention probability of the first point clouds, simplifies the computation, and reduces the computational load.
FIG. 10 is a flowchart of a method for generating point cloud data according to an embodiment of the present disclosure. As shown in FIG. 10, the method includes the following steps:

S1001: Collect the real point cloud set of the target object based on lidar.

S1002: Capture images of the target object, and generate a pseudo point cloud set based on the captured images.

Steps S1001-S1002 have been described in detail in the above embodiments and are not repeated here.

S1003: Based on the coordinate information of each first point cloud in the pseudo point cloud set, obtain the ground distance between the first point cloud and the ground equation.

S1004: Remove from the pseudo point cloud set the first point clouds whose ground distance is smaller than the set distance threshold.

Steps S1003-S1004 have been described in detail in the above embodiments and are not repeated here.

S1005: Splice the real point cloud set and the pseudo point cloud set to generate a candidate point cloud set.

S1006: Obtain the coordinate information of the second point clouds, and obtain the center point coordinate information of the real point cloud set.

S1007: Determine the Euclidean distances based on the coordinate information of the first point clouds and the center point coordinate information.

S1008: Based on the Euclidean distance of each first point cloud, generate the retention probability of that first point cloud.

S1009: Obtain the pre-configured retention probability of the second point clouds.

S1010: Randomly downsample the candidate point cloud set to obtain the target point cloud set, where the probability used for the random downsampling is the retention probability.

Steps S1005-S1010 have been described in detail in the above embodiments and are not repeated here.

S1011: Use the target point cloud set to train the constructed 3D target detection model, so as to generate a trained 3D target detection model.

An embodiment of the present application provides a method for generating point cloud data: collect the real point cloud set of a target object based on lidar; capture images of the target object with an image acquisition device and generate a pseudo point cloud set based on the captured images; and fuse the real point cloud set and the pseudo point cloud set to generate the target point cloud set for model training. The present application makes the near and far point clouds in the target point cloud set used for model training more balanced, which better satisfies the training requirements, improves the training accuracy of the model, and facilitates the detection of both near and far targets.

All embodiments of the present disclosure may be performed individually or in combination with other embodiments, and all of them are regarded as within the scope of protection claimed by the present disclosure.
FIG. 11 is a structural diagram of an apparatus 1100 for generating point cloud data according to an embodiment of the present disclosure. As shown in FIG. 11, the apparatus 1100 for generating point cloud data comprises:

a real point cloud set acquisition module 1101, configured to collect the real point cloud set of a target object based on lidar;

a pseudo point cloud set acquisition module 1102, configured to capture images of the target object and generate a pseudo point cloud set based on the captured images; and

a point cloud set fusion module 1103, configured to fuse the real point cloud set and the pseudo point cloud set to generate a target point cloud set for model training.

It should be noted that the foregoing explanations of the method embodiments for generating point cloud data also apply to the apparatus for generating point cloud data of the present application, and are not repeated here.

An embodiment of the present application provides an apparatus for generating point cloud data, which collects the real point cloud set of a target object based on lidar; captures images of the target object with an image acquisition device and generates a pseudo point cloud set based on the captured images; and fuses the real point cloud set and the pseudo point cloud set to generate the target point cloud set for model training. The present application makes the near and far point clouds in the target point cloud set used for model training more balanced, which better satisfies the training requirements, improves the training accuracy of the model, and facilitates the detection of both near and far targets.

Further, in a possible implementation of this embodiment of the present disclosure, the point cloud set fusion module 1103 is specifically configured to: obtain, based on the coordinate information of each first point cloud in the pseudo point cloud set, the ground distance between the first point cloud and the ground equation; and remove from the pseudo point cloud set the first point clouds whose ground distance is smaller than the set distance threshold.

Further, in a possible implementation of this embodiment of the present disclosure, the point cloud set fusion module 1103 is further configured to: splice the real point cloud set and the pseudo point cloud set to generate a candidate point cloud set; obtain, based on the coordinate information of each first point cloud in the pseudo point cloud set and of each second point cloud in the real point cloud set, the Euclidean distance from each first point cloud to the real point cloud set; and select point clouds from the candidate point cloud set based on the Euclidean distances of the first point clouds to generate the target point cloud set.

Further, in a possible implementation of this embodiment of the present disclosure, the point cloud set fusion module 1103 is further configured to: generate the retention probability of each first point cloud based on its Euclidean distance; obtain the pre-configured retention probability of the second point clouds; and randomly downsample the candidate point cloud set to obtain the target point cloud set, where the probability used for the random downsampling is the retention probability.

Further, in a possible implementation of this embodiment of the present disclosure, the point cloud set fusion module 1103 is further configured to: obtain the coordinate information of the second point clouds and the center point coordinate information of the real point cloud set; and determine the Euclidean distance based on the coordinate information of the first point cloud and the center point coordinate information.

Further, in a possible implementation of this embodiment of the present disclosure, the apparatus 1100 for generating point cloud data further includes: a model training module 1104, configured to use the target point cloud set to train the constructed 3D target detection model, so as to generate a trained 3D target detection model.
In accordance with embodiments of the present disclosure, the present disclosure also provides an electronic device, a readable storage medium, and a computer program product.

FIG. 12 shows a schematic block diagram of an example electronic device 1200 that may be used to implement embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptop computers, desktop computers, workstations, personal digital assistants, servers, blade servers, mainframe computers, and other suitable computers. Electronic devices may also represent various forms of mobile devices, such as personal digital assistants, cellular telephones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions are merely examples, and are not intended to limit the implementations of the present disclosure described and/or claimed herein.

As shown in FIG. 12, the device 1200 includes a computing unit 1201, which can perform various appropriate actions and processes according to a computer program stored in a read-only memory (ROM) 1202 or a computer program loaded from a storage unit 1208 into a random access memory (RAM) 1203. The RAM 1203 can also store various programs and data necessary for the operation of the device 1200. The computing unit 1201, the ROM 1202, and the RAM 1203 are connected to each other through a bus 1204. An input/output (I/O) interface 1205 is also connected to the bus 1204.

Multiple components in the device 1200 are connected to the I/O interface 1205, including: an input unit 1206, such as a keyboard or a mouse; an output unit 1207, such as various types of displays and speakers; a storage unit 1208, such as a magnetic disk or an optical disc; and a communication unit 1209, such as a network card, a modem, or a wireless communication transceiver. The communication unit 1209 allows the device 1200 to exchange information/data with other devices through a computer network such as the Internet and/or various telecommunication networks.

The computing unit 1201 may be any of various general-purpose and/or special-purpose processing components having processing and computing capabilities. Some examples of the computing unit 1201 include, but are not limited to, central processing units (CPUs), graphics processing units (GPUs), various dedicated artificial intelligence (AI) computing chips, various computing units that run machine learning model algorithms, digital signal processors (DSPs), and any suitable processors, controllers, microcontrollers, and so on. The computing unit 1201 executes the various methods and processes described above, such as the method for generating point cloud data. For example, in some embodiments, the method for generating point cloud data may be implemented as a computer software program tangibly embodied in a machine-readable medium, such as the storage unit 1208. In some embodiments, part or all of the computer program may be loaded and/or installed on the device 1200 via the ROM 1202 and/or the communication unit 1209. When the computer program is loaded into the RAM 1203 and executed by the computing unit 1201, one or more steps of the method for generating point cloud data described above can be performed. Alternatively, in other embodiments, the computing unit 1201 may be configured in any other appropriate way (for example, by means of firmware) to execute the method for generating point cloud data.

Various implementations of the systems and techniques described above herein can be realized in digital electronic circuit systems, integrated circuit systems, field programmable gate arrays (FPGAs), application specific integrated circuits (ASICs), application specific standard products (ASSPs), systems on chip (SOCs), complex programmable logic devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various implementations may include implementation in one or more computer programs executable and/or interpretable on a programmable system including at least one programmable processor, which may be a special-purpose or general-purpose programmable processor that can receive data and instructions from a storage system, at least one input device, and at least one output device, and transmit data and instructions to the storage system, the at least one input device, and the at least one output device.

Program code for implementing the methods of the present disclosure may be written in any combination of one or more programming languages. The program code may be provided to a processor or controller of a general-purpose computer, a special-purpose computer, or another programmable data processing device, so that when the program code is executed by the processor or controller, the functions/operations specified in the flowcharts and/or block diagrams are implemented. The program code may execute entirely on the machine, partly on the machine, partly on the machine and partly on a remote machine as a stand-alone software package, or entirely on the remote machine or server.

In the context of the present disclosure, a machine-readable medium may be a tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device. A machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, electronic, magnetic, optical, electromagnetic, infrared, or semiconductor systems, apparatuses, or devices, or any suitable combination of the foregoing. More specific examples of machine-readable storage media would include an electrical connection based on one or more wires, a portable computer disk, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, compact disc read-only memory (CD-ROM), optical storage, magnetic storage, or any suitable combination of the foregoing.

To provide interaction with a user, the systems and techniques described herein can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to the user; and a keyboard and pointing device (e.g., a mouse or a trackball) through which the user can provide input to the computer. Other kinds of devices can also be used to provide interaction with the user; for example, the feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback), and input from the user can be received in any form (including acoustic input, speech input, or tactile input).

The systems and techniques described herein can be implemented in a computing system that includes back-end components (e.g., as a data server), or a computing system that includes middleware components (e.g., an application server), or a computing system that includes front-end components (e.g., a user computer having a graphical user interface or web browser through which a user can interact with implementations of the systems and techniques described herein), or a computing system that includes any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication, e.g., a communication network. Examples of communication networks include local area networks (LANs), wide area networks (WANs), the Internet, and blockchain networks.

A computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server can be a cloud server, also known as a cloud computing server or cloud host; it is a host product in the cloud computing service system that remedies the defects of high management difficulty and weak business scalability found in traditional physical hosts and VPS ("Virtual Private Server") services. The server can also be a server of a distributed system, or a server combined with a blockchain.

It should be understood that steps may be reordered, added, or deleted using the various forms of flow shown above. For example, the steps described in the present disclosure may be executed in parallel, sequentially, or in a different order, as long as the desired result of the technical solution disclosed in the present disclosure can be achieved; no limitation is imposed herein.

The above specific implementations do not constitute a limitation on the scope of protection of the present disclosure. Those skilled in the art should understand that various modifications, combinations, sub-combinations, and substitutions can be made according to design requirements and other factors. Any modification, equivalent replacement, improvement, and the like made within the spirit and principles of the present disclosure shall be included within the scope of protection of the present disclosure.

Claims (15)

  1. A method for generating point cloud data, comprising:
    collecting a real point cloud set of a target object based on lidar;
    capturing images of the target object, and generating a pseudo point cloud set based on the captured images; and
    fusing the real point cloud set and the pseudo point cloud set to generate a target point cloud set for model training.
  2. The method according to claim 1, wherein fusing the real point cloud set and the pseudo point cloud set to generate the target point cloud set for model training further comprises:
    obtaining, based on coordinate information of each first point cloud in the pseudo point cloud set, a ground distance between the first point cloud and a ground equation; and
    removing from the pseudo point cloud set the first point clouds whose ground distance is smaller than a set distance threshold.
  3. The method according to claim 1 or 2, wherein fusing the real point cloud set and the pseudo point cloud set to generate the target point cloud set for model training further comprises:
    splicing the real point cloud set and the pseudo point cloud set to generate a candidate point cloud set;
    obtaining, based on the coordinate information of each first point cloud in the pseudo point cloud set and the coordinate information of each second point cloud in the real point cloud set, a Euclidean distance from the first point cloud to the real point cloud set; and
    selecting point clouds from the candidate point cloud set based on the Euclidean distances of the first point clouds, to generate the target point cloud set.
  4. The method according to claim 3, wherein selecting point clouds from the candidate point cloud set based on the Euclidean distances of the first point clouds, to generate the target point cloud set, comprises:
    generating a retention probability of each first point cloud based on its Euclidean distance;
    obtaining a pre-configured retention probability of the second point clouds; and
    randomly downsampling the candidate point cloud set to obtain the target point cloud set, wherein the probability used for the random downsampling is the retention probability.
  5. The method according to claim 3, wherein obtaining, based on the coordinate information of each first point cloud in the pseudo point cloud set and the coordinate information of each second point cloud in the real point cloud set, the Euclidean distance from the first point cloud to the real point cloud set comprises:
    obtaining the coordinate information of the second point clouds, and obtaining center point coordinate information of the real point cloud set; and
    determining the Euclidean distance based on the coordinate information of the first point cloud and the center point coordinate information.
  6. The method according to claim 1, wherein generating the target point cloud set for model training further comprises:
    using the target point cloud set to train a constructed 3D target detection model, so as to generate a trained 3D target detection model.
  7. An apparatus for generating point cloud data, comprising:
    a real point cloud set acquisition module, configured to collect a real point cloud set of a target object based on lidar;
    a pseudo point cloud set acquisition module, configured to capture images of the target object and generate a pseudo point cloud set based on the captured images; and
    a point cloud set fusion module, configured to fuse the real point cloud set and the pseudo point cloud set to generate a target point cloud set for model training.
  8. The apparatus according to claim 7, wherein the point cloud set fusion module is configured to:
    obtain, based on coordinate information of each first point cloud in the pseudo point cloud set, a ground distance between the first point cloud and a ground equation; and
    remove from the pseudo point cloud set the first point clouds whose ground distance is smaller than a set distance threshold.
  9. The apparatus according to claim 7 or 8, wherein the point cloud set fusion module is further configured to:
    splice the real point cloud set and the pseudo point cloud set to generate a candidate point cloud set;
    obtain, based on the coordinate information of each first point cloud in the pseudo point cloud set and the coordinate information of each second point cloud in the real point cloud set, a Euclidean distance from the first point cloud to the real point cloud set; and
    select point clouds from the candidate point cloud set based on the Euclidean distances of the first point clouds, to generate the target point cloud set.
  10. The apparatus according to claim 9, wherein the point cloud set fusion module is further configured to:
    generate a retention probability of each first point cloud based on its Euclidean distance;
    obtain a pre-configured retention probability of the second point clouds; and
    randomly downsample the candidate point cloud set to obtain the target point cloud set, wherein the probability used for the random downsampling is the retention probability.
  11. The apparatus according to claim 9, wherein the point cloud set fusion module is further configured to:
    obtain the coordinate information of the second point clouds, and obtain center point coordinate information of the real point cloud set; and
    determine the Euclidean distance based on the coordinate information of the first point cloud and the center point coordinate information.
  12. The apparatus according to claim 7, further comprising:
    a model training module, configured to use the target point cloud set to train a constructed 3D target detection model, so as to generate a trained 3D target detection model.
  13. An electronic device, comprising:
    at least one processor; and
    a memory communicatively connected to the at least one processor; wherein
    the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to enable the at least one processor to perform the method according to any one of claims 1-6.
  14. A non-transitory computer-readable storage medium storing computer instructions, wherein the computer instructions are used to cause the computer to perform the method according to any one of claims 1-6.
  15. A computer program product, comprising a computer program which, when executed by a processor, implements the steps according to any one of claims 1-6.
PCT/CN2022/088312 2021-05-21 2022-04-21 Method and apparatus for generating point cloud data WO2022242416A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
KR1020237008339A KR20230042383A (ko) 2021-05-21 2022-04-21 포인트 클라우드 데이터 생성 방법 및 장치
JP2022561443A JP2023529527A (ja) 2021-05-21 2022-04-21 点群データの生成方法及び装置

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110556351.0A CN113362444B (zh) 2021-05-21 2021-05-21 点云数据的生成方法、装置、电子设备及存储介质
CN202110556351.0 2021-05-21

Publications (1)

Publication Number Publication Date
WO2022242416A1 true WO2022242416A1 (zh) 2022-11-24

Family

ID=77526597

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/088312 WO2022242416A1 (zh) 2021-05-21 2022-04-21 点云数据的生成方法和装置

Country Status (4)

Country Link
JP (1) JP2023529527A (zh)
KR (1) KR20230042383A (zh)
CN (1) CN113362444B (zh)
WO (1) WO2022242416A1 (zh)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116168366A (zh) * 2023-01-19 2023-05-26 北京百度网讯科技有限公司 Point cloud data generation method, model training method, target detection method, and apparatus
CN116222577A (zh) * 2023-04-27 2023-06-06 苏州浪潮智能科技有限公司 Loop closure detection method, training method, ***, electronic device, and storage medium
CN116758006A (zh) * 2023-05-18 2023-09-15 广州广检建设工程检测中心有限公司 Scaffold quality inspection method and apparatus
CN117058464A (zh) * 2023-08-31 2023-11-14 强联智创(北京)科技有限公司 Method and device for training a generative model that generates healthy vessel surfaces
CN117115225A (zh) * 2023-09-01 2023-11-24 安徽羽亿信息科技有限公司 Intelligent integrated information management platform for natural resources
CN117173342A (zh) * 2023-11-02 2023-12-05 中国海洋大学 Device and method for mobile 3D reconstruction under natural light based on underwater monocular and binocular cameras

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113362444B (zh) 2021-05-21 2023-06-16 北京百度网讯科技有限公司 Method and apparatus for generating point cloud data, electronic device, and storage medium
CN115235482A (zh) * 2021-09-28 2022-10-25 上海仙途智能科技有限公司 Map updating method and apparatus, computer device, and medium
CN115830262B (zh) * 2023-02-14 2023-05-26 济南市勘察测绘研究院 Method and apparatus for building a real-scene 3D model based on object segmentation
KR102573935B1 (ko) * 2023-04-27 2023-09-04 주식회사 루트릭스 Tree data processing method and apparatus
CN116577350A (zh) * 2023-07-13 2023-08-11 北京航空航天大学杭州创新研究院 Device for collecting point clouds of fuzz balls on material surfaces and method for collecting fuzz-ball data on material surfaces

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111340797A (zh) * 2020-03-10 2020-06-26 山东大学 Lidar and binocular camera data fusion detection method and ***
CN112001958A (zh) * 2020-10-28 2020-11-27 浙江浙能技术研究院有限公司 Virtual point cloud 3D target detection method based on supervised monocular depth estimation
CN112419494A (zh) * 2020-10-09 2021-02-26 腾讯科技(深圳)有限公司 Obstacle detection and labeling method, device, and storage medium for autonomous driving
CN113362444A (zh) * 2021-05-21 2021-09-07 北京百度网讯科技有限公司 Method and apparatus for generating point cloud data, electronic device, and storage medium

Family Cites Families (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180136332A1 (en) * 2016-11-15 2018-05-17 Wheego Electric Cars, Inc. Method and system to annotate objects and determine distances to objects in an image
CN108230379B (zh) * 2017-12-29 2020-12-04 百度在线网络技术(北京)有限公司 Method and apparatus for fusing point cloud data
US11378718B2 (en) * 2018-10-02 2022-07-05 Robert S. Phelan Unmanned aerial vehicle system and methods
US11494930B2 (en) * 2019-06-17 2022-11-08 SafeAI, Inc. Techniques for volumetric estimation
US11602974B2 (en) * 2019-08-29 2023-03-14 Here Global B.V. System and method for generating map data associated with road objects
CN111161202A (zh) * 2019-12-30 2020-05-15 上海眼控科技股份有限公司 Vehicle behavior information acquisition method and apparatus, computer device, and storage medium
CN111292369B (zh) * 2020-03-10 2023-04-28 中车青岛四方车辆研究所有限公司 Pseudo point cloud data generation method for lidar
CN111652855B (zh) * 2020-05-19 2022-05-06 西安交通大学 Point cloud simplification method based on survival probability
CN111649752B (zh) * 2020-05-29 2021-09-21 北京四维图新科技股份有限公司 Map data processing method, apparatus, and device for congested road sections
CN111739005B (zh) * 2020-06-22 2023-08-08 北京百度网讯科技有限公司 Image detection method and apparatus, electronic device, and storage medium
CN111784659A (zh) * 2020-06-29 2020-10-16 北京百度网讯科技有限公司 Image detection method and apparatus, electronic device, and storage medium
CN111915746B (zh) * 2020-07-16 2022-09-13 北京理工大学 Weakly-annotated 3D point cloud target detection method and annotation tool

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111340797A (zh) * 2020-03-10 2020-06-26 山东大学 Lidar and binocular camera data fusion detection method and ***
CN112419494A (zh) * 2020-10-09 2021-02-26 腾讯科技(深圳)有限公司 Obstacle detection and labeling method, device, and storage medium for autonomous driving
CN112001958A (zh) * 2020-10-28 2020-11-27 浙江浙能技术研究院有限公司 Virtual point cloud 3D target detection method based on supervised monocular depth estimation
CN113362444A (zh) * 2021-05-21 2021-09-07 北京百度网讯科技有限公司 Method and apparatus for generating point cloud data, electronic device, and storage medium

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116168366A (zh) * 2023-01-19 2023-05-26 北京百度网讯科技有限公司 Point cloud data generation method, model training method, target detection method, and apparatus
CN116168366B (zh) * 2023-01-19 2023-12-05 北京百度网讯科技有限公司 Point cloud data generation method, model training method, target detection method, and apparatus
CN116222577A (zh) * 2023-04-27 2023-06-06 苏州浪潮智能科技有限公司 Loop closure detection method, training method, ***, electronic device, and storage medium
CN116758006A (zh) * 2023-05-18 2023-09-15 广州广检建设工程检测中心有限公司 Scaffold quality inspection method and apparatus
CN116758006B (zh) * 2023-05-18 2024-02-06 广州广检建设工程检测中心有限公司 Scaffold quality inspection method and apparatus
CN117058464A (zh) * 2023-08-31 2023-11-14 强联智创(北京)科技有限公司 Method and device for training a generative model that generates healthy vessel surfaces
CN117058464B (zh) * 2023-08-31 2024-06-11 强联智创(北京)科技有限公司 Method and device for training a generative model that generates healthy vessel surfaces
CN117115225A (zh) * 2023-09-01 2023-11-24 安徽羽亿信息科技有限公司 Intelligent integrated information management platform for natural resources
CN117115225B (zh) * 2023-09-01 2024-04-30 安徽羽亿信息科技有限公司 Intelligent integrated information management platform for natural resources
CN117173342A (zh) * 2023-11-02 2023-12-05 中国海洋大学 Device and method for mobile 3D reconstruction under natural light based on underwater monocular and binocular cameras

Also Published As

Publication number Publication date
CN113362444B (zh) 2023-06-16
JP2023529527A (ja) 2023-07-11
CN113362444A (zh) 2021-09-07
KR20230042383A (ko) 2023-03-28

Similar Documents

Publication Publication Date Title
WO2022242416A1 (zh) Method and apparatus for generating point cloud data
JP6745328B2 (ja) Method and apparatus for recovering point cloud data
JP7106665B2 (ja) Monocular depth estimation method and apparatus, device, and storage medium
CN110427917B (zh) Method and apparatus for detecting key points
Schulter et al. Learning to look around objects for top-view representations of outdoor scenes
WO2019161813A1 (zh) Three-dimensional reconstruction method for dynamic scenes, and apparatus, ***, server, and medium
JP2021114296A (ja) Near-infrared image generation method, near-infrared image generation apparatus, generation network training method, generation network training apparatus, electronic device, storage medium, and computer program
JP6471448B2 (ja) Noise identification method and noise identification apparatus for parallax depth images
WO2022257487A1 (zh) Training method and apparatus for a depth estimation model, electronic device, and storage medium
CN108230384B (zh) Image depth computation method and apparatus, storage medium, and electronic device
WO2019079766A1 (en) SYSTEM, APPARATUS, METHOD FOR DATA PROCESSING, AND INFORMATION MEDIUM
WO2019169884A1 (zh) Image saliency detection method and apparatus based on depth information
US11676294B2 (en) Passive and single-viewpoint 3D imaging system
CN115861601B (zh) Multi-sensor fusion perception method and apparatus
CN115330940B (zh) 3D reconstruction method, apparatus, device, and medium
WO2022237821A1 (zh) Method, device, and storage medium for generating a traffic marking map
WO2023216460A1 (zh) Multi-view 3D target detection method based on bird's-eye view, memory, and ***
US20230401826A1 (en) Perception network and data processing method
Liu et al. Microscopic 3D reconstruction based on point cloud data generated using defocused images
CN108352061B (zh) Apparatus and method for generating data representing a pixel beam
CN117745944A (zh) Pre-trained model determination method, apparatus, device, and storage medium
EP4086853A2 (en) Method and apparatus for generating object model, electronic device and storage medium
US20230115765A1 (en) Method and apparatus of transferring image, and method and apparatus of training image transfer model
CN113656629B (zh) Visual positioning method and apparatus, electronic device, and storage medium
CN113129352B (zh) Sparse light field reconstruction method and apparatus

Legal Events

Date Code Title Description
ENP Entry into the national phase: Ref document number: 2022561443; Country of ref document: JP; Kind code of ref document: A
121 Ep: the epo has been informed by wipo that ep was designated in this application: Ref document number: 22803740; Country of ref document: EP; Kind code of ref document: A1
ENP Entry into the national phase: Ref document number: 20237008339; Country of ref document: KR; Kind code of ref document: A
NENP Non-entry into the national phase: Ref country code: DE
122 Ep: pct application non-entry in european phase: Ref document number: 22803740; Country of ref document: EP; Kind code of ref document: A1