CN113763307B - Sample data acquisition method and device - Google Patents

Sample data acquisition method and device

Info

Publication number
CN113763307B
CN113763307B (application CN202010801961.8A)
Authority
CN
China
Prior art keywords
target object
point cloud
cloud data
attribute information
sample data
Prior art date
Legal status
Active
Application number
CN202010801961.8A
Other languages
Chinese (zh)
Other versions
CN113763307A (en)
Inventor
李梅
刘伟峰
Current Assignee
Beijing Jingdong Qianshi Technology Co Ltd
Original Assignee
Beijing Jingdong Qianshi Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Jingdong Qianshi Technology Co Ltd
Priority to CN202010801961.8A
Publication of CN113763307A
Application granted
Publication of CN113763307B


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/0002 - Inspection of images, e.g. flaw detection
    • G06T7/0004 - Industrial image inspection
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06Q - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00 - Administration; Management
    • G06Q10/08 - Logistics, e.g. warehousing, loading or distribution; Inventory or stock management
    • G06Q10/087 - Inventory or stock management, e.g. order filling, procurement or balancing against orders
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/60 - Analysis of geometric attributes
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/90 - Determination of colour characteristics
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10028 - Range image; Depth image; 3D point clouds
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20081 - Training; Learning

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Quality & Reliability (AREA)
  • Economics (AREA)
  • Development Economics (AREA)
  • Finance (AREA)
  • Accounting & Taxation (AREA)
  • Geometry (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Human Resources & Organizations (AREA)
  • Marketing (AREA)
  • Operations Research (AREA)
  • Strategic Management (AREA)
  • Tourism & Hospitality (AREA)
  • General Business, Economics & Management (AREA)
  • Image Analysis (AREA)

Abstract

According to the sample data acquisition method and device, when sample data are acquired, attribute information of a target object and point cloud data corresponding to the target object are acquired. Because the attribute information and the point cloud data are relatively realistic data of the target object, the depth image point cloud data obtained by fusing them can describe the target object accurately to a certain extent. The accurate depth image point cloud data are input into an environment model corresponding to a preset environment to obtain image sample data of the target object in the preset environment. The acquisition process requires no manual operation by a user, so the acquisition efficiency of the sample data is improved while the accuracy of the sample data is ensured.

Description

Sample data acquisition method and device
Technical Field
The present application relates to the field of computer vision, and in particular, to a method and apparatus for obtaining sample data.
Background
In an intelligent warehousing system, automatic picking is a vital link in the overall warehousing operation system. Before an intelligent picking robot can pick an article in the intelligent warehouse, a three-dimensional model of the article in different environments needs to be acquired first; the article is then identified in the warehouse based on the three-dimensional model and picked.
To acquire a three-dimensional model of an article in different environments, a large amount of sample data of the article in those environments needs to be collected first, and the three-dimensional model is then constructed based on that sample data. In the prior art, acquiring a large amount of sample data of an article in different environments requires a user to manually capture a large number of images in the different environments and manually add label information to those images.
It can be seen that with the existing sample data acquisition approach, the efficiency of sample data acquisition is low.
Disclosure of Invention
The embodiments of the present application provide a method and an apparatus for acquiring sample data, which improve the efficiency of sample data acquisition.
In a first aspect, an embodiment of the present application provides a method for acquiring sample data, where the method for acquiring sample data may include:
acquiring attribute information of a target object and point cloud data corresponding to the target object; wherein the attribute information includes texture, size, and weight.
Fusion processing is performed on the attribute information of the target object and the point cloud data corresponding to the target object to obtain depth image point cloud data corresponding to the target object.
The depth image point cloud data are input into an environment model corresponding to a preset environment to obtain image sample data of the target object in the preset environment; wherein the image sample data are marked with the attribute information of the target object.
In one possible implementation, the preset environment includes at least one of an occlusion environment, a collision environment, or a shadow environment.
In a possible implementation manner, obtaining the point cloud data corresponding to the target object may include:
Point cloud data within the field of view of a three-dimensional camera are acquired, wherein the target object is included in the field of view. The point cloud data corresponding to the target object are then extracted from the point cloud data within the field of view of the three-dimensional camera.
In one possible implementation manner, the method for acquiring sample data may further include:
Based on the image sample data, training and generating a deep learning model corresponding to the target object, wherein the deep learning model is used for identifying the target object.
In one possible implementation manner, the attribute information further includes a color, and the method for acquiring sample data may further include:
The color of the target object is acquired by a two-dimensional camera, wherein the colors include the colors of the respective planes of the target object.
In a possible implementation manner, the extracting the point cloud data corresponding to the target object from the point cloud data in the three-dimensional camera view may include:
A point cloud segmentation algorithm is adopted to extract the point cloud data corresponding to the target object from the point cloud data within the field of view of the three-dimensional camera.
In a second aspect, an embodiment of the present application further provides an apparatus for acquiring sample data, where the apparatus for acquiring sample data may include:
the acquisition module is used for acquiring attribute information of a target object and point cloud data corresponding to the target object; wherein the attribute information includes texture, size, and weight.
The processing module is used for performing fusion processing on the attribute information of the target object and the point cloud data corresponding to the target object to obtain depth image point cloud data corresponding to the target object, and for inputting the depth image point cloud data into an environment model corresponding to a preset environment to obtain image sample data of the target object in the preset environment; wherein the image sample data are marked with the attribute information of the target object.
In one possible implementation, the preset environment includes at least one of an occlusion environment, a collision environment, or a shadow environment.
In one possible implementation manner, the acquisition module is specifically configured to acquire point cloud data within the field of view of a three-dimensional camera, wherein the field of view includes the target object, and to extract the point cloud data corresponding to the target object from the point cloud data within the field of view.
In one possible implementation manner, the sample data obtaining device may further include a generating module.
The generation module is specifically configured to train and generate a deep learning model corresponding to the target object based on the image sample data, where the deep learning model is used to identify the target object.
In a possible implementation, the attribute information further includes a color.
The acquisition module is also used for acquiring the color of the target object acquired by the two-dimensional camera; wherein the colors include colors of respective planes of the target object.
In a possible implementation manner, the obtaining module is specifically configured to extract, by using a point cloud segmentation algorithm, point cloud data corresponding to the target object from the point cloud data in the view of the three-dimensional camera.
In a third aspect, an embodiment of the present application further provides a terminal, where the terminal may include a memory and a processor; wherein,
The memory is used for storing a computer program.
The processor is configured to read the computer program stored in the memory and, according to the computer program, execute the sample data acquisition method in any one of the possible implementations of the first aspect.
In a fourth aspect, an embodiment of the present application further provides a computer-readable storage medium in which computer-executable instructions are stored; when a processor executes the computer-executable instructions, the sample data acquisition method in any one of the possible implementations of the first aspect is implemented.
Therefore, when sample data are acquired, the attribute information of the target object and the point cloud data corresponding to the target object are acquired. Because the attribute information and the point cloud data are relatively realistic data of the target object, the depth image point cloud data obtained by fusing them can describe the target object accurately to a certain extent. The accurate depth image point cloud data are input into an environment model corresponding to a preset environment to obtain image sample data of the target object in the preset environment. Since the acquisition process requires no manual operation by a user, the acquisition efficiency of the sample data is improved while the accuracy of the sample data is ensured.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the disclosure and together with the description, serve to explain the principles of the disclosure.
Fig. 1 is a schematic diagram of an application scenario provided in an embodiment of the present application;
Fig. 2 is a flow chart of a method for acquiring sample data according to an embodiment of the present application;
FIG. 3 is a schematic diagram of an acquired image of one plane according to an embodiment of the present application;
FIG. 4 is a schematic diagram of an acquired image of another plane according to an embodiment of the present application;
FIG. 5 is a schematic diagram of an acquired image of another plane according to an embodiment of the present application;
FIG. 6 is a schematic diagram of an acquired image of another plane according to an embodiment of the present application;
FIG. 7 is a schematic diagram of depth image point cloud data of one plane according to an embodiment of the present application;
FIG. 8 is a schematic diagram of depth image point cloud data of another plane according to an embodiment of the present application;
FIG. 9 is a schematic diagram of depth image point cloud data of another plane according to an embodiment of the present application;
FIG. 10 is a schematic diagram of depth image point cloud data of another plane according to an embodiment of the present application;
FIG. 11 is a schematic diagram of depth image point cloud data according to an embodiment of the present application;
Fig. 12 is a schematic structural diagram of a device for acquiring sample data according to an embodiment of the present application;
fig. 13 is a schematic structural diagram of a terminal according to an embodiment of the present invention.
Specific embodiments of the present disclosure have been shown by way of the above drawings and will be described in more detail below. These drawings and the written description are not intended to limit the scope of the disclosed concepts in any way, but rather to illustrate the disclosed concepts to those skilled in the art by reference to specific embodiments.
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, the same numbers in different drawings refer to the same or similar elements, unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present disclosure; rather, they are merely examples of apparatus and methods consistent with some aspects of the present disclosure as detailed in the appended claims.
In the embodiments of the present application, "at least one" means one or more, and "a plurality" means two or more. "And/or" describes an association between associated objects and indicates that three relationships may exist; for example, "A and/or B" may mean: A alone, both A and B, or B alone, where A and B may be singular or plural. In the text of the present application, the character "/" generally indicates an "or" relationship between the associated objects.
The sample data acquisition method provided by the embodiments of the present application can be applied to the scenario of automatically picking articles in an intelligent warehouse. When an intelligent picking robot picks an article in the intelligent warehouse, a three-dimensional model of the article in different environments needs to be acquired first; the article is then identified in the warehouse based on the three-dimensional model and picked. Before the three-dimensional model can be acquired, the sample data needed to construct it must be collected. In the prior art, acquiring a large amount of sample data of an article in different environments requires a user to manually capture a large number of images in the different environments and manually add label information to those images. With this existing acquisition approach, however, the efficiency of sample data acquisition is low.
To improve acquisition efficiency, automatic acquisition of sample data may be attempted. One scheme that readily comes to mind is the following: CAD models of objects at different viewpoints are built first, and the CAD models are then input into physics-engine simulations of environments such as rotation, collision, and stacking to synthesize sample data, thereby achieving automatic acquisition. However, objects in a real environment are affected by lighting and occlusion, while the CAD models are built without these effects. When such CAD models are fed into the physics-engine simulations to synthesize sample data, the synthesized data differ considerably from real sample data, so the accuracy of the acquired sample data is low.
Based on this, in order to improve acquisition efficiency while ensuring the accuracy of the sample data, FIG. 1 shows a schematic diagram of an application scenario provided in an embodiment of the present application. When an intelligent robot automatically picks article A in the intelligent warehouse, a data acquisition device may first acquire sample data of article A, train and generate a deep learning model corresponding to article A based on the acquired sample data, and send the generated model to the intelligent robot, so that the robot identifies article A according to the model and picks it. Alternatively, the data acquisition device may be responsible only for acquiring the attribute information and point cloud data of article A; after acquiring them, it sends the attribute information and point cloud data to a terminal, such as the intelligent robot, and the robot then trains and generates the deep learning model corresponding to article A based on the acquired sample data and identifies article A according to the model, thereby picking article A.
Based on the scenario shown in FIG. 1, an embodiment of the present application provides a sample data acquisition method. When sample data are acquired, attribute information of a target object and point cloud data corresponding to the target object may first be acquired, where the attribute information includes texture, size, and weight. Fusion processing is performed on the attribute information and the point cloud data to obtain depth image point cloud data corresponding to the target object, and the depth image point cloud data are input into an environment model corresponding to a preset environment to obtain image sample data of the target object in the preset environment, where the image sample data are marked with the attribute information of the target object.
Thus, according to the sample data acquisition method provided by the embodiment of the present application, when sample data are acquired, the attribute information of the target object and the point cloud data corresponding to the target object are acquired. Because the attribute information and the point cloud data are relatively realistic data of the target object, the depth image point cloud data obtained by fusing them can describe the target object accurately to a certain extent. The accurate depth image point cloud data are input into an environment model corresponding to a preset environment to obtain image sample data of the target object in that environment. Since the acquisition process requires no manual operation by a user, the acquisition efficiency of the sample data is improved while the accuracy of the sample data is ensured.
The following embodiments may be combined with each other, and the same or similar concepts or processes may not be described in detail in some embodiments. Embodiments of the present application will be described below with reference to the accompanying drawings.
Fig. 2 is a flow chart of a method for acquiring sample data according to an embodiment of the present application. The method may be performed by a software and/or hardware device; for example, the hardware device may be a data acquisition device or a terminal, such as the intelligent robot described above. Referring to fig. 2, the method for acquiring sample data may include:
S201, acquiring attribute information of a target object and point cloud data corresponding to the target object.
Wherein the attribute information includes texture, size, and weight.
For example, the texture of the target object may be acquired jointly by a two-dimensional camera and a three-dimensional camera; the size of the target object may be acquired by the three-dimensional camera; and the weight of the target object may be acquired by a weighing device. In this way, the attribute information of the target object is obtained. It may be appreciated that, in the embodiment of the present application, the two-dimensional camera, the three-dimensional camera, and the weighing device used for acquiring the attribute information of the target object may be disposed on the data acquisition device, so that the attribute information of the target object is collected by the data acquisition device.
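To make the data flow concrete, the following minimal Python sketch shows one way the collected attribute information might be recorded. The patent does not specify any schema, so the class name, field names, units, and example values here are illustrative assumptions only.

```python
from dataclasses import dataclass, field
from typing import Dict, Tuple

@dataclass
class ObjectAttributes:
    """Hypothetical record for a target object's attribute information."""
    texture: str                      # path to the captured texture image (assumed)
    size: Tuple[float, float, float]  # length, width, height in mm, from the 3D camera (assumed units)
    weight: float                     # grams, from the weighing device (assumed units)
    plane_colors: Dict[str, Tuple[int, int, int]] = field(default_factory=dict)  # per-plane RGB

# Example: a cuboid box as measured by the data acquisition device (values invented)
attrs = ObjectAttributes(texture="box_texture.png", size=(300.0, 200.0, 150.0), weight=850.0)
```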
In addition to texture, size, and weight, the attribute information may include other information, such as color and an article serial number. When the attribute information includes color, the color of the target object can be captured by the two-dimensional camera; the colors include the colors of the respective planes of the target object, so an image of each plane needs to be captured. Taking a cuboid box as an example, images of its planes can first be captured by the two-dimensional camera. FIGS. 3-6 are schematic diagrams of captured images of four planes of the cuboid box provided in the embodiment of the present application. After the four plane images are captured, the color of each plane can be extracted from its image, so that the colors of the respective planes of the target object are acquired by the two-dimensional camera.
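As a rough illustration of extracting a plane's color from its captured image, the sketch below simply averages the pixels of the image. This is a simplifying assumption (the patent does not state how the color is extracted), and the OpenCV dependency and file names are hypothetical.

```python
import cv2  # OpenCV, assumed available
import numpy as np

def dominant_plane_color(image_path: str) -> tuple:
    """Return the mean BGR color of a plane image as a rough color estimate."""
    img = cv2.imread(image_path)  # H x W x 3, BGR
    if img is None:
        raise FileNotFoundError(image_path)
    # Average over all pixels; a real pipeline would likely mask out the background first.
    mean_bgr = img.reshape(-1, 3).mean(axis=0)
    return tuple(int(c) for c in mean_bgr)

# One color per captured plane image (file names are hypothetical)
plane_colors = {f"plane_{i}": dominant_plane_color(f"plane_{i}.png") for i in range(4)}
```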
When the point cloud data corresponding to the target object are acquired, they can be collected by the three-dimensional camera. For example, the point cloud data within the field of view of the three-dimensional camera may be acquired first. Since the field of view may include other background information in addition to the target object, the point cloud data corresponding to the target object need to be extracted from the point cloud data within the field of view, thereby acquiring the point cloud data corresponding to the target object.
For example, a point cloud segmentation algorithm may be adopted to extract the point cloud data corresponding to the target object from the point cloud data within the field of view of the three-dimensional camera. It may be understood that, besides a point cloud segmentation algorithm, a 2D template matching algorithm may also be used for the extraction; the choice may be made according to actual needs.
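The patent leaves the segmentation algorithm open. As one plausible stand-in, the following sketch uses Open3D (a recent version of the library is assumed; the patent names no library) to remove the dominant support plane with RANSAC and keep the largest remaining cluster as the target object.

```python
import numpy as np
import open3d as o3d  # assumed dependency

def extract_object_cloud(scene_path: str) -> o3d.geometry.PointCloud:
    """Extract the target object's points from a scene captured by the 3D camera."""
    scene = o3d.io.read_point_cloud(scene_path)
    # Remove the table/shelf plane with RANSAC plane segmentation.
    _, plane_idx = scene.segment_plane(distance_threshold=0.005, ransac_n=3, num_iterations=1000)
    remainder = scene.select_by_index(plane_idx, invert=True)
    # Keep the largest Euclidean cluster as the object; everything else is background.
    labels = np.asarray(remainder.cluster_dbscan(eps=0.01, min_points=20))
    if labels.size == 0 or labels.max() < 0:
        return remainder
    largest = int(np.argmax(np.bincount(labels[labels >= 0])))
    return remainder.select_by_index(np.where(labels == largest)[0])
```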
After the attribute information of the target object and the corresponding point cloud data are acquired, the point cloud data alone cannot accurately describe the target object, because the point cloud data do not include the attribute information. Therefore, fusion processing needs to be performed on the attribute information of the target object and the point cloud data corresponding to the target object to obtain depth image point cloud data corresponding to the target object; that is, the following S202 is executed:
S202, fusion processing is performed on the attribute information of the target object and the point cloud data corresponding to the target object to obtain depth image point cloud data corresponding to the target object.
Here, the depth image point cloud data may be understood as RGB-D data.
When the attribute information of the target object and the point cloud data corresponding to the target object are fused, the two can first be registered; the registered attribute information and point cloud data are then fused to obtain the depth image point cloud data corresponding to the target object.
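A minimal sketch of the fusion step follows, under the assumption that registration reduces to a known pinhole projection between already-aligned 2D and 3D cameras (the patent does not spell out the registration method). Each 3D point is projected into the color image and picks up the color at that pixel, yielding N x 6 RGB-D-style data.

```python
import numpy as np

def fuse_color_with_points(points: np.ndarray, color_img: np.ndarray, K: np.ndarray) -> np.ndarray:
    """Attach a color to every 3D point by projecting it into the registered 2D image.

    points: N x 3 array in the camera frame (z > 0 assumed); K: 3 x 3 camera intrinsics.
    Returns an N x 6 array of [x, y, z, c1, c2, c3].
    """
    uv = (points @ K.T) / points[:, 2:3]  # pinhole projection to pixel coordinates
    u = np.clip(uv[:, 0].astype(int), 0, color_img.shape[1] - 1)
    v = np.clip(uv[:, 1].astype(int), 0, color_img.shape[0] - 1)
    colors = color_img[v, u]              # sample the image at each projected pixel
    return np.hstack([points, colors.astype(np.float64)])
```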
The attribute information of the cuboid object shown in FIGS. 3-6 and its point cloud data may be fused to obtain depth image point cloud data corresponding to the cuboid object, including the depth image point cloud data of each of its six planes. Taking four planes as an example, FIGS. 7-10 are schematic diagrams of the depth image point cloud data of four planes provided in the embodiment of the present application. As can be seen from FIGS. 7-10, because the attribute information and the point cloud data are relatively realistic data of the cuboid object, the depth image point cloud data obtained by fusing them can describe the cuboid object accurately to a certain extent and correspond to the cuboid object in a real scene.
S203, inputting the depth image point cloud data into an environment model corresponding to a preset environment to obtain image sample data of the target object in the preset environment.
The image sample data are marked with the attribute information of the target object.
By way of example, the preset environment includes at least one of an occlusion environment, a collision environment, or a shadow environment. It should be understood that the embodiment of the present application is merely illustrated with a preset environment of this kind, and the embodiment of the present application is not limited thereto.
When the accurate depth image point cloud data are input into the environment model corresponding to the preset environment to obtain image sample data of the target object in that environment, the preset environment simulates an environment in a real scene, so the obtained image sample data are sample data of the target object in a real scene. Because the acquisition process requires no manual operation by a user, the acquisition efficiency of the sample data is improved while the accuracy of the sample data is ensured.
Continuing the example of FIGS. 7-10, when the accurate depth image point cloud data of the cuboid object are input into the environment model corresponding to the preset environment to obtain image sample data of the cuboid object in that environment, the preset environment simulates an environment in a real scene, so the obtained image sample data are sample data of the cuboid object in a real scene, as shown in FIG. 11, which is a schematic diagram of the resulting depth image point cloud data provided by the embodiment of the present application.
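The environment model itself is not disclosed. Purely as an illustrative stand-in, the sketch below perturbs fused RGB-D points to mimic an occlusion (dropping a contiguous region) and a shadow (darkening low points); the parameters and cutting strategy are invented. Since the object's attribute annotations travel with the sample, the output remains labeled, which is the property the method relies on.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_environment(rgbd: np.ndarray, occlusion_ratio: float = 0.2,
                         shadow_factor: float = 0.6) -> np.ndarray:
    """Crude occlusion/shadow simulation on N x 6 [x, y, z, r, g, b] float data (illustrative only)."""
    pts, rgb = rgbd[:, :3], rgbd[:, 3:].copy()
    # Occlusion: discard all points on one side of a random vertical cutting plane.
    axis = int(rng.integers(0, 2))                 # cut along x or y
    cut = np.quantile(pts[:, axis], occlusion_ratio)
    keep = pts[:, axis] > cut
    # Shadow: darken the colors of points below the median height.
    shadowed = pts[:, 2] < np.median(pts[:, 2])
    rgb[shadowed] *= shadow_factor
    return np.hstack([pts, rgb])[keep]

sample = simulate_environment(np.random.rand(2048, 6))  # placeholder input
```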
Thus, according to the sample data acquisition method provided by the embodiment of the present application, the attribute information of the target object and the corresponding point cloud data are acquired when sample data are collected. Because the attribute information and the point cloud data are relatively realistic data of the target object, the depth image point cloud data obtained by fusing them can describe the target object accurately to a certain extent. The accurate depth image point cloud data are input into the environment model corresponding to the preset environment to obtain the image sample data of the target object in the preset environment. Since the acquisition process requires no manual operation by a user, the acquisition efficiency of the sample data is improved while the accuracy of the sample data is ensured.
Based on the embodiment shown in fig. 1, after the image sample data corresponding to the target object are obtained, an initial deep learning model may be trained on these image sample data to generate a deep learning model corresponding to the target object. After the deep learning model is generated, it may be sent to the intelligent robot, so that the robot can identify the target object in the intelligent warehouse based on the model and pick it out.
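To sketch the training step, the snippet below fits a small classifier on placeholder tensors standing in for the generated image sample data. The architecture, tensor shapes, and label encoding are all assumptions, since the patent only requires that some deep learning model be trained to identify the target object.

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

# Placeholder data: X holds flattened RGB-D samples, y holds object-identity labels (invented shapes).
X = torch.randn(256, 6 * 1024)
y = torch.randint(0, 10, (256,))

model = nn.Sequential(nn.Linear(6 * 1024, 512), nn.ReLU(), nn.Linear(512, 10))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(5):
    for xb, yb in DataLoader(TensorDataset(X, y), batch_size=32, shuffle=True):
        optimizer.zero_grad()
        loss = loss_fn(model(xb), yb)  # classify each sample as one of the known objects
        loss.backward()
        optimizer.step()
```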
Fig. 12 is a schematic structural diagram of a sample data acquisition apparatus 120 according to an embodiment of the present application. Referring to fig. 12, the sample data acquisition apparatus 120 may include:
An obtaining module 1201, configured to obtain attribute information of a target object and point cloud data corresponding to the target object; wherein the attribute information includes texture, size, and weight.
The processing module 1202 is configured to perform fusion processing on the attribute information of the target object and the point cloud data corresponding to the target object to obtain depth image point cloud data corresponding to the target object, and to input the depth image point cloud data into an environment model corresponding to a preset environment to obtain image sample data of the target object in the preset environment; the image sample data are marked with the attribute information of the target object.
Optionally, the preset environment includes at least one of an occlusion environment, a collision environment, or a shadow environment.
Optionally, the acquisition module 1201 is specifically configured to acquire point cloud data within the field of view of the three-dimensional camera, wherein the field of view includes the target object, and to extract the point cloud data corresponding to the target object from the point cloud data within the field of view.
Optionally, the sample data obtaining device 120 further includes a generating module 1203.
The generating module 1203 is specifically configured to train and generate a deep learning model corresponding to the target object based on the image sample data, where the deep learning model is used to identify the target object.
Optionally, the attribute information further includes color; the acquisition module 1201 is further configured to acquire a color of a target object acquired by the two-dimensional camera; wherein the colors include colors of respective planes of the target object.
Optionally, the obtaining module 1201 is specifically configured to extract, by using a point cloud segmentation algorithm, point cloud data corresponding to the target object from the point cloud data in the view of the three-dimensional camera.
The sample data acquisition apparatus 120 provided in the embodiment of the present application may execute the technical solution of the sample data acquisition method in any of the foregoing embodiments. Its implementation principle and beneficial effects are similar to those of the sample data acquisition method and are not repeated here.
Fig. 13 is a schematic structural diagram of a terminal 130 according to an embodiment of the present invention. Referring to fig. 13, the terminal 130 may include a processor 1301 and a memory 1302; wherein,
The memory 1302 is used for storing a computer program.
The processor 1301 is configured to read a computer program stored in the memory 1302, and execute the technical solution of the sample data acquiring method in any one of the foregoing embodiments according to the computer program in the memory 1302.
Alternatively, the memory 1302 may be separate from or integrated with the processor 1301. When the memory 1302 is a device separate from the processor 1301, the terminal may further include a bus connecting the memory 1302 and the processor 1301.
Optionally, the present embodiment further includes: a communication interface, which may be connected to the processor 1301 by a bus. Processor 1301 may control the communication interface to implement the functions of receiving and transmitting of the above-described terminal.
The terminal 130 in the embodiment of the present invention may execute the technical solution of the sample data acquisition method in any of the foregoing embodiments. Its implementation principle and beneficial effects are similar to those of the sample data acquisition method and are not repeated here.
The embodiment of the invention also provides a computer-readable storage medium in which computer-executable instructions are stored. When a processor executes the computer-executable instructions, the technical solution of the sample data acquisition method in any of the above embodiments is implemented; its implementation principle and beneficial effects are similar to those of the method and are not repeated here.
In the several embodiments provided by the present invention, it should be understood that the disclosed apparatus and method may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of the units is merely a logical function division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection illustrated or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment. In addition, each functional unit in the embodiments of the present invention may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in hardware plus software functional units.
The integrated modules, when implemented in the form of software functional modules, may be stored in a computer-readable storage medium. The software functional modules are stored in a storage medium and include several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to perform some of the steps of the methods of the embodiments of the present invention.
It should be understood that the above processor may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor. The steps of the method disclosed in connection with the present invention may be embodied directly in a hardware processor for execution, or executed by a combination of hardware and software modules in the processor.
The memory may comprise a high-speed RAM and may further comprise a non-volatile memory (NVM), such as at least one magnetic disk memory; it may also be a USB flash drive, a removable hard disk, a read-only memory, a magnetic disk, an optical disk, or the like.
The bus may be an Industry Standard Architecture (ISA) bus, a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. Buses may be divided into address buses, data buses, control buses, and so on. For ease of illustration, the buses in the drawings of the present invention are not limited to only one bus or one type of bus.
The computer-readable storage medium described above may be implemented by any type of volatile or non-volatile memory device, or a combination thereof, such as a static random access memory (SRAM), an electrically erasable programmable read-only memory (EEPROM), an erasable programmable read-only memory (EPROM), a programmable read-only memory (PROM), a read-only memory (ROM), a magnetic memory, a flash memory, a magnetic disk, or an optical disk. A storage medium may be any available medium that can be accessed by a general-purpose or special-purpose computer.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solutions of the present invention, not to limit them. Although the invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some or all of their technical features may be replaced by equivalents; such modifications and substitutions do not depart from the spirit of the invention.

Claims (14)

1. A method for obtaining sample data, comprising:
Acquiring attribute information of a target object and point cloud data corresponding to the target object; wherein the attribute information includes texture, size, and weight;
Performing fusion processing on the attribute information of the target object and the point cloud data corresponding to the target object to obtain depth image point cloud data corresponding to the target object;
Inputting the depth image point cloud data into an environment model corresponding to a preset environment to obtain image sample data of the target object in the preset environment; wherein the image sample data are marked with the attribute information of the target object;
The fusing processing is performed on the attribute information of the target object and the point cloud data corresponding to the target object to obtain depth image point cloud data corresponding to the target object, including:
Registering the attribute information of the target object and the point cloud data corresponding to the target object, and carrying out fusion processing on the registered attribute information of the target object and the point cloud data corresponding to the target object to obtain depth image point cloud data corresponding to the target object;
the preset environment includes a shadow environment.
2. The method according to claim 1, wherein
The preset environment further includes at least one of an occlusion environment or a collision environment.
3. The method of claim 1, wherein obtaining point cloud data corresponding to the target object comprises:
Acquiring point cloud data under the view of a three-dimensional camera; wherein the three-dimensional camera field of view includes the target object;
And extracting point cloud data corresponding to the target object from the point cloud data in the view of the three-dimensional camera.
4. The method according to claim 1, wherein the method further comprises:
Based on the image sample data, training and generating a deep learning model corresponding to the target object, wherein the deep learning model is used for identifying the target object.
5. The method of claim 1, wherein the attribute information further comprises a color, the method further comprising:
acquiring the color of a target object acquired by a two-dimensional camera; wherein the colors include colors of respective planes of the target object.
6. The method of claim 3, wherein the extracting the point cloud data corresponding to the target object from the point cloud data in the three-dimensional camera field of view comprises:
and extracting point cloud data corresponding to the target object from the point cloud data in the view field of the three-dimensional camera by adopting a point cloud segmentation algorithm.
7. An acquisition apparatus for sample data, comprising:
The acquisition module is used for acquiring attribute information of a target object and point cloud data corresponding to the target object; wherein the attribute information includes texture, size, and weight;
The processing module is used for performing fusion processing on the attribute information of the target object and the point cloud data corresponding to the target object to obtain depth image point cloud data corresponding to the target object, and for inputting the depth image point cloud data into an environment model corresponding to a preset environment to obtain image sample data of the target object in the preset environment; wherein the image sample data are marked with the attribute information of the target object;
The processing module is specifically configured to register attribute information of the target object and point cloud data corresponding to the target object, and perform fusion processing on the registered attribute information of the target object and the point cloud data corresponding to the target object to obtain depth image point cloud data corresponding to the target object;
the preset environment includes a shadow environment.
8. The apparatus according to claim 7, wherein
The preset environment further includes at least one of an occlusion environment or a collision environment.
9. The apparatus according to claim 8, wherein
The acquisition module is specifically used for acquiring point cloud data under the view field of the three-dimensional camera; wherein the three-dimensional camera field of view includes the target object; and extracting point cloud data corresponding to the target object from the point cloud data in the view of the three-dimensional camera.
10. The apparatus of claim 7, wherein the apparatus further comprises a generation module;
the generation module is specifically configured to train and generate a deep learning model corresponding to the target object based on the image sample data, where the deep learning model is used to identify the target object.
11. The apparatus of claim 7, wherein the attribute information further comprises color;
The acquisition module is also used for acquiring the color of the target object acquired by the two-dimensional camera; wherein the colors include colors of respective planes of the target object.
12. The apparatus according to claim 9, wherein
The acquisition module is specifically configured to extract point cloud data corresponding to the target object from the point cloud data in the view of the three-dimensional camera by using a point cloud segmentation algorithm.
13. A terminal comprising a memory and a processor; wherein,
The memory is used for storing a computer program;
The processor is configured to read the computer program stored in the memory, and execute the sample data acquisition method according to any one of claims 1 to 6 according to the computer program stored in the memory.
14. A computer readable storage medium, wherein computer executable instructions are stored in the computer readable storage medium, which when executed by a processor, implement the method for obtaining sample data according to any of the preceding claims 1-6.
CN202010801961.8A 2020-08-11 2020-08-11 Sample data acquisition method and device Active CN113763307B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010801961.8A CN113763307B (en) 2020-08-11 2020-08-11 Sample data acquisition method and device


Publications (2)

Publication Number Publication Date
CN113763307A CN113763307A (en) 2021-12-07
CN113763307B (en) 2024-06-18

Family

ID=78785674

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010801961.8A Active CN113763307B (en) 2020-08-11 2020-08-11 Sample data acquisition method and device

Country Status (1)

Country Link
CN (1) CN113763307B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114898354A (en) * 2022-03-24 2022-08-12 中德(珠海)人工智能研究院有限公司 Measuring method and device based on three-dimensional model, server and readable storage medium
CN117689980B (en) * 2024-02-04 2024-05-24 青岛海尔科技有限公司 Method for constructing environment recognition model, method, device and equipment for recognizing environment

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111401133A (en) * 2020-02-19 2020-07-10 北京三快在线科技有限公司 Target data augmentation method, device, electronic device and readable storage medium

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107544095B (en) * 2017-07-28 2019-03-08 河南工程学院 A kind of method that Three Dimensional Ground laser point cloud is merged with ground penetrating radar image
CN108460414B (en) * 2018-02-27 2019-09-17 北京三快在线科技有限公司 Generation method, device and the electronic equipment of training sample image
US10832469B2 (en) * 2018-08-06 2020-11-10 Disney Enterprises, Inc. Optimizing images for three-dimensional model construction
CN109471128B (en) * 2018-08-30 2022-11-22 福瑞泰克智能***有限公司 Positive sample manufacturing method and device
CN109345510A (en) * 2018-09-07 2019-02-15 百度在线网络技术(北京)有限公司 Object detecting method, device, equipment, storage medium and vehicle
CN110163904B (en) * 2018-09-11 2022-04-22 腾讯大地通途(北京)科技有限公司 Object labeling method, movement control method, device, equipment and storage medium
US11391844B2 (en) * 2018-12-19 2022-07-19 Fca Us Llc Detection and tracking of road-side pole-shaped static objects from LIDAR point cloud data
CN110120091B (en) * 2019-04-28 2023-06-16 深圳供电局有限公司 Method and device for manufacturing electric power inspection image sample and computer equipment
CN110598743A (en) * 2019-08-12 2019-12-20 北京三快在线科技有限公司 Target object labeling method and device
CN111161398B (en) * 2019-12-06 2023-04-21 苏州智加科技有限公司 Image generation method, device, equipment and storage medium
CN111047693A (en) * 2019-12-27 2020-04-21 浪潮(北京)电子信息产业有限公司 Image training data set generation method, device, equipment and medium

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111401133A (en) * 2020-02-19 2020-07-10 北京三快在线科技有限公司 Target data augmentation method, device, electronic device and readable storage medium

Also Published As

Publication number Publication date
CN113763307A (en) 2021-12-07

Similar Documents

Publication Publication Date Title
CN107223269B (en) Three-dimensional scene positioning method and device
CN104268498B (en) A kind of recognition methods of Quick Response Code and terminal
CN110246163B (en) Image processing method, image processing device, image processing apparatus, and computer storage medium
CN111563923A (en) Method for obtaining dense depth map and related device
EP3660703A1 (en) Method, apparatus, and system for identifying device, storage medium, processor, and terminal
CN112446919A (en) Object pose estimation method and device, electronic equipment and computer storage medium
CN113763307B (en) Sample data acquisition method and device
CN112528831A (en) Multi-target attitude estimation method, multi-target attitude estimation device and terminal equipment
CN108230395A (en) Stereoscopic image is calibrated and image processing method, device, storage medium and electronic equipment
CN113807451B (en) Panoramic image feature point matching model training method and device and server
CN111191582B (en) Three-dimensional target detection method, detection device, terminal device and computer readable storage medium
CN108230384A (en) Picture depth computational methods, device, storage medium and electronic equipment
CN115205383A (en) Camera pose determination method and device, electronic equipment and storage medium
CN112802081A (en) Depth detection method and device, electronic equipment and storage medium
CN111161348B (en) Object pose estimation method, device and equipment based on monocular camera
JP6785181B2 (en) Object recognition device, object recognition system, and object recognition method
CN111563965A (en) Method and device for generating panorama by optimizing depth map
CN112258647A (en) Map reconstruction method and device, computer readable medium and electronic device
CN112819963A (en) Batch differential modeling method for tree branch model and related equipment
CN114708230B (en) Vehicle frame quality detection method, device, equipment and medium based on image analysis
CN111832494B (en) Information storage method and device
CN113160414A (en) Automatic identification method and device for remaining amount of goods, electronic equipment and computer readable medium
Mitra et al. Seethrough: Finding objects in heavily occluded indoor scene images
CN113781653A (en) Object model generation method and device, electronic equipment and storage medium
CN113536868A (en) Circuit board fault identification method and related equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant