CN111325984B - Sample data acquisition method and device and electronic equipment

Info

Publication number
CN111325984B
CN111325984B
Authority
CN
China
Prior art keywords
image
target object
background
augmented reality
model
Prior art date
Legal status
Active
Application number
CN202010192073.0A
Other languages
Chinese (zh)
Other versions
CN111325984A (en)
Inventor
尚子钰
潘杰
张浩悦
Current Assignee
Apollo Intelligent Technology Beijing Co Ltd
Original Assignee
Apollo Intelligent Technology Beijing Co Ltd
Priority date
Filing date
Publication date
Application filed by Apollo Intelligent Technology Beijing Co Ltd
Priority to CN202010192073.0A
Publication of CN111325984A
Application granted
Publication of CN111325984B

Classifications

    • G - PHYSICS
    • G08 - SIGNALLING
    • G08G - TRAFFIC CONTROL SYSTEMS
    • G08G1/00 - Traffic control systems for road vehicles
    • G08G1/01 - Detecting movement of traffic to be counted or controlled
    • G08G1/0104 - Measuring and analyzing of parameters relative to traffic conditions
    • G08G1/0125 - Traffic data processing
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/21 - Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 - Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G - PHYSICS
    • G08 - SIGNALLING
    • G08G - TRAFFIC CONTROL SYSTEMS
    • G08G1/00 - Traffic control systems for road vehicles
    • G08G1/09 - Arrangements for giving variable traffic instructions
    • G08G1/0962 - Arrangements for giving variable traffic instructions having an indicator mounted inside the vehicle, e.g. giving voice messages
    • G08G1/0967 - Systems involving transmission of highway information, e.g. weather, speed limits
    • G08G1/096708 - Systems involving transmission of highway information, e.g. weather, speed limits where the received information might be used to generate an automatic action on the vehicle control
    • G08G1/096725 - Systems involving transmission of highway information, e.g. weather, speed limits where the received information generates an automatic action on the vehicle control

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Theoretical Computer Science (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Atmospheric Sciences (AREA)
  • Chemical & Material Sciences (AREA)
  • Analytical Chemistry (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The embodiment of the present application provides a sample data acquisition method and device, and an electronic device, which can be used for automatic driving and in particular belong to the technical field of autonomous parking. The method comprises the following steps: first, a background image and a background annotation image annotated with attribute information of the background are acquired respectively; the 3D model of the target object and the background image are rendered to obtain an augmented reality image; the 3D model of the target object is rendered to obtain a target object annotation image annotated with attribute information of the target object; the augmented reality image, the background annotation image and the target object annotation image together form the sample data, including the target object, that needs to be acquired. It can be seen that, when acquiring sample data, the embodiment of the present application relies on augmented reality technology and generates the sample data including the target object from the background image and the 3D model of the target object, which reduces the time spent collecting and annotating sample data and improves the acquisition efficiency of sample data.

Description

Sample data acquisition method and device and electronic equipment
Technical Field
The application relates to the technical field of data processing, in particular to the technical field of automatic driving.
Background
In an autopilot scenario, visual perception is typically based on a deep learning model, and obtaining such a model requires a large number of training samples. Acquiring training samples involves problems such as difficult collection, many pose variations, and high labeling cost, so some training samples, such as small sample data, are hard to obtain; and because small sample data is lacking among the large number of training samples used for training, the accuracy of the trained deep learning model is low.
In order to improve the accuracy of the trained deep learning model, the prior art generally collects and labels the missing small sample data directly. However, collecting and labeling data consumes a great deal of time and incurs a high economic cost, and for some small samples, such as police cars and fire trucks, which are not commonly seen in daily life, it is difficult to collect a large amount of relevant sample data of police cars and fire trucks in different scenes.
Therefore, with the existing sample data acquisition method, the acquisition efficiency of sample data is low.
Disclosure of Invention
The embodiment of the present application provides a sample data acquisition method and device, and an electronic device, which improve the acquisition efficiency of sample data.
In a first aspect, an embodiment of the present application provides a method for acquiring sample data, where the method for acquiring sample data may include:
respectively acquiring a background image and a background labeling image; wherein, the background labeling image is labeled with the attribute information of the background.
And rendering the 3D model of the target object and the background image to obtain an augmented reality image.
rendering the 3D model of the target object to obtain a target object annotation image; wherein the target object annotation image is annotated with attribute information of the target object, and the sample data comprises the augmented reality image, the background annotation image and the target object annotation image.
It can be seen that, unlike the prior art, when acquiring sample data the embodiment of the present application relies on augmented reality technology and generates the sample data including the target object from the background image and the 3D model of the target object. A large amount of small sample data that cannot be collected in the prior art can therefore be generated in a short time, which reduces the collection and labeling time of sample data and improves the acquisition efficiency of sample data; moreover, compared with sample data synthesized by pasting directly onto the background image, which is not smooth, the accuracy of the acquired sample data is improved.
In a possible implementation manner, the rendering the 3D model of the target object and the background image to obtain the augmented reality image may include:
acquiring key information affecting the rendering effect; wherein the key information includes at least one of the model parameters of the 3D model of the target object, the shooting parameters used when the background image was captured, or the environmental parameters present when the background image was captured.
And rendering the 3D model of the target object, the key information and the background image to obtain the augmented reality image.
It can be seen that, when the 3D model of the target object and the background image are rendered to obtain the augmented reality image, rendering the key information that affects the sample data together with the 3D model of the target object and the background image makes the obtained augmented reality image more realistic, vivid and diversified.
In a possible implementation manner, the key information includes model parameters of a 3D model of the target object, and rendering the 3D model of the target object, the key information, and the background image to obtain the augmented reality image may include:
rendering the 3D model of the target object, the model parameters and the background image to obtain the augmented reality image; wherein the model parameters include coordinate system parameters and/or a rotation angle.
Correspondingly, the rendering the 3D model of the target object to obtain a target object annotation image includes:
and rendering the 3D model of the target object and the model parameters to obtain the target object annotation image.
Therefore, in this possible scenario, when the augmented reality image is rendered, the model parameters including the coordinate system parameters and/or the rotation angle are rendered together with the 3D model of the target object and the background image, so that augmented reality images including the target object in different poses against the road background can be obtained, making the obtained augmented reality images more diversified.
In one possible implementation, the shooting parameters include a shooting focal length and/or an electromechanical conversion coefficient.
Therefore, in this possible scenario, when the augmented reality image is rendered, the shooting parameters including the shooting focal length and/or the electromechanical conversion coefficient are rendered together with the 3D model of the target object and the background image, so that augmented reality images including the target object against the road background at different shooting angles can be obtained, making the obtained augmented reality images more diversified.
In one possible implementation, the environmental parameters include illumination intensity and/or air quality.
In this way, in this possible scenario, when the augmented reality image is rendered, the environmental parameters including the illumination intensity and/or the air quality are rendered together with the 3D model of the target object and the background image, so that augmented reality images including the target object against the road background under different illumination intensities and/or air qualities can be obtained, making the obtained augmented reality images more realistic, vivid and diversified.
In one possible implementation manner, the method for acquiring sample data may further include:
determining whether any image content is occluded in the augmented reality image.
generating an augmented reality annotation image corresponding to the augmented reality image according to the determination result; wherein the augmented reality annotation image is annotated with the attribute information of the background and the attribute information of the target object in the augmented reality image, so that the accuracy of the acquired sample data can be improved.
In a possible implementation manner, the generating the augmented reality annotation image corresponding to the augmented reality image according to the determination result may include:
If no image content is occluded in the augmented reality image, the current background labeling image is valid labeling data for the background in the augmented reality image, and the target object annotation image is valid labeling data for the target object in the augmented reality image. In this case, when the augmented reality annotation image corresponding to the augmented reality image is acquired, the background labeling image and the target object annotation image can be synthesized directly to obtain the augmented reality annotation image, which is annotated with the attribute information of the background and the attribute information of the target object in the augmented reality image.
If image content is occluded in the augmented reality image, the current background labeling image is no longer valid labeling data for the background in the augmented reality image: the attribute information of the occluded content must be deleted from the background labeling image, and the resulting new background labeling image is the labeling data for the background in the augmented reality image. In this case, when the augmented reality annotation image corresponding to the augmented reality image is acquired, the attribute information of the occluded content is first deleted from the background labeling image to obtain a new background labeling image; the new background labeling image and the target object annotation image are then synthesized to obtain the augmented reality annotation image. Labeling data for occluded content is thereby kept out of the background labeling image, which further improves the accuracy of the acquired sample data.
In one possible implementation manner, the method for acquiring sample data may further include:
acquiring a 3D model set, and searching the 3D model set for the 3D model of the target object, thereby acquiring the 3D model of the target object.
In a second aspect, an embodiment of the present application further provides an apparatus for acquiring sample data, where the apparatus for acquiring sample data may include:
the acquisition module is used for respectively acquiring a background image and a background labeling image; wherein, the background labeling image is labeled with the attribute information of the background.
The processing module is used for rendering the 3D model of the target object and the background image to obtain an augmented reality image; rendering the 3D model of the target object to obtain a target object annotation image; the target object annotation image is marked with attribute information of the target object, and sample data comprises the augmented reality image, the background annotation image and the target object annotation image.
In one possible implementation manner, the processing module is specifically configured to obtain key information affecting the rendering effect; and render the 3D model of the target object, the key information and the background image to obtain the augmented reality image; wherein the key information includes at least one of the model parameters of the 3D model of the target object, the shooting parameters used when the background image was captured, or the environmental parameters present when the background image was captured.
In a possible implementation manner, the key information includes model parameters of a 3D model of the target object, and the processing module is specifically configured to:
rendering the 3D model of the target object, the model parameters and the background image to obtain the augmented reality image; wherein the model parameters include coordinate system parameters and/or rotation angles.
Correspondingly, the processing module is further specifically configured to:
and rendering the 3D model of the target object and the model parameters to obtain the target object annotation image.
In one possible implementation, the shooting parameters include a shooting focal length and/or an electromechanical conversion coefficient.
In one possible implementation, the environmental parameters include illumination intensity and/or air quality.
In a possible implementation manner, the processing module is further configured to determine whether any image content is occluded in the augmented reality image; and generate an augmented reality annotation image corresponding to the augmented reality image according to the determination result; wherein the augmented reality annotation image is annotated with the attribute information of the background and the attribute information of the target object in the augmented reality image.
In a possible implementation manner, the processing module is specifically configured to: if no image content is occluded in the augmented reality image, synthesize the background labeling image and the target object annotation image to obtain an augmented reality annotation image corresponding to the augmented reality image; and if image content is occluded in the augmented reality image, delete the attribute information of the occluded content from the background labeling image to obtain a new background labeling image, and synthesize the new background labeling image and the target object annotation image to obtain the augmented reality annotation image corresponding to the augmented reality image.
In one possible implementation manner, the acquiring module is specifically configured to acquire a 3D model set; and searching the 3D model of the target object in the 3D model set.
In a third aspect, embodiments of the present application further provide an electronic device, which may include:
at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of obtaining sample data as described in any one of the possible implementations of the first aspect.
In a fourth aspect, embodiments of the present application further provide a non-transitory computer readable storage medium storing computer instructions for causing the computer to perform the method for acquiring sample data described in any one of the possible implementations of the first aspect.
One embodiment of the above application has the following advantages or benefits: when acquiring sample data, a background image and a background annotation image annotated with attribute information of the background are acquired respectively; the 3D model of the target object and the background image are rendered to obtain an augmented reality image; the 3D model of the target object is rendered to obtain a target object annotation image annotated with attribute information of the target object; and the obtained augmented reality image, background annotation image and target object annotation image together form the sample data, including the target object, that needs to be acquired. It can be seen that, unlike the prior art, when acquiring sample data the embodiment of the present application relies on augmented reality technology and generates the sample data including the target object from the background image and the 3D model of the target object, so that a large amount of small sample data that cannot be collected in the prior art can be generated in a short time, which reduces the collection and labeling time of sample data and improves the acquisition efficiency of sample data.
Other effects of the above alternatives will be described below in connection with specific embodiments.
Drawings
The drawings are provided for a better understanding of the present solution and do not constitute a limitation of the present application. In the drawings:
FIG. 1 is a scene diagram in which a sample data acquisition method according to embodiments of the present application may be implemented;
FIG. 2 is a flowchart of a sample data acquisition method according to the first embodiment of the present application;
FIG. 3 is a schematic diagram of a background labeling image of a parking lot according to the first embodiment of the present application;
FIG. 4 is a schematic diagram of a 3D model of a police car according to the first embodiment of the present application;
FIG. 5 is a schematic diagram of an internal screenshot of a renderer according to the first embodiment of the present application;
FIG. 6 is a schematic diagram of an augmented reality image according to the first embodiment of the present application;
FIG. 7 is a flowchart of an augmented reality image acquisition method according to the second embodiment of the present application;
FIG. 8 is a schematic diagram of acquiring an augmented reality image and a target object annotation image according to the second embodiment of the present application;
FIG. 9 is a schematic diagram of a 3D model and an augmented reality image of a police car in a first pose according to the second embodiment of the present application;
FIG. 10 is a schematic diagram of a 3D model and an augmented reality image of a police car in a second pose according to the second embodiment of the present application;
FIG. 11 is a flowchart of a method for acquiring an augmented reality annotation image corresponding to an augmented reality image according to the third embodiment of the present application;
FIG. 12 is a schematic structural diagram of a sample data acquisition device according to the fourth embodiment of the present application;
FIG. 13 is a block diagram of an electronic device for a sample data acquisition method according to an embodiment of the present application.
Detailed Description
Exemplary embodiments of the present application are described below in conjunction with the accompanying drawings, which include various details of the embodiments of the present application to facilitate understanding, and should be considered as merely exemplary. Accordingly, one of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present application. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
In embodiments of the present application, "at least one" means one or more, and "a plurality" means two or more. "And/or" describes an association relationship between associated objects and indicates that three relationships may exist; for example, "A and/or B" may indicate: A alone, both A and B, or B alone, where A and B may be singular or plural. In the text of the present application, the character "/" generally indicates that the associated objects before and after it are in an "or" relationship.
It can be appreciated that the sample data acquisition method provided by the embodiment of the present application can be applied to an automatic driving scenario. To ensure driving safety, the road conditions ahead of the vehicle need to be perceived, and visual perception is generally performed based on a deep learning model. However, building the deep learning model requires a large number of training samples, especially certain small sample data, for example sample data in which the target object is a police car and the background is a parking lot. Referring to fig. 1, a scene diagram in which a sample data acquisition method of an embodiment of the present application may be implemented: when collecting sample data of a police car parked in a parking lot, the fact that police cars are not commonly seen in daily life makes the acquisition of such sample data inefficient.
In order to improve the acquisition efficiency of sample data, one could attempt to cut a police car out of an image containing one by image matting and paste it onto another image of a parking lot, thereby synthesizing sample data of a police car parked in a parking lot. Although this method can improve the acquisition efficiency of sample data to a certain extent, it can only extract police cars from existing images and paste them directly onto the background; the synthesized image of a police car parked in a parking lot is therefore not smooth and differs greatly from actually collected data, so the accuracy of the acquired sample data is low.
Based on the above discussion, in order to improve the acquisition efficiency of sample data, the embodiment of the present application provides a sample data acquisition method in which, when acquiring sample data, a background image and a background annotation image annotated with attribute information of the background are acquired respectively; the 3D model of the target object and the background image are rendered to obtain an augmented reality image; and the 3D model of the target object is rendered to obtain a target object annotation image annotated with attribute information of the target object. The obtained augmented reality image, background annotation image and target object annotation image together form the sample data, including the target object, that needs to be acquired. It can be seen that, unlike the prior art, when acquiring sample data the embodiment of the present application relies on augmented reality technology and generates the sample data including the target object from the background image and the 3D model of the target object, so that a large amount of small sample data that cannot be collected in the prior art can be generated in a short time; this reduces the collection and labeling time of sample data and improves the acquisition efficiency of sample data, and compared with sample data synthesized by pasting directly onto the background image, which is not smooth, the accuracy of the acquired sample data is improved.
The types of the target object and the background depend on the application scenario of the sample data. For example, when the application scenario is an autopilot scenario, the target object may be a police car or a fire truck, and the background may be a parking lot or a road. When the target object is a police car, the attribute information of the target object may include the position of the police car and its size; when the target object is a fire truck, the attribute information may include the position of the fire truck and its size. When the background is a parking lot, the attribute information of the background may include the position of the parking lot and the size of the parking spaces; when the background is a road, the attribute information may include the position of the road, the number of lanes and the lane width. The method can of course also be applied to other scenarios, for example a house decoration scenario in which the target object is a table and the background images show different floor plans. In the following description, the sample data acquisition method provided in the embodiment of the present application is described taking a police car as the target object and a parking lot as the background, but the embodiment of the present application is not limited thereto.
In general, when sample data for training a deep learning model is acquired, not only the sample image data but also the annotation data corresponding to the sample image data needs to be acquired; the sample image data together with its corresponding annotation data forms one complete set of sample data. In the embodiment of the present application, the 3D model of the target object and the background image are rendered to obtain the augmented reality image, which can be understood as the sample image data; the 3D model of the target object is rendered to obtain the target object annotation image, which, together with the background annotation image, can be understood as the annotation data corresponding to the sample image data. The augmented reality image, the background annotation image and the target object annotation image are one complete set of sample data including the target object.
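For illustration only (the application prescribes no particular data layout), one complete set of sample data might be represented as the following minimal Python sketch; the class and field names are hypothetical:

    # A minimal sketch of one complete set of sample data; all names are
    # illustrative assumptions, not terms defined by the application.
    from dataclasses import dataclass
    import numpy as np

    @dataclass
    class SampleData:
        # Sample image data: the background with the rendered target object.
        augmented_reality_image: np.ndarray
        # Annotated with attribute information of the background
        # (e.g., parking-space positions and sizes).
        background_annotation_image: np.ndarray
        # Annotated with attribute information of the target object
        # (e.g., its position and size).
        target_object_annotation_image: np.ndarray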
It should be noted that the embodiment of the present application is motivated by the difficulty of collecting and labeling small sample data, but the sample data acquisition method provided by the embodiment of the present application is not limited to acquiring small sample data; it can also acquire large sample data, that is, sample data including a common target object such as an ordinary car. The embodiment of the present application takes the acquisition of small sample data only as an example and is not limited thereto.
Hereinafter, the sample data acquisition method provided in the present application will be described in detail through specific embodiments. It should be understood that the following embodiments may be combined with each other, and descriptions of the same or similar concepts or processes may not be repeated in some embodiments.
Example 1
Fig. 2 is a flowchart of a sample data acquisition method according to the first embodiment of the present application. The sample data acquisition method may be performed by software and/or a hardware device; for example, the hardware device may be a sample data acquisition device, and the sample data acquisition device may be provided in an electronic device. Referring to fig. 2, the sample data acquisition method may include:
s201, respectively acquiring a background image and a background labeling image.
Wherein, the background labeling image is labeled with the attribute information of the background. By way of example, the background may be a parking lot or a road. When the background is a parking lot, the attribute information of the background can be information such as the position of the parking lot and the size of a parking space; when the background is a road, the attribute information of the background may be information such as a position of the road, the number of lanes of the road, and the width of lanes of the road.
For example, when the background image is acquired, taking the parking lot shown in fig. 1 as the background: the parking lot image may be captured by a capture device (for example, a camera), or a parking lot image sent by another device may be received; of course, the parking lot image may also be obtained in other ways, and the manner of acquiring the parking lot image is not specifically limited here. It can be appreciated that, in the embodiment of the present application, after the parking lot image is acquired, the parking lot image may be labeled by a professional annotator, so as to obtain the background labeling image of the parking lot. For example, referring to fig. 3, fig. 3 is a schematic diagram of a background labeling image of a parking lot according to the first embodiment of the present application.
After the background image and the background label image are acquired, respectively, the following S202 may be performed:
and S202, rendering the 3D model of the target object and the background image to obtain an augmented reality image.
For example, the target object may be a police car or a fire truck, and when the target object is a police car, the attribute information of the target object may be information such as a position of the police car and a size of the police car; when the target object is a fire engine, the attribute information of the target object may be information such as a position of the fire engine and a size of the fire engine.
It will be appreciated that, before the 3D model of the target object and the background image are rendered, the 3D model of the target object needs to be acquired. For example, taking the police car shown in fig. 1 as the target object: when acquiring the 3D model of the police car, a 3D model set may first be obtained and searched for the 3D model of the police car, thereby acquiring it. For example, referring to fig. 4, fig. 4 is a schematic diagram of a 3D model of a police car according to the first embodiment of the present application.
After the 3D model of the police car and the parking lot image are respectively acquired, they can be input into a renderer (for an example internal screenshot of a renderer, see fig. 5, a schematic diagram of the internal screenshot of the renderer according to the first embodiment of the present application) and rendered to obtain a 2D augmented reality image, which can be understood as the sample image data within the sample data to be acquired. It is understood that the 2D augmented reality image includes both the target object, the police car, and the background, the parking lot. For example, referring to fig. 6, fig. 6 is a schematic diagram of an augmented reality image according to the first embodiment of the present application.
When acquiring the sample data including the police car, not only the augmented reality image including the police car but also the annotation data corresponding to it needs to be acquired; in the embodiment of the present application, these annotation images are the police car annotation image and the parking lot annotation image. In S201, the parking lot annotation image corresponding to the parking lot image was already acquired together with the parking lot image, so only the police car annotation image still needs to be acquired, that is, the following S203 is executed:
and S203, rendering the 3D model of the target object to obtain a target object annotation image.
The target object annotation image is marked with attribute information of the target object. For example, the attribute information of the target object may include information such as a position of the target object and a size of the target object.
After the 3D model of the police car is obtained, it can be input into the renderer and rendered to obtain the police car annotation image, which corresponds to the augmented reality image shown in fig. 6 with the parking lot image removed. The police car annotation image, the parking lot annotation image obtained in S201, and the augmented reality image including the police car obtained in S202 are then taken together as one complete set of sample data including the police car.
It should be noted that, in the embodiment of the present application, there is no fixed order between S202 and S203: S202 may be executed first and S203 second, S203 may be executed first and S202 second, or S202 and S203 may be executed simultaneously, as set according to actual needs. The embodiment of the present application is described taking the order S202 then S203 only as an example, but is not limited thereto.
It can be seen that, unlike the prior art, when acquiring sample data the embodiment of the present application relies on augmented reality technology and generates the sample data including the target object from the background image and the 3D model of the target object. A large amount of small sample data that cannot be collected in the prior art can therefore be generated in a short time, which reduces the collection and labeling time of sample data and improves the acquisition efficiency of sample data; moreover, compared with sample data synthesized by pasting directly onto the background image, which is not smooth, the accuracy of the acquired sample data is improved.
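As an illustration, a minimal sketch of the S201-S203 flow is given below; the render stub stands in for a real renderer, and its signature is an assumption rather than an API defined by the application:

    from typing import Optional
    import numpy as np

    def render(model_3d, background: Optional[np.ndarray] = None) -> np.ndarray:
        # Stand-in for a real renderer: a real implementation would rasterize
        # the 3D model and, if given, composite it over the background image.
        # A blank RGBA canvas is returned here so the sketch is self-contained.
        h, w = background.shape[:2] if background is not None else (480, 640)
        return np.zeros((h, w, 4), dtype=np.uint8)

    def acquire_sample_data(model_3d, background_image, background_labeling_image):
        # S202: render the 3D model together with the background image to
        # obtain the 2D augmented reality image (the sample image data).
        augmented_reality_image = render(model_3d, background=background_image)
        # S203: render the 3D model alone to obtain the target object
        # annotation image (annotation data for the target object).
        target_object_annotation_image = render(model_3d, background=None)
        # The three items together form one complete set of sample data.
        return (augmented_reality_image,
                background_labeling_image,
                target_object_annotation_image)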
The embodiment shown in fig. 2 above describes in detail how sample data is acquired in the embodiment of the present application. Continuing with the police car as the target object and the parking lot as the background: in a real scene, when image data including a police car is captured, the position and angle of the police car vary, which can be reproduced through the model parameters of the police car's 3D model; similarly, when image data including the parking lot is captured, the shooting parameters and/or the environmental parameters also vary. Therefore, to make the acquired sample data including the police car more diversified and more realistic, key information that affects the sample data can be taken into account during rendering. That is, when the 3D model of the target object and the background image are rendered in S202 to obtain the augmented reality image, the key information can be rendered together with the 3D model of the target object and the background image. For example, the key information may include at least one of the model parameters of the 3D model of the target object, the shooting parameters used when the background image was captured, or the environmental parameters present when the background image was captured. How to acquire the augmented reality image in combination with the key information is described in detail in the second embodiment below.
Example two
Fig. 7 is a flowchart of an augmented reality image acquisition method according to the second embodiment of the present application. As shown in fig. 7, the augmented reality image acquisition method may include:
s701, acquiring key information affecting a rendering effect.
The key information includes at least one of the model parameters of the 3D model of the target object, the shooting parameters used when the background image was captured, or the environmental parameters present when the background image was captured; of course, other parameters may also be included, set according to actual needs. It can be understood that the more parameters the key information includes, the more realistic and diversified the augmented reality image obtained by rendering the key information together with the 3D model of the target object and the background image.
For example, in the embodiment of the present application, the model parameters include coordinate system parameters and/or a rotation angle; the shooting parameters may include intrinsic parameters such as the shooting focal length and the electromechanical conversion coefficient; and the environmental parameters may include illumination intensity and/or air quality. Of course, other parameters may also be included, set according to actual needs; the embodiment of the present application imposes no further limitation here.
After acquiring the key information affecting the rendering effect, the key information may be rendered together with the 3D model of the target object and the background image, that is, the following S702 is performed:
and S702, rendering the 3D model of the target object, the key information and the background image to obtain an augmented reality image.
For example, in one possible scenario, when the key information is the model parameters of the 3D model of the target object, the 3D model of the target object, the model parameters including the coordinate system parameters and/or the rotation angle, and the background image may be input into the renderer together and rendered, thereby obtaining the augmented reality image. It should be noted that, when model parameters including coordinate system parameters and/or a rotation angle are added while rendering an augmented reality image including the target object, the target object annotation image must remain consistent with the target object in the augmented reality image: the pose angle and the size of the target object must be the same in both. Therefore, when the target object annotation image is rendered, the 3D model of the target object and the same model parameters, including the coordinate system parameters and/or the rotation angle, are input into the renderer together and rendered, thereby obtaining the target object annotation image.
It can be understood that, when the model parameters including the coordinate system parameters and/or the rotation angle are rendered together with the 3D model of the target object and the background image to obtain the augmented reality image, and together with the 3D model of the target object to obtain the target object annotation image, different values of the model parameters yield different poses of the target object in the generated images.
Therefore, in this possible scenario, when the augmented reality image is rendered, the model parameters including the coordinate system parameters and/or the rotation angle are rendered together with the 3D model of the target object and the background image, so that augmented reality images including the target object in different poses against the road background can be obtained, making the obtained augmented reality images more diversified.
In another possible implementation manner, when the key information is the shooting parameters, the 3D model of the target object, the shooting parameters including the shooting focal length and/or the electromechanical conversion coefficient, and the background image may be input into the renderer together and rendered, thereby obtaining the augmented reality image.
It can be understood that, when the 3D model of the target object, the shooting parameters including the shooting focal length and/or the electromechanical conversion coefficient, and the background image are rendered together to obtain the augmented reality image, different values of the shooting parameters yield different poses of the target object in the generated image.
Therefore, in this possible scenario, when the augmented reality image is rendered, the shooting parameters including the shooting focal length and/or the electromechanical conversion coefficient are rendered together with the 3D model of the target object and the background image, so that augmented reality images including the target object against the road background at different shooting angles can be obtained, making the obtained augmented reality images more diversified.
In yet another possible implementation manner, when the key information is the environmental parameters, the 3D model of the target object, the environmental parameters including the illumination intensity and/or the air quality, and the background image may be input into the renderer together and rendered, thereby obtaining the augmented reality image.
It can be appreciated that, when the 3D model of the target object, the environmental parameters including the illumination intensity and/or the air quality, and the background image are rendered together to obtain the augmented reality image, different values of the environmental parameters yield a different appearance of the target object in the generated image.
In this way, in this possible scenario, when the augmented reality image is rendered, the environmental parameters including the illumination intensity and/or the air quality are rendered together with the 3D model of the target object and the background image, so that augmented reality images including the target object against the road background under different illumination intensities and/or air qualities can be obtained, making the obtained augmented reality images more realistic, vivid and diversified.
It can be understood that the three possible scenarios above respectively describe how the 3D model of the target object, the key information, and the background image are rendered to obtain the augmented reality image when the key information is the model parameters of the 3D model of the target object, the shooting parameters used when the background image was captured, or the environmental parameters present when the background image was captured. Of course, when the augmented reality image is acquired, the key information may also include all three kinds of parameters at once, and the 3D model of the target object, the three kinds of parameters, and the background image are rendered together to obtain the augmented reality image. In addition, the 3D model of the target object and the model parameters including the coordinate system parameters and/or the rotation angle can be input into the renderer together and rendered, thereby obtaining the target object annotation image. For example, referring to fig. 8, fig. 8 is a schematic diagram of acquiring an augmented reality image and a target object annotation image according to the second embodiment of the present application. In this way, the obtained augmented reality image is more realistic, vivid and diversified. It should be noted that the rendering method used with all three kinds of parameters is similar to the rendering method used with any one of them in the three possible scenarios above; reference may be made to the descriptions in those three possible implementations, which are not repeated here.
For example, continuing with the police car as the target object and the parking lot as the background: when the key information includes the coordinate system parameters, the rotation angle, the shooting focal length and the illumination intensity, and the parameter values in the key information take a first set of values, the pose of the police car in the 3D model and in the augmented reality image generated from it may be a first pose; see fig. 9, a schematic diagram of the 3D model and the augmented reality image of a police car in the first pose according to the second embodiment of the present application. When the parameter values in the key information take a second set of values, the pose of the police car in the 3D model and in the augmented reality image generated from it may be a second pose; see fig. 10, a schematic diagram of the 3D model and the augmented reality image of a police car in the second pose according to the second embodiment of the present application. Combining fig. 9 and fig. 10, it is apparent that different parameter values in the key information produce different poses of the police car in the generated images.
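The following sketch illustrates how key information might be sampled and passed to a renderer for S701-S702; every parameter name and value range is an assumption chosen for illustration, and render_scene is a stand-in stub rather than a real renderer API. Note that the same model parameters are reused when the target object annotation image is rendered, preserving the pose and size consistency described above:

    import random
    from typing import Optional
    import numpy as np

    def render_scene(model_3d, key_info: dict,
                     background: Optional[np.ndarray] = None) -> np.ndarray:
        # Stand-in for a renderer that applies model, shooting, and
        # environmental parameters; returns a blank RGBA canvas here.
        h, w = background.shape[:2] if background is not None else (480, 640)
        return np.zeros((h, w, 4), dtype=np.uint8)

    def sample_key_information() -> dict:
        return {
            # Model parameters: coordinate system parameters and rotation angle.
            "position_xyz": (random.uniform(-5, 5), 0.0, random.uniform(5, 30)),
            "rotation_deg": random.uniform(0.0, 360.0),
            # Shooting parameters: focal length (illustrative values, in mm).
            "focal_length_mm": random.choice([24.0, 35.0, 50.0]),
            # Environmental parameters: illumination intensity and air quality.
            "illumination_lux": random.uniform(200.0, 100_000.0),
            "haze_density": random.uniform(0.0, 0.5),
        }

    def render_pair(model_3d, background_image: np.ndarray):
        key_info = sample_key_information()
        # S702: render the 3D model, the key information, and the background
        # image together to obtain the augmented reality image.
        ar_image = render_scene(model_3d, key_info, background=background_image)
        # Render the 3D model with the SAME model parameters (no background),
        # so the pose and size in the target object annotation image match
        # those in the augmented reality image.
        annotation_image = render_scene(model_3d, key_info, background=None)
        return ar_image, annotation_image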
In the embodiment shown in fig. 2 or in fig. 7, when the 3D model of the target object and the background image are rendered, part of the background in the resulting augmented reality image may be occluded by the target object; without further processing of the labeling data, the accuracy of the acquired sample data would suffer. Therefore, to improve the accuracy of the acquired sample data, after the augmented reality image and the background labeling image are respectively acquired, it can further be determined whether any image content is occluded in the augmented reality image, and the augmented reality annotation image corresponding to the augmented reality image is generated according to the determination result; see the third embodiment described below.
Example III
Fig. 11 is a flowchart of a method for acquiring an augmented reality annotation image corresponding to an augmented reality image according to the third embodiment of the present application. Referring to fig. 11, the sample data acquisition method may further include:
s1101, judging whether an image is blocked in the augmented reality image.
S1102, if no image content is occluded in the augmented reality image, synthesizing the background labeling image and the target object annotation image to obtain an augmented reality annotation image corresponding to the augmented reality image.
After the determination, if no image content is occluded in the augmented reality image, the current background labeling image is valid labeling data for the background in the augmented reality image, and the target object annotation image is valid labeling data for the target object in the augmented reality image. In this case, when the augmented reality annotation image corresponding to the augmented reality image is acquired, the background labeling image and the target object annotation image can be synthesized directly to obtain the augmented reality annotation image, which is annotated with the attribute information of the background and the attribute information of the target object in the augmented reality image.
S1103, if image content is occluded in the augmented reality image, deleting the attribute information of the occluded content from the background labeling image to obtain a new background labeling image; and synthesizing the new background labeling image and the target object annotation image to obtain the augmented reality annotation image corresponding to the augmented reality image.
After the determination, if image content is occluded in the augmented reality image, the current background labeling image is no longer valid labeling data for the background in the augmented reality image; the attribute information of the occluded content must be deleted from the background labeling image, and the resulting new background labeling image is the labeling data for the background in the augmented reality image. In this case, when the augmented reality annotation image corresponding to the augmented reality image is acquired, the attribute information of the occluded content is first deleted from the background labeling image to obtain a new background labeling image; the new background labeling image and the target object annotation image are then synthesized to obtain the augmented reality annotation image. Labeling data for occluded content is thereby kept out of the background labeling image, which further improves the accuracy of the acquired sample data.
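As one plausible realization of S1101-S1103 (the embodiment does not prescribe how occlusion is detected), the sketch below assumes that a per-pixel boolean mask is available for the rendered target object and for each labeled background element; the masks, record layout, and threshold rule are all assumptions:

    import numpy as np

    def build_ar_annotation(background_records: list,
                            background_masks: dict,
                            object_record: dict,
                            object_mask: np.ndarray,
                            occlusion_threshold: float = 0.5) -> list:
        # background_records: attribute records of background elements, each
        # carrying an "id"; background_masks: per-record boolean masks in image
        # coordinates; object_mask: boolean mask of the rendered target object.
        kept = []
        for record in background_records:
            mask = background_masks[record["id"]]
            # S1101: treat a background element as occluded when the target
            # object covers more than occlusion_threshold of its pixels
            # (an assumed decision rule).
            covered = np.logical_and(mask, object_mask).sum() / max(mask.sum(), 1)
            if covered <= occlusion_threshold:
                # S1102: the element is still visible, so its label is kept.
                kept.append(record)
            # S1103: labels of occluded elements are dropped, which yields the
            # new background labeling data.
        # Synthesizing the kept background labels with the target object label
        # gives the label set of the augmented reality annotation image.
        return kept + [object_record]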
It should be noted that the generation of the labeling data within the sample data corresponds to the intended application of the sample data. If the sample data is used for training a detection model, bounding-box labeling data of obstacles needs to be generated from the augmented reality image and the background labeling image, and the question of whether image content is occluded in the augmented reality image must be considered; after the bounding-box labeling data is generated, the image detection model may be trained using the augmented reality image and the bounding-box labeling data. If the sample data is used for training a segmentation model, segmentation labeling data is generated from the augmented reality image and the background labeling image, and the image segmentation model is then trained using the augmented reality image and the segmentation labeling data. If the sample data is used for training a classification model, one classification label for the augmented reality image needs to be determined, and the image classification model is then trained using the augmented reality image and the classification label data.
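For the detection-model case, a bounding box can be derived directly from the target object annotation image. The sketch below assumes an RGBA render in which non-zero alpha marks target-object pixels; this image-format convention is an assumption, not something fixed by the embodiment:

    from typing import Optional, Tuple
    import numpy as np

    def bounding_box_from_annotation(
            annotation_image: np.ndarray) -> Optional[Tuple[int, int, int, int]]:
        # annotation_image: RGBA render of the target object alone. Returns
        # (x_min, y_min, x_max, y_max) in pixel coordinates, or None if the
        # object is not visible in this render.
        ys, xs = np.nonzero(annotation_image[..., 3])
        if xs.size == 0:
            return None
        return int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max())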
Example IV
Fig. 12 is a schematic structural diagram of a sample data acquisition device 120 according to the fourth embodiment of the present application. As shown in fig. 12, the sample data acquisition device 120 may include:
an acquisition module 1201, configured to acquire a background image and a background annotation image respectively; wherein, the background labeling image is labeled with the attribute information of the background.
The processing module 1202 is configured to render the 3D model of the target object and the background image to obtain an augmented reality image; rendering the 3D model of the target object to obtain a target object annotation image; the target object annotation image is marked with attribute information of a target object, and the sample data comprises an augmented reality image, a background annotation image and the target object annotation image.
Optionally, the processing module 1202 is specifically configured to obtain key information affecting the rendering effect; rendering the 3D model, the key information and the background image of the target object to obtain an augmented reality image; wherein the key information includes at least one of model parameters of a 3D model of the target object, photographing parameters when the background image is acquired, or environmental parameters when the background image is acquired.
Optionally, the key information includes model parameters of a 3D model of the target object, and the processing module 1202 is specifically configured to:
rendering the 3D model, the model parameters and the background image of the target object to obtain an augmented reality image; wherein the model parameters include coordinate system parameters and/or rotation angles.
Correspondingly, the processing module 1202 is further specifically configured to:
and rendering the 3D model and model parameters of the target object to obtain a target object annotation image.
Optionally, the shooting parameters include a shooting focal length and/or an electromechanical conversion coefficient.
Optionally, the environmental parameters include illumination intensity and/or air quality.
Optionally, the processing module 1202 is further configured to determine whether any image content is occluded in the augmented reality image; and generate an augmented reality annotation image corresponding to the augmented reality image according to the determination result; wherein the augmented reality annotation image is annotated with the attribute information of the background and the attribute information of the target object in the augmented reality image.
Optionally, the processing module 1202 is specifically configured to: if no image in the augmented reality image is blocked, synthesize the background annotation image and the target object annotation image to obtain the augmented reality annotation image corresponding to the augmented reality image; and if an image in the augmented reality image is blocked, delete the attribute information of the blocked image from the background annotation image to obtain a new background annotation image, and synthesize the new background annotation image and the target object annotation image to obtain the augmented reality annotation image corresponding to the augmented reality image.
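A hedged sketch of these two branches, assuming the annotation images are per-pixel class-id maps with 0 meaning "unlabeled" (an assumption of this illustration, not of the patent):

```python
import numpy as np

def synthesize_annotation(background_label, object_label, object_mask, blocked):
    """Merge the background annotation image with the target object
    annotation image, deleting the attribute information of blocked
    background regions first when occlusion was detected."""
    new_background = background_label.copy()
    if blocked:
        # delete attribute information of the blocked image regions
        new_background[object_mask > 0] = 0
    # synthesis: where the target object is visible, its label wins
    merged = new_background
    merged[object_mask > 0] = object_label[object_mask > 0]
    return merged
```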
Optionally, the acquisition module 1201 is specifically configured to acquire a 3D model set, and to search the 3D model set for the 3D model of the target object.
The sample data acquisition device 120 provided in this embodiment may execute the technical solution of the sample data acquisition method in any of the above embodiments. Its implementation principle and beneficial effects are similar to those of the sample data acquisition method and may be referred to there; they are not repeated here.
According to embodiments of the present application, an electronic device and a readable storage medium are also provided.
Fig. 13 is a block diagram of an electronic device for the sample data acquisition method according to an embodiment of the present application. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. Electronic devices may also represent various forms of mobile devices, such as personal digital assistants, cellular telephones, smartphones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions are meant to be examples only, and are not meant to limit the implementations of the application described and/or claimed herein.
As shown in fig. 13, the electronic device includes: one or more processors 1301, a memory 1302, and interfaces for connecting the components, including a high-speed interface and a low-speed interface. The components are interconnected by different buses and may be mounted on a common motherboard or in other manners as desired. The processor may process instructions executed within the electronic device, including instructions stored in or on the memory, to display graphical information of a GUI on an external input/output device (such as a display device coupled to the interface). In other embodiments, multiple processors and/or multiple buses may be used together with multiple memories, if desired. Likewise, multiple electronic devices may be connected, with each device providing part of the necessary operations (e.g., as a server array, a set of blade servers, or a multiprocessor system). One processor 1301 is taken as an example in fig. 13.
The memory 1302 is the non-transitory computer-readable storage medium provided herein. The memory stores instructions executable by at least one processor, so as to cause the at least one processor to perform the sample data acquisition method provided herein. The non-transitory computer-readable storage medium of the present application stores computer instructions for causing a computer to execute the sample data acquisition method provided by the present application.
The memory 1302, as a non-transitory computer-readable storage medium, may be used to store non-transitory software programs, non-transitory computer-executable programs and modules, such as the program instructions/modules corresponding to the sample data acquisition method in the embodiments of the present application (e.g., the acquisition module 1201 and the processing module 1202 shown in fig. 12). By running the non-transitory software programs, instructions and modules stored in the memory 1302, the processor 1301 executes various functional applications and data processing of the server, that is, implements the sample data acquisition method in the above method embodiments.
The memory 1302 may include a storage program area and a storage data area, wherein the storage program area may store an operating system and an application program required by at least one function, and the storage data area may store data created according to the use of the electronic device of the sample data acquisition method, and the like. In addition, the memory 1302 may include high-speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, a flash memory device, or another non-transitory solid-state storage device. In some embodiments, the memory 1302 may optionally include memories remotely located with respect to the processor 1301, and these remote memories may be connected to the electronic device of the sample data acquisition method via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The electronic device of the sample data acquisition method may further include an input device 1303 and an output device 1304. The processor 1301, the memory 1302, the input device 1303 and the output device 1304 may be connected by a bus or in other manners; in fig. 13, connection by a bus is taken as an example.
The input device 1303 may receive input numeric or character information and generate key signal inputs related to user settings and function control of the electronic device of the sample data acquisition method; examples include a touch screen, a keypad, a mouse, a trackpad, a touch pad, an indicator stick, one or more mouse buttons, a trackball, and a joystick. The output device 1304 may include a display device, auxiliary lighting devices (e.g., LEDs), haptic feedback devices (e.g., a vibration motor), and the like. The display device may include, but is not limited to, a liquid crystal display (LCD), a light-emitting diode (LED) display, and a plasma display. In some implementations, the display device may be a touch screen.
Various implementations of the systems and techniques described here can be realized in digital electronic circuitry, integrated circuitry, application-specific integrated circuits (ASICs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include being implemented in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be a special-purpose or general-purpose programmable processor and may receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
These computer programs (also referred to as programs, software, software applications, or code) include machine instructions for a programmable processor, and may be implemented in a high-level procedural and/or object-oriented programming language, and/or in assembly/machine language. As used herein, the terms "machine-readable medium" and "computer-readable medium" refer to any computer program product, apparatus, and/or device (e.g., magnetic disks, optical disks, memories, programmable logic devices (PLDs)) used to provide machine instructions and/or data to a programmable processor, including a machine-readable medium that receives machine instructions as machine-readable signals. The term "machine-readable signal" refers to any signal used to provide machine instructions and/or data to a programmable processor.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and pointing device (e.g., a mouse or trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a background component (e.g., a data server), or a computing system that includes a middleware component (e.g., an application server), or a computing system that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or a computing system that includes any combination of such background, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include local area networks (LANs), wide area networks (WANs), and the internet.
The computer system may include a client and a server. The client and server are typically remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
According to the technical solution of the embodiments of the present application, when sample data is acquired, a background image and a background annotation image annotated with attribute information of the background are acquired respectively; the 3D model of the target object and the background image are rendered to obtain an augmented reality image; and the 3D model of the target object is rendered to obtain a target object annotation image annotated with attribute information of the target object. The obtained augmented reality image, background annotation image and target object annotation image constitute the sample data, including the target object, that needs to be acquired. It can be seen that, unlike the prior art, the embodiments of the present application generate sample data including the target object from a background image and a 3D model of the target object based on augmented reality technology. A large amount of rare-sample data that cannot be collected in the prior art can therefore be generated in a short time, which reduces the acquisition time and annotation time of the sample data and improves acquisition efficiency; and, compared with sample data obtained by directly compositing an object photo with a background image, which tends to look unnatural, the accuracy of the acquired sample data is improved.
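To tie the pieces together, a minimal end-to-end sketch of the scheme summarized above, reusing the hypothetical helpers from the earlier sketches; every name and interface here is an assumption made for illustration, not something defined by the patent:

```python
import numpy as np

def build_sample(background, background_label, model_view_rgba,
                 x, y, angle_deg, illumination, haze, class_id):
    """Produce one synthetic training sample: the augmented reality
    image, its augmented reality annotation image, and bounding-box
    annotation data for the target object."""
    obj = match_environment(model_view_rgba, illumination, haze)
    ar_image, object_mask = render_with_model_params(background, obj,
                                                     x, y, angle_deg)
    object_label = object_mask.astype(np.int32) * class_id
    # does the target object cover any annotated background pixels?
    blocked = bool(((object_mask > 0) & (background_label > 0)).any())
    ar_label = synthesize_annotation(background_label, object_label,
                                     object_mask, blocked)
    bbox = bbox_from_visibility_mask(object_mask > 0)
    return ar_image, ar_label, bbox
```

Each returned triple then maps onto one use of the sample data: the augmented reality image as model input, the annotation image for segmentation training, and the bounding box for detection training.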
It should be appreciated that steps may be reordered, added, or deleted using the various forms of flows shown above. For example, the steps described in the present application may be performed in parallel, sequentially, or in a different order, as long as the desired results of the technical solutions disclosed in the present application can be achieved; no limitation is imposed herein.
The above embodiments do not limit the scope of the application. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations and alternatives are possible, depending on design requirements and other factors. Any modifications, equivalent substitutions and improvements made within the spirit and principles of the present application are intended to be included within the scope of the present application.

Claims (12)

1. A method for obtaining sample data, comprising:
respectively acquiring background images and background labeling images in different scenes; wherein the background labeling image is labeled with attribute information of the background;
acquiring key information affecting a rendering effect; the key information comprises model parameters of a 3D model of the target object, wherein the model parameters comprise coordinate system parameters and/or rotation angles;
rendering the 3D model of the target object, the key information and the background image to obtain an augmented reality image;
rendering the 3D model of the target object and the model parameters to obtain a target object annotation image; the target object annotation image is marked with attribute information of the target object, and sample data comprises the augmented reality image, the background annotation image and the target object annotation image;
the method further comprises the steps of:
judging whether an image is blocked in the augmented reality image or not;
if no image is blocked in the augmented reality image, synthesizing the background labeling image and the target object labeling image to obtain an augmented reality labeling image corresponding to the augmented reality image;
if the image is blocked in the augmented reality image, deleting attribute information of the blocked image in the background labeling image to obtain a new background labeling image; synthesizing the new background annotation image and the target object annotation image to obtain an augmented reality annotation image corresponding to the augmented reality image; the augmented reality annotation image is marked with attribute information of a background and attribute information of a target object in the augmented reality image.
2. The method according to claim 1, wherein
the key information also comprises at least one of shooting parameters when the background image is acquired and environment parameters when the background image is acquired.
3. The method according to claim 2, wherein
if the key information comprises shooting parameters used when the background image is acquired, the shooting parameters comprise a shooting focal length and/or a mechanical transformation coefficient.
4. The method according to claim 2, wherein
if the key information includes environmental parameters when the background image is acquired, the environmental parameters include illumination intensity and/or air quality.
5. The method according to claim 1 or 2, characterized in that the method further comprises:
acquiring a 3D model set;
and searching the 3D model of the target object in the 3D model set.
6. An acquisition apparatus for sample data, comprising:
the acquisition module is used for respectively acquiring background images and background labeling images in different scenes; wherein the background labeling image is labeled with attribute information of the background;
the processing module is used for acquiring key information affecting the rendering effect; the key information comprises model parameters of a 3D model of the target object, wherein the model parameters comprise coordinate system parameters and/or rotation angles;
rendering the 3D model of the target object, the key information and the background image to obtain an augmented reality image;
rendering the 3D model of the target object and the model parameters to obtain a target object annotation image; the target object annotation image is marked with attribute information of the target object, and sample data comprises the augmented reality image, the background annotation image and the target object annotation image;
the processing module is further used for judging whether an image is blocked in the augmented reality image or not; if no image is blocked in the augmented reality image, synthesizing the background labeling image and the target object labeling image to obtain an augmented reality labeling image corresponding to the augmented reality image;
if the image is blocked in the augmented reality image, deleting attribute information of the blocked image in the background labeling image to obtain a new background labeling image; synthesizing the new background annotation image and the target object annotation image to obtain an augmented reality annotation image corresponding to the augmented reality image; the augmented reality annotation image is marked with attribute information of a background and attribute information of a target object in the augmented reality image.
7. The apparatus according to claim 6, wherein
the key information also comprises at least one of shooting parameters when the background image is acquired and environment parameters when the background image is acquired.
8. The apparatus according to claim 7, wherein
if the key information comprises shooting parameters used when the background image is acquired, the shooting parameters comprise a shooting focal length and/or a mechanical transformation coefficient.
9. The apparatus according to claim 7, wherein
if the key information includes environmental parameters when the background image is acquired, the environmental parameters include illumination intensity and/or air quality.
10. The apparatus according to claim 6 or 7, wherein
the acquisition module is specifically used for acquiring a 3D model set; and searching the 3D model of the target object in the 3D model set.
11. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of acquiring sample data according to any one of the preceding claims 1-5.
12. A non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform the method for acquiring sample data according to any one of claims 1-5.
CN202010192073.0A 2020-03-18 2020-03-18 Sample data acquisition method and device and electronic equipment Active CN111325984B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010192073.0A CN111325984B (en) 2020-03-18 2020-03-18 Sample data acquisition method and device and electronic equipment

Publications (2)

Publication Number Publication Date
CN111325984A CN111325984A (en) 2020-06-23
CN111325984B true CN111325984B (en) 2023-05-05

Family

ID=71167674

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010192073.0A Active CN111325984B (en) 2020-03-18 2020-03-18 Sample data acquisition method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN111325984B (en)

Family Cites Families (25)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6864888B1 (en) * 1999-02-25 2005-03-08 Lockheed Martin Corporation Variable acuity rendering for a graphic image processing system
US6574360B1 (en) * 1999-07-23 2003-06-03 International Business Machines Corp. Accelerated occlusion culling using directional discretized occluders and system therefore
CN101976341B (en) * 2010-08-27 2013-08-07 中国科学院自动化研究所 Method for detecting position, posture, and three-dimensional profile of vehicle from traffic images
CN102163340A (en) * 2011-04-18 2011-08-24 宁波万里电子科技有限公司 Method for labeling three-dimensional (3D) dynamic geometric figure data information in computer system
CN103489214A (en) * 2013-09-10 2014-01-01 北京邮电大学 Virtual reality occlusion handling method, based on virtual model pretreatment, in augmented reality system
US9704055B2 (en) * 2013-11-07 2017-07-11 Autodesk, Inc. Occlusion render mechanism for point clouds
CN107025642B (en) * 2016-01-27 2018-06-22 百度在线网络技术(北京)有限公司 Vehicle's contour detection method and device based on point cloud data
GB2551396B (en) * 2016-06-17 2018-10-10 Imagination Tech Ltd Augmented reality occlusion
CN107689073A (en) * 2016-08-05 2018-02-13 阿里巴巴集团控股有限公司 The generation method of image set, device and image recognition model training method, system
CN106803286A (en) * 2017-01-17 2017-06-06 湖南优象科技有限公司 Mutual occlusion real-time processing method based on multi-view image
JP6828587B2 (en) * 2017-05-22 2021-02-10 トヨタ自動車株式会社 Image processing system, image processing method, information processing device and recording medium
US10692289B2 (en) * 2017-11-22 2020-06-23 Google Llc Positional recognition for augmented reality environment
CN108564103A (en) * 2018-01-09 2018-09-21 众安信息技术服务有限公司 Data processing method and device
US10867214B2 (en) * 2018-02-14 2020-12-15 Nvidia Corporation Generation of synthetic images for training a neural network model
EP4254349A3 (en) * 2018-07-02 2023-12-06 MasterCard International Incorporated Methods for generating a dataset of corresponding images for machine vision learning
CN109155078B (en) * 2018-08-01 2023-03-31 达闼机器人股份有限公司 Method and device for generating set of sample images, electronic equipment and storage medium
CN109166170A (en) * 2018-08-21 2019-01-08 百度在线网络技术(北京)有限公司 Method and apparatus for rendering augmented reality scene
CN109145489B (en) * 2018-09-07 2020-01-17 百度在线网络技术(北京)有限公司 Obstacle distribution simulation method and device based on probability chart and terminal
CN109635853A (en) * 2018-11-26 2019-04-16 深圳市玛尔仕文化科技有限公司 The method for automatically generating artificial intelligence training sample based on computer graphics techniques
CN109522866A (en) * 2018-11-29 2019-03-26 宁波视睿迪光电有限公司 Naked eye 3D rendering processing method, device and equipment
CN110084304B (en) * 2019-04-28 2021-04-30 北京理工大学 Target detection method based on synthetic data set
CN110288019A (en) * 2019-06-21 2019-09-27 北京百度网讯科技有限公司 Image labeling method, device and storage medium
CN110378336A (en) * 2019-06-24 2019-10-25 南方电网科学研究院有限责任公司 Semantic class mask method, device and the storage medium of target object in training sample
CN110490960B (en) * 2019-07-11 2023-04-07 创新先进技术有限公司 Synthetic image generation method and device
CN110428388B (en) * 2019-07-11 2023-08-08 创新先进技术有限公司 Image data generation method and device

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20130019485A (en) * 2011-08-17 2013-02-27 (주)인사이트앤드인스퍼레이션 Augmented reality system based on marker and object augment method thereof
JP2020008917A (en) * 2018-07-03 2020-01-16 株式会社Eidea Augmented reality display system, augmented reality display method, and computer program for augmented reality display
CN108898678A (en) * 2018-07-09 2018-11-27 百度在线网络技术(北京)有限公司 Augmented reality method and apparatus
CN110837764A (en) * 2018-08-17 2020-02-25 广东虚拟现实科技有限公司 Image processing method and device, electronic equipment and visual interaction system
CN109377552A (en) * 2018-10-19 2019-02-22 珠海金山网络游戏科技有限公司 Image occlusion test method, apparatus calculates equipment and storage medium
CN110619674A (en) * 2019-08-15 2019-12-27 重庆特斯联智慧科技股份有限公司 Three-dimensional augmented reality equipment and method for accident and alarm scene restoration

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Xu Weipeng; Wang Yongtian; Liu Yue; Weng Dongdong. A survey of virtual-real occlusion handling in augmented reality. Journal of Computer-Aided Design & Computer Graphics, 2013, No. 11, full text. *
Li Dacheng; Liu Na. Design and management of virtual tags based on augmented reality cameras. Modern Computer (Professional Edition), 2018, No. 25, full text. *

Also Published As

Publication number Publication date
CN111325984A (en) 2020-06-23

Similar Documents

Publication Publication Date Title
US20200302241A1 (en) Techniques for training machine learning
KR20210097762A (en) Image processing method, apparatus and device, and storage medium
US20210383096A1 (en) Techniques for training machine learning
CN111860167B (en) Face fusion model acquisition method, face fusion model acquisition device and storage medium
CN111739005B (en) Image detection method, device, electronic equipment and storage medium
CN112132829A (en) Vehicle information detection method and device, electronic equipment and storage medium
CN111753961B (en) Model training method and device, prediction method and device
CN112270669B (en) Human body 3D key point detection method, model training method and related devices
CN111832745B (en) Data augmentation method and device and electronic equipment
CN111722245B (en) Positioning method, positioning device and electronic equipment
CN111274974A (en) Positioning element detection method, device, equipment and medium
CN111324945B (en) Sensor scheme determining method, device, equipment and storage medium
CN112001248B (en) Active interaction method, device, electronic equipment and readable storage medium
CN111539347B (en) Method and device for detecting target
CN112581533B (en) Positioning method, positioning device, electronic equipment and storage medium
US20210374977A1 (en) Method for indoor localization and electronic device
CN111858996B (en) Indoor positioning method and device, electronic equipment and storage medium
CN111753739A (en) Object detection method, device, equipment and storage medium
CN110866504B (en) Method, device and equipment for acquiring annotation data
CN113378605B (en) Multi-source information fusion method and device, electronic equipment and storage medium
CN111597987B (en) Method, apparatus, device and storage medium for generating information
CN111597986B (en) Method, apparatus, device and storage medium for generating information
CN113129423B (en) Method and device for acquiring three-dimensional model of vehicle, electronic equipment and storage medium
CN115008454A (en) Robot online hand-eye calibration method based on multi-frame pseudo label data enhancement
CN111325984B (en) Sample data acquisition method and device and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20211022

Address after: 105 / F, building 1, No. 10, Shangdi 10th Street, Haidian District, Beijing 100085

Applicant after: Apollo Intelligent Technology (Beijing) Co.,Ltd.

Address before: 2 / F, *** building, 10 Shangdi 10th Street, Haidian District, Beijing 100085

Applicant before: BEIJING BAIDU NETCOM SCIENCE AND TECHNOLOGY Co.,Ltd.

GR01 Patent grant