CN114782620B - Aircraft image generation method and device based on three-dimensional rendering - Google Patents

Aircraft image generation method and device based on three-dimensional rendering

Info

Publication number
CN114782620B
CN114782620B (application CN202210211000.0A)
Authority
CN
China
Prior art keywords
dimensional
airplane
image
aircraft
environment
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210211000.0A
Other languages
Chinese (zh)
Other versions
CN114782620A (en)
Inventor
周杰
李健
陈宏昊
邓磊
陈宝华
Current Assignee
Tsinghua University
Original Assignee
Tsinghua University
Priority date
Filing date
Publication date
Application filed by Tsinghua University
Priority to CN202210211000.0A
Publication of CN114782620A
Application granted
Publication of CN114782620B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/005 General purpose rendering architectures

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Graphics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Processing Or Creating Images (AREA)
  • Image Generation (AREA)

Abstract

The invention discloses an aircraft image generation method and device based on three-dimensional rendering. The method comprises: generating, through three-dimensional modeling, an aircraft three-dimensional model with the same shape as a real aircraft; acquiring aircraft pictures matched with the aircraft three-dimensional model; generating texture maps from the aircraft pictures and loading them onto the aircraft three-dimensional model to obtain a textured aircraft three-dimensional model; rendering the textured aircraft three-dimensional model into an aircraft two-dimensional image set, which comprises a plurality of aircraft two-dimensional images rendered under different environment parameters and shooting parameters, together with annotation information of the aircraft features in each image; and performing image enhancement based on a deep neural network on the aircraft two-dimensional image set. The invention greatly improves the quantity and richness of available aircraft pictures and reduces the required manual annotation cost.

Description

Aircraft image generation method and device based on three-dimensional rendering
Technical Field
The present invention relates to the field of image rendering, and in particular, to a method and apparatus for generating an aircraft image based on three-dimensional rendering.
Background
With the continuous progress of the aviation industry, airport intelligence has become a trend in modern airport development. Aircraft are among the most important service objects of an airport, and intelligent methods for aircraft-side services generally use aircraft pictures as their information source, for example identifying the aircraft airframe from pictures of the aircraft surface, or guiding a boarding bridge to dock automatically onto a door using pictures of the aircraft door. These intelligent methods typically require a large number of pictures of specific aircraft models, with accurate manual annotation, for model training, testing, and optimization. However, during project deployment such pictures and annotations are often difficult to obtain, mainly for the following three reasons:
First, due to the confidentiality and security requirements of an airport, shooting equipment cannot simply be set up at the airport to collect images. Although a large number of aircraft images can be found on the Internet, they differ considerably from the actual working scenes and cannot be used for model training and testing. Fig. 1 shows the type of picture required when guiding a boarding bridge to dock using aircraft door pictures; pictures of this kind are not available on the Internet;
Second, both the texture of the aircraft and the shooting environment are complex. On the one hand, the aircraft's own texture varies, for example different fuselage colors and different liveries. On the other hand, the shooting environment is complex, and severe weather degrades picture quality. Under these two factors, even with access to an airport it is difficult to collect pictures covering all the complex situations, which limits the generalization of the model;
Third, manual annotation is costly and requires a long work cycle.
There is no off-the-shelf method in industry for generating aircraft pictures, but there have been some attempts in academia to generate general-purpose images. These methods are typically implemented with generative adversarial networks (GANs): a random vector, a set of words, a mask pattern, or similar input is fed to the network, a large number of real images serve as training data, and a neural network model is obtained by gradient-descent optimization and then used to generate images.
The GAN-based methods from academia have three main disadvantages:
First, such a method requires a large number of aircraft images as training data, which are themselves difficult to obtain;
Second, the method cannot control the shooting angle of the generated picture, so the generated pictures generally differ considerably from the actual scene;
Third, the method cannot control the shape of the generated aircraft. As an artificial object, an aircraft satisfies certain geometric constraints, such as a streamlined fuselage and rounded-rectangle cabin doors. GAN-based methods are designed for general pictures, such as natural scenes of mountains, clouds, and flowers, and cannot satisfy the geometric constraints of aircraft shape.
Disclosure of Invention
The present invention aims to solve at least one of the technical problems in the related art to some extent.
Therefore, one object of the invention is to solve the problems, encountered in promoting intelligent airport services, of available aircraft pictures being difficult to acquire, of low picture richness, and of high manual annotation cost, by providing an aircraft image generation method based on three-dimensional rendering.
Another object of the present invention is to propose an aircraft image generating device based on three-dimensional rendering.
In order to achieve the above object, according to one aspect of the present invention, an aircraft image generating method based on three-dimensional rendering is provided, which includes the following steps:
Generating an airplane three-dimensional model with the same shape as an airplane through three-dimensional modeling, acquiring an airplane picture matched with the airplane three-dimensional model, generating a texture map according to the airplane picture, and loading the texture map onto the airplane three-dimensional model to obtain the airplane three-dimensional model with textures;
rendering the textured aircraft three-dimensional model into an aircraft two-dimensional image set, wherein the aircraft two-dimensional image set comprises a plurality of aircraft two-dimensional images rendered under different environment parameters and shooting parameters, and labeling information of aircraft features corresponding to each aircraft two-dimensional image;
and performing image enhancement based on a deep neural network according to the aircraft two-dimensional image set.
According to the aircraft image generation method based on three-dimensional rendering of the embodiment of the invention, an aircraft three-dimensional model with the same shape as a real aircraft is generated through three-dimensional modeling; aircraft pictures matched with the model are acquired; texture maps are generated from the pictures and loaded onto the model to obtain a textured aircraft three-dimensional model; the textured model is rendered into an aircraft two-dimensional image set comprising a plurality of aircraft two-dimensional images rendered under different environment parameters and shooting parameters, together with annotation information of the aircraft features in each image; and image enhancement based on a deep neural network is performed on the image set. The invention greatly improves the quantity and richness of available aircraft pictures and reduces the required manual annotation cost.
In addition, the aircraft image generation method based on three-dimensional rendering according to the above embodiment of the present invention further includes:
Further, the rendering the textured three-dimensional model of the aircraft as a set of two-dimensional images of the aircraft includes:
Finely annotating the aircraft pictures matched with the aircraft three-dimensional model according to different tasks, wherein the annotated content comprises airframe information annotated for the aircraft airframe recognition task and aircraft door seams annotated for the boarding-bridge guided-alignment task;
setting the environment parameters, and setting the shooting parameters in batches through scripts according to different tasks;
and rendering and generating the two-dimensional image of the airplane according to the airplane annotation picture, the environment parameter and the shooting parameter, and generating an annotation file corresponding to the two-dimensional image of the airplane.
Further, the performing image enhancement based on a deep neural network according to the aircraft two-dimensional image set comprises:
Constructing a training data set, wherein the training data set comprises the aircraft two-dimensional image set and style image sets under different shooting conditions;
Inputting the aircraft two-dimensional images in the aircraft two-dimensional image set into a feature extraction network one by one, outputting corresponding content features, inputting the style images in the style image set into the feature extraction network one by one, and outputting corresponding environment features;
and obtaining a single white noise image, taking the white noise image as a base image each time, merging the content features of one aircraft two-dimensional image and the environmental features of one style image in pairs into the white noise image, and obtaining an aircraft two-dimensional enhanced image set.
Further, the pair-wise combining the content features of the aircraft two-dimensional image and the environmental features of the style image into the white noise image includes:
Inputting the white noise image into the feature extraction network to obtain the content features and the environment features of the white noise image;
Obtaining a content characteristic difference value from the content characteristics of the white noise image and any one of the two-dimensional images of the airplane, obtaining an environment characteristic difference value from the environment characteristics of the white noise image and any one of the style images, and constructing an error function according to the content characteristic difference value and the environment characteristic difference value;
And iteratively optimizing the error function until the white noise image has the selected content characteristics of the two-dimensional aircraft image and the selected environmental characteristics of the style image.
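Under assumed notation (the patent itself gives no symbols), the error function built from the two feature differences above might be written as:

```latex
% All symbols here are assumptions, not taken from the patent:
% x       - the optimised image, initialised as white noise
% I_c     - the selected aircraft two-dimensional image (content source)
% I_s     - the selected style image (environment source)
% F_c, F_s - the content and environment feature extractors
% \alpha, \beta - weighting coefficients for the two difference terms
L(x) = \alpha \,\bigl\lVert F_c(x) - F_c(I_c) \bigr\rVert^2
     + \beta  \,\bigl\lVert F_s(x) - F_s(I_s) \bigr\rVert^2
```

Iterative optimization then decreases L(x) until x carries the content features of the aircraft image and the environmental features of the style image.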
To achieve the above object, another aspect of the present invention provides an aircraft image generating apparatus based on three-dimensional rendering, including:
the model generation module is used for generating an airplane three-dimensional model with the same shape as the airplane through three-dimensional modeling, acquiring an airplane picture matched with the airplane three-dimensional model, generating a texture map according to the airplane picture, and loading the texture map onto the airplane three-dimensional model to obtain a textured airplane three-dimensional model;
the image rendering module is used for rendering the textured airplane three-dimensional model into an airplane two-dimensional image set, wherein the airplane two-dimensional image set comprises a plurality of airplane two-dimensional images rendered under different environment parameters and shooting parameters, and the airplane two-dimensional images correspond to the labeling information of airplane features;
And the image enhancement module is used for performing image enhancement based on a deep neural network according to the aircraft two-dimensional image set.
According to the aircraft image generating device based on three-dimensional rendering of the embodiment of the invention, an aircraft three-dimensional model with the same shape as a real aircraft is generated through three-dimensional modeling; aircraft pictures matched with the model are acquired; texture maps are generated from the pictures and loaded onto the model to obtain a textured aircraft three-dimensional model; the textured model is rendered into an aircraft two-dimensional image set comprising a plurality of aircraft two-dimensional images rendered under different environment parameters and shooting parameters, together with annotation information of the aircraft features in each image; and image enhancement based on a deep neural network is performed on the image set. The invention greatly improves the quantity and richness of available aircraft pictures and reduces the required manual annotation cost.
The invention has the beneficial effects that:
The method greatly reduces the time and economic cost required by manual marking, greatly improves the number of the aircraft pictures and the environmental richness, and solves the problem that the picture data is difficult to obtain in severe environments such as heavy rain, heavy snow, heavy fog and the like.
Additional aspects and advantages of the invention will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the invention.
Drawings
The foregoing and/or additional aspects and advantages of the invention will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings, in which:
fig. 1 is a photograph of the kind required in a real scenario when guiding a boarding bridge to dock using aircraft door pictures;
FIG. 2 is a flow chart of a method of generating an aircraft image based on three-dimensional rendering according to an embodiment of the invention;
FIG. 3 is a schematic illustration of a three-dimensional model of an aircraft according to an embodiment of the invention;
FIG. 4 is a schematic illustration of an aircraft three-dimensional model after texture mapping in accordance with an embodiment of the present invention;
FIG. 5 is a schematic illustration of annotating aircraft door seams in accordance with an embodiment of the present invention;
fig. 6 is a schematic structural view of an aircraft image generating apparatus based on three-dimensional rendering according to an embodiment of the present invention.
Detailed Description
It should be noted that, without conflict, the embodiments of the present application and features of the embodiments may be combined with each other. The application will be described in detail below with reference to the drawings in connection with embodiments.
In order that those skilled in the art will better understand the present invention, a technical solution in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in which it is apparent that the described embodiments are only some embodiments of the present invention, not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the present invention without making any inventive effort, shall fall within the scope of the present invention.
The method and apparatus for generating an aircraft image based on three-dimensional rendering according to the embodiments of the present invention will be described below with reference to the accompanying drawings, and first, the method for generating an aircraft image based on three-dimensional rendering according to the embodiments of the present invention will be described with reference to the accompanying drawings.
FIG. 2 is a flow chart of a method of generating an aircraft image based on three-dimensional rendering in accordance with one embodiment of the invention.
As shown in fig. 2, the three-dimensional rendering-based aircraft image generation method includes the steps of:
Step S1, generating an airplane three-dimensional model with the same shape as the airplane through three-dimensional modeling, acquiring an airplane picture matched with the airplane three-dimensional model, generating a texture map according to the airplane picture, and loading the texture map onto the airplane three-dimensional model to obtain the airplane three-dimensional model with textures.
Specifically, first, a three-dimensional modeling method is used to generate three-dimensional models of various aircraft; these models have the same shape as the real aircraft but uncoated surfaces, as shown in fig. 3. Then a large number of aircraft pictures are obtained from the Internet, and a texture map is generated from each aircraft picture using a texture-mapping method. Finally, the aircraft texture map is loaded onto the aircraft model, and the physical properties of each part, such as metalness and roughness, are set. Fig. 4 shows the result, taking the generation of a picture of the aircraft door area as an example.
It should be noted that the main purpose of step S1 is to generate three-dimensional aircraft models with different airframes and different textures. The untextured three-dimensional models provide the different airframes, reflected in the shape of the identification markings, the shape of the door step, the position of the side windows, and so on; the texture maps generated by the texture-mapping method provide the different textures, reflected in the different colors and patterns on the fuselage, such as characters and liveries.
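The data assembled in step S1 can be sketched as follows. The patent names no specific modeling tool, so every class, field, part name, and numeric value below (Material, TexturedAircraftModel, "fuselage", "door_seal", the metalness and roughness numbers) is illustrative; the sketch only shows the kind of information an untextured mesh, a photo-derived texture map, and per-part physical properties would carry.

```python
from dataclasses import dataclass, field

@dataclass
class Material:
    # Physical surface properties set per part in step S1 (example values)
    metalness: float = 0.8
    roughness: float = 0.3

@dataclass
class TexturedAircraftModel:
    """An untextured 3D mesh plus a texture map generated from one real photo."""
    model_name: str                      # e.g. an aircraft type identifier
    mesh_path: str                       # path to the untextured 3D mesh
    texture_path: str                    # texture map derived from one photo
    part_materials: dict = field(default_factory=dict)  # part name -> Material

def load_texture(model_name: str, mesh_path: str,
                 texture_path: str) -> TexturedAircraftModel:
    """Attach a texture map and per-part materials to a bare mesh (step S1)."""
    model = TexturedAircraftModel(model_name, mesh_path, texture_path)
    # Fuselage is mostly painted metal; a door seal is rubber-like, so rougher.
    model.part_materials["fuselage"] = Material(metalness=0.9, roughness=0.2)
    model.part_materials["door_seal"] = Material(metalness=0.0, roughness=0.9)
    return model
```

One such object would be built per aircraft photo collected from the Internet, giving one textured model per photo as the text describes.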
And S2, rendering the textured airplane three-dimensional model into an airplane two-dimensional image set, wherein the airplane two-dimensional image set comprises a plurality of airplane two-dimensional images rendered under different environment parameters and shooting parameters, and labeling information of airplane features corresponding to each airplane two-dimensional image.
It should be appreciated that after the textured three-dimensional model is obtained, it needs to be rendered as a two-dimensional picture for use by the algorithm, while in order to reduce the cost of manual labeling, corresponding labeling information needs to be generated simultaneously.
Specifically, the aircraft pictures obtained in step S1 are finely annotated according to the task: for aircraft airframe recognition, the corresponding airline name, aircraft registration number, and other airframe information are annotated; for boarding-bridge guided docking, the corresponding aircraft door seam is annotated. The annotation result for a door seam is shown in fig. 5, where the highlighted region is the annotated door seam. Then conditions such as ambient light are set manually in the rendering environment, while camera parameters, such as the camera's position relative to the aircraft and its shooting angle, are set in batches by script according to the task. In this way a large number of aircraft two-dimensional pictures are rendered, and the corresponding annotation files are generated automatically at the same time.
The purpose of step S2 is to render the generated textured three-dimensional models into two-dimensional images. For each aircraft image collected from the Internet, a textured three-dimensional model can be generated; each model needs to be annotated only once, and by setting different environment and camera parameters a large number of aircraft two-dimensional images can be generated from that one picture, with the corresponding annotation information generated at the same time.
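The batch setting of shooting parameters by script, described above, might look like the following sketch. The parameter names, value ranges, and file-naming scheme are assumptions for illustration, not taken from the patent; the point is that one textured model multiplied by a grid of camera and environment settings yields many renders, each paired with an annotation file.

```python
import itertools

def batch_render_configs(distances_m, azimuths_deg, elevations_deg, light_levels):
    """Enumerate camera/environment settings for batch rendering (step S2).
    Each config would drive one render plus one auto-generated annotation file."""
    configs = []
    for d, az, el, light in itertools.product(
            distances_m, azimuths_deg, elevations_deg, light_levels):
        configs.append({
            "camera": {"distance_m": d, "azimuth_deg": az, "elevation_deg": el},
            "environment": {"ambient_light": light},
            "annotation_file": f"render_d{d}_a{az}_e{el}_{light}.json",
        })
    return configs

cfgs = batch_render_configs([10, 20], [0, 45, 90], [0, 15], ["day", "dusk"])
# 2 * 3 * 2 * 2 = 24 configurations from a single textured model
```

Because the door-seam annotation lives on the 3D model, each configuration can project it into the rendered view, which is why the annotation files come for free.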
And S3, performing image enhancement based on the deep neural network according to the plane two-dimensional image set.
It should be understood that only some illumination conditions can be set in the three-dimensional environment; complex real-world shooting conditions, such as different weather (cloud, rain, fog), different shooting backgrounds (such as airport ground or sky), and shadows, cannot be simulated, so a further image enhancement step is required.
Specifically, image enhancement based on a deep neural network is performed according to the aircraft two-dimensional image set in the following three steps:
(1) Collecting style images of various shooting conditions on a network, and constructing a training data set;
(2) Pre-train a feature extraction network, which is used both to extract the content features of an aircraft image, such as the shape and size of the door, and to extract the environmental features of a style image, such as rain or snow; both kinds of features are stored;
(3) Using a white noise image as input, extract its content features and environmental features with the feature extraction network of step (2); take the difference between its content features and those of the aircraft image, and the difference between its environmental features and those of the style image; and construct an error function from these two differences. The error function is iteratively optimized so that the input white noise image is gradually changed to carry the content features of the aircraft image and the environmental features of the style image.
It should be noted that the purpose of step S3 is to simulate complex real-world shooting environments. Because complex conditions such as rain and snow are difficult to simulate in the three-dimensional rendering environment, this step applies image enhancement to the rendered images, which effectively improves the richness of the data set and in turn the performance of the corresponding algorithms.
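The three steps above can be sketched numerically. A real implementation would use a pre-trained deep feature extraction network; here, as a deliberately simplified stand-in, the "content features" are the raw pixel values and the "environment feature" is the mean intensity, so the iterative optimization of the error function can be shown with plain gradient descent. All function names, weights, and step counts are illustrative.

```python
import random

def content_features(img):
    # Stand-in for deep "content" features: here, the raw pixels themselves.
    return img

def style_features(img):
    # Stand-in for deep "style/environment" features: here, the mean intensity.
    return sum(img) / len(img)

def loss(x, content_img, style_img, style_weight=5.0):
    """Error function: content difference plus weighted environment difference."""
    c = content_features(content_img)
    s = style_features(style_img)
    content_err = sum((xi - ci) ** 2 for xi, ci in zip(x, c))
    style_err = (style_features(x) - s) ** 2
    return content_err + style_weight * style_err

def transfer(content_img, style_img, steps=200, lr=0.01, style_weight=5.0):
    """Iteratively optimise a white-noise image so its content features match
    the aircraft image and its style features match the style image (step S3)."""
    random.seed(0)
    x = [random.random() for _ in content_img]   # white-noise starting image
    c = content_features(content_img)
    s = style_features(style_img)
    n = len(x)
    for _ in range(steps):
        mean_x = sum(x) / n
        # Hand-derived gradient of the combined content + style error function
        x = [xi - lr * (2 * (xi - ci) + style_weight * 2 * (mean_x - s) / n)
             for xi, ci in zip(x, c)]
    return x
```

Because the error function is a weighted sum, the optimised image settles between the aircraft image's content and the style image's overall tone, which mirrors the behaviour described in step (3).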
In summary, an aircraft three-dimensional model with the same shape as a real aircraft is generated through three-dimensional modeling; aircraft pictures matched with the model are acquired; texture maps are generated from the pictures and loaded onto the model to obtain a textured aircraft three-dimensional model; the textured model is rendered into an aircraft two-dimensional image set comprising a plurality of aircraft two-dimensional images rendered under different environment parameters and shooting parameters, together with annotation information of the aircraft features in each image; and image enhancement based on a deep neural network is performed on the image set. The invention greatly improves the quantity and richness of available aircraft pictures and reduces the required manual annotation cost.
It should be noted that the aircraft image generation method based on three-dimensional rendering can be implemented in many ways; whatever the specific implementation, as long as it solves the problems of available aircraft pictures being difficult to acquire, low picture richness, and high manual annotation cost in intelligent airport services, it is a solution to the problems of the prior art and achieves the corresponding effects.
In order to implement the above embodiment, as shown in fig. 6, there is further provided an aircraft image generating apparatus 10 based on three-dimensional rendering, where the apparatus 10 includes: model generation module 100, image rendering module 200, image enhancement module 300.
The model generating module 100 is configured to generate an aircraft three-dimensional model with the same shape as the aircraft through three-dimensional modeling, obtain an aircraft picture matched with the aircraft three-dimensional model, generate a texture map according to the aircraft picture, and load the texture map onto the aircraft three-dimensional model to obtain a textured aircraft three-dimensional model;
The image rendering module 200 is configured to render the textured three-dimensional model of the aircraft into a two-dimensional image set of the aircraft, where the two-dimensional image set of the aircraft includes a plurality of two-dimensional images of the aircraft rendered under different environmental parameters and shooting parameters, and labeling information of aircraft features corresponding to each two-dimensional image of the aircraft;
the image enhancement module 300 is used for performing image enhancement based on a deep neural network according to the aircraft two-dimensional image set.
Specifically, the image rendering module 200 includes:
The annotation module is used for finely annotating the aircraft pictures matched with the aircraft three-dimensional model according to different tasks; the annotated content comprises the airframe information annotated for the aircraft airframe recognition task and the aircraft door seams annotated for the boarding-bridge guided-alignment task;
the setting module is used for setting environmental parameters and setting the shooting parameters in batches through scripts according to different tasks;
the generating module is used for rendering and generating an aircraft two-dimensional image according to the aircraft annotation picture, the environment parameters and the shooting parameters and generating an annotation file corresponding to the aircraft two-dimensional image.
Specifically, the image enhancement module 300 includes:
the building module is used for building a training data set, wherein the training data set comprises an airplane two-dimensional image set and style image sets with different shooting conditions;
The extraction module is used for inputting the two-dimensional images of the airplane in the two-dimensional image set into the feature extraction network one by one, outputting corresponding content features, inputting the style images in the style image set into the feature extraction network one by one, and outputting corresponding environment features;
The merging module is used for acquiring a single white noise image, taking the white noise image as a base image each time, merging the content features of one aircraft two-dimensional image and the environmental features of one style image in pairs into the white noise image, and obtaining an aircraft two-dimensional enhanced image set.
Specifically, the merging module includes:
the feature extraction module is used for inputting the white noise image into the feature extraction network to obtain the content features and the environment features of the white noise image;
The function construction module is used for obtaining a content characteristic difference value from the content characteristics of the white noise image and any aircraft two-dimensional image, obtaining an environment characteristic difference value from the environment characteristics of the white noise image and any style image, and constructing an error function according to the content characteristic difference value and the environment characteristic difference value;
and the iterative optimization module is used for iteratively optimizing the error function until the white noise image has the content features of the selected aircraft two-dimensional image and the environmental features of the selected style image.
According to the aircraft image generating device based on three-dimensional rendering of the embodiment of the invention, an aircraft three-dimensional model with the same shape as a real aircraft is generated through three-dimensional modeling; aircraft pictures matched with the model are acquired; texture maps are generated from the pictures and loaded onto the model to obtain a textured aircraft three-dimensional model; the textured model is rendered into an aircraft two-dimensional image set comprising a plurality of aircraft two-dimensional images rendered under different environment parameters and shooting parameters, together with annotation information of the aircraft features in each image; and image enhancement based on a deep neural network is performed on the image set. The invention greatly improves the quantity and richness of available aircraft pictures and reduces the required manual annotation cost.
It should be noted that the foregoing explanation of the embodiment of the method for generating an aircraft image based on three-dimensional rendering is also applicable to the apparatus for generating an aircraft image based on three-dimensional rendering of this embodiment, and will not be repeated here.
In one aspect, the present application provides a computer device comprising a memory, a processor, and a computer program stored on the memory; when the processor executes the computer program, the method provided by an aspect of the embodiments of the present application is implemented.
Furthermore, the present invention provides a non-transitory computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements a method provided by an aspect of an embodiment of the present invention.
Furthermore, the terms "first," "second," and the like are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined with "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present invention, "plurality" means at least two, for example two or three, unless specifically defined otherwise.
In the description of the present specification, a description referring to terms "one embodiment," "some embodiments," "examples," "specific examples," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present invention. In this specification, schematic representations of the above terms are not necessarily directed to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, the different embodiments or examples described in this specification and the features of the different embodiments or examples may be combined and combined by those skilled in the art without contradiction.
While embodiments of the present invention have been shown and described above, it will be understood that the above embodiments are illustrative and not to be construed as limiting the invention, and that changes, modifications, substitutions and variations may be made to the above embodiments by those of ordinary skill in the art within the scope of the invention.

Claims (8)

1. An aircraft image generation method based on three-dimensional rendering, comprising:
Generating an airplane three-dimensional model with the same shape as an airplane through three-dimensional modeling, acquiring an airplane picture matched with the airplane three-dimensional model, generating a texture map according to the airplane picture, and loading the texture map onto the airplane three-dimensional model to obtain the airplane three-dimensional model with textures;
Rendering the textured aircraft three-dimensional model into an aircraft two-dimensional image set, wherein the aircraft two-dimensional image set comprises a plurality of aircraft two-dimensional images rendered under different environment parameters and shooting parameters, and labeling information of aircraft features corresponding to each aircraft two-dimensional image;
Performing image enhancement based on a deep neural network according to the aircraft two-dimensional image set;
wherein the performing image enhancement based on the deep neural network according to the aircraft two-dimensional image set comprises the following steps:
constructing a training data set, wherein the training data set comprises the aircraft two-dimensional image set and style image sets captured under different shooting conditions;
Inputting the aircraft two-dimensional images in the aircraft two-dimensional image set into a feature extraction network one by one, outputting corresponding content features, inputting the style images in the style image set into the feature extraction network one by one, and outputting corresponding environment features;
and acquiring a single white noise image, taking the white noise image as a base image each time, merging the content characteristics of one aircraft two-dimensional image and the environmental characteristics of one style image into the white noise image in pairs, and obtaining an aircraft two-dimensional enhanced image set.
2. The method of claim 1, wherein the rendering the textured three-dimensional model of the aircraft as a set of two-dimensional images of the aircraft comprises:
Finely labeling the aircraft pictures matched with the aircraft three-dimensional model according to different tasks, wherein the labeled content comprises the type information labeled for the aircraft type identification task and the corresponding aircraft door gap labeled for the boarding bridge guided alignment task;
setting the environment parameters, and setting the shooting parameters in batches through scripts according to different tasks;
And rendering and generating the two-dimensional image of the airplane according to the airplane annotation picture, the environment parameter and the shooting parameter, and generating an annotation file corresponding to the two-dimensional image of the airplane.
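The script-driven, per-task batch setting of shooting parameters in claim 2 above could look like the following minimal sketch; the task names, angle ranges, and distances are assumptions for illustration, not values from the patent.

```python
# Hypothetical per-task shooting-parameter batches; all ranges are assumed.
def shooting_params_for_task(task):
    if task == "aircraft_type_identification":
        # Full-airframe views: orbit the aircraft at long distances.
        return [{"azimuth_deg": a, "distance_m": d, "fov_deg": 60}
                for a in range(0, 360, 30) for d in (80, 120, 160)]
    if task == "boarding_bridge_alignment":
        # Close-up views of the door region from the bridge approach side.
        return [{"azimuth_deg": a, "distance_m": d, "fov_deg": 40}
                for a in range(60, 121, 10) for d in (5, 10, 15)]
    raise ValueError(f"unknown task: {task}")

wide = shooting_params_for_task("aircraft_type_identification")
close = shooting_params_for_task("boarding_bridge_alignment")
```

Generating the parameter batches in code, rather than posing cameras by hand, is what lets one labeled model yield arbitrarily many task-specific views.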
3. The method of claim 1, wherein the merging the content features of the aircraft two-dimensional image and the environmental features of the style image into the white noise image in pairs comprises:
Inputting the white noise image into the feature extraction network to obtain the content features and the environment features of the white noise image;
computing a content characteristic difference value between the content characteristics of the white noise image and those of any one of the aircraft two-dimensional images, computing an environment characteristic difference value between the environment characteristics of the white noise image and those of any one of the style images, and constructing an error function from the content characteristic difference value and the environment characteristic difference value;
and iteratively optimizing the error function until the white noise image has the content characteristics of the selected aircraft two-dimensional image and the environmental characteristics of the selected style image.
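The error function and iterative optimization described in claim 3 correspond to classic neural style transfer: a content loss on extracted features plus a style (environment) loss on their Gram-matrix statistics, minimized by gradient descent starting from white noise. A minimal numpy sketch follows, with the feature extraction network replaced by a single fixed random linear layer purely for illustration — an assumption, not the patent's network.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 16 * 16                               # flattened image size
W = 0.1 * rng.standard_normal((64, D))    # stand-in "feature extraction network"

def features(img):
    return W @ img.ravel()                # content features

content_img = rng.random((16, 16))        # a selected aircraft 2D image
style_img = rng.random((16, 16))          # a selected style image
x = rng.random((16, 16))                  # the single white-noise base image

f_content = features(content_img)
m_style = features(style_img).reshape(4, -1)
g_style = m_style @ m_style.T             # Gram matrix = environment statistics

def loss_and_grad(x, style_weight=1e-3):
    f = features(x)
    dc = f - f_content                    # content characteristic difference
    m = f.reshape(4, -1)                  # treat features as 4 feature maps
    dg = m @ m.T - g_style                # environment characteristic difference
    loss = np.sum(dc**2) + style_weight * np.sum(dg**2)
    # Hand-derived gradient through the linear layer and the Gram matrix.
    grad_f = 2 * dc + style_weight * (4 * dg @ m).ravel()
    return loss, (W.T @ grad_f).reshape(x.shape)

loss0, _ = loss_and_grad(x)
for _ in range(200):                      # iterative optimization of the error
    loss, g = loss_and_grad(x)
    x -= 1e-3 * g
```

After optimization the white noise image carries both feature sets at once, which is why each content/style pairing yields one new enhanced sample.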
4. An aircraft image generation device based on three-dimensional rendering, comprising:
the model generation module is used for generating an airplane three-dimensional model with the same shape as the airplane through three-dimensional modeling, acquiring an airplane picture matched with the airplane three-dimensional model, generating a texture map according to the airplane picture, and loading the texture map onto the airplane three-dimensional model to obtain a textured airplane three-dimensional model;
the image rendering module is used for rendering the textured airplane three-dimensional model into an airplane two-dimensional image set, wherein the airplane two-dimensional image set comprises a plurality of airplane two-dimensional images rendered under different environment parameters and shooting parameters, and the airplane two-dimensional images correspond to the labeling information of airplane features;
the image enhancement module is used for performing image enhancement based on a deep neural network according to the aircraft two-dimensional image set;
wherein, the image enhancement module includes:
the building module is used for constructing a training data set, which comprises the aircraft two-dimensional image set and style image sets captured under different shooting conditions;
The extraction module is used for inputting the aircraft two-dimensional images in the aircraft two-dimensional image set into the feature extraction network one by one, outputting corresponding content features, inputting the style images in the style image set into the feature extraction network one by one, and outputting corresponding environment features;
and the merging module is used for acquiring a single white noise image, taking the white noise image as a base image each time, merging the content characteristics of one aircraft two-dimensional image and the environmental characteristics of one style image into the white noise image in pairs, and obtaining an aircraft two-dimensional enhanced image set.
5. The apparatus of claim 4, wherein the image rendering module comprises:
the labeling module is used for finely labeling the aircraft pictures matched with the aircraft three-dimensional model according to different tasks, wherein the labeled content comprises the type information labeled for the aircraft type identification task and the corresponding aircraft door gap labeled for the boarding bridge guided alignment task;
the setting module is used for setting the environment parameters and setting the shooting parameters in batches through scripts according to different tasks;
The generation module is used for rendering and generating the two-dimensional image of the airplane according to the airplane annotation picture, the environment parameter and the shooting parameter, and generating an annotation file corresponding to the two-dimensional image of the airplane.
6. The apparatus of claim 5, wherein the combining module comprises:
the feature extraction module is used for inputting the white noise image into the feature extraction network to obtain the content features and the environment features of the white noise image;
the function construction module is used for computing a content characteristic difference value between the content characteristics of the white noise image and those of any one of the aircraft two-dimensional images, computing an environment characteristic difference value between the environment characteristics of the white noise image and those of any one of the style images, and constructing an error function from the content characteristic difference value and the environment characteristic difference value;
and the iterative optimization module is used for iteratively optimizing the error function until the white noise image has the content characteristics of the selected aircraft two-dimensional image and the environmental characteristics of the selected style image.
7. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the method of any of claims 1-3 when executing the computer program.
8. A non-transitory computer readable storage medium, on which a computer program is stored, characterized in that the computer program, when executed by a processor, implements the method according to any of claims 1-3.
CN202210211000.0A 2022-03-04 2022-03-04 Aircraft image generation method and device based on three-dimensional rendering Active CN114782620B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210211000.0A CN114782620B (en) 2022-03-04 2022-03-04 Aircraft image generation method and device based on three-dimensional rendering

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210211000.0A CN114782620B (en) 2022-03-04 2022-03-04 Aircraft image generation method and device based on three-dimensional rendering

Publications (2)

Publication Number Publication Date
CN114782620A CN114782620A (en) 2022-07-22
CN114782620B true CN114782620B (en) 2024-06-18

Family

ID=82422959

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210211000.0A Active CN114782620B (en) 2022-03-04 2022-03-04 Aircraft image generation method and device based on three-dimensional rendering

Country Status (1)

Country Link
CN (1) CN114782620B (en)

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108764082A (en) * 2018-05-17 2018-11-06 淘然视界(杭州)科技有限公司 A kind of Aircraft Targets detection method, electronic equipment, storage medium and system

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101437747B1 (en) * 2012-12-27 2014-09-05 한국항공우주연구원 Apparatus and method for image processing
CN112613350A (en) * 2020-12-04 2021-04-06 河海大学 High-resolution optical remote sensing image airplane target detection method based on deep neural network
CN113256778B (en) * 2021-07-05 2021-10-12 爱保科技有限公司 Method, device, medium and server for generating vehicle appearance part identification sample


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
"飞机三维显示跟踪的过程设计与仿真";谢国平;《舰船电子工程》;20130920;全文 *

Also Published As

Publication number Publication date
CN114782620A (en) 2022-07-22

Similar Documents

Publication Publication Date Title
EP3660787A1 (en) Training data generation method and generation apparatus, and image semantics segmentation method therefor
CN109816725A Monocular camera object pose estimation method and device based on deep learning
CN110160502A (en) Map elements extracting method, device and server
EP3792827B1 (en) Systems and methods for automatically generating training image sets for an object
US20100156919A1 (en) Systems and methods for text-based personalization of images
CN109816784B (en) Method and system for three-dimensional reconstruction of human body and medium
CN112560675B (en) Bird visual target detection method combining YOLO and rotation-fusion strategy
US20220237908A1 (en) Flight mission learning using synthetic three-dimensional (3d) modeling and simulation
CN112504271A (en) System and method for automatically generating training image sets for an environment
CN114972646B (en) Method and system for extracting and modifying independent ground objects of live-action three-dimensional model
CN114782620B (en) Aircraft image generation method and device based on three-dimensional rendering
CN112258621B (en) Method for observing three-dimensional rendering two-dimensional animation in real time
CN114445467A (en) Specific target identification and tracking system of quad-rotor unmanned aerial vehicle based on vision
CN116612091A (en) Construction progress automatic estimation method based on multi-view matching
CN113656918B (en) Four-rotor simulation test method applied to finished product overhead warehouse scene
CN114491694B (en) Space target data set construction method based on illusion engine
CN107221027A Method for embedding user-defined content in an oblique photography three-dimensional model
Becker et al. Lidar inpainting from a single image
Motayyeb et al. Enhancing contrast of images to improve geometric accuracy of a UAV photogrammetry project
CN112465697A (en) Offshore foggy day image simulation method
Paszkuta et al. Uav on-board emergency safe landing spot detection system combining classical and deep learning-based segmentation methods
CN111292417A (en) Machine scene three-dimensional visual simulation method
Fukuda et al. Optical integrity of diminished reality using deep learning
Cannan et al. Synthetic AI training data generation enabling airfield damage assessment
CN113256650B (en) Image segmentation method, device, equipment and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant