
Simulation image reinjection method of target position and related equipment thereof

Info

Publication number
CN115797442B
CN202211538620A · CN115797442B
Authority
CN
China
Prior art keywords
camera
target
cameras
vehicle
parameter matrix
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202211538620.1A
Other languages
Chinese (zh)
Other versions
CN115797442A (en)
Inventor
方志刚
陈奇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Kunyi Electronic Technology Shanghai Co Ltd
Original Assignee
Kunyi Electronic Technology Shanghai Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Kunyi Electronic Technology Shanghai Co Ltd
Priority to CN202211538620.1A
Publication of CN115797442A
Application granted
Publication of CN115797442B
Legal status: Active (current)
Anticipated expiration

Landscapes

  • Image Processing (AREA)

Abstract

The application relates to a simulation image reinjection method of a target position and related equipment thereof. The method comprises: acquiring first camera data of a first camera set of a first vehicle and second camera data of a second camera set of a second vehicle; determining at least one path of second cameras adjacent to the target position according to the external parameter matrix of the target camera and the external parameter matrix of each second camera; forming a simulation image matched with the target position based on the real image and the depth map corresponding to each of the at least one path of second cameras and the first camera data corresponding to the target camera; and reinjecting the simulation image to a data processing unit of the second vehicle. The application uses the first camera data of the first vehicle to form a simulation image at the target position of the second vehicle, which further enriches the second camera data, enhances its adaptability and expands its range of application.

Description

Simulation image reinjection method of target position and related equipment thereof
Technical Field
The application relates to the technical field of computers, in particular to a simulation image reinjection method of a target position and related equipment thereof.
Background
Autonomous driving technology is currently developing rapidly and depends heavily on data collected in real-world situations. In the algorithm development process, data acquired under real conditions is reinjected (i.e., injected) into the controller, so that the effect of the algorithm can be verified and the efficiency of algorithm development and verification improved.
Specifically, an algorithm (e.g., a machine-learning neural network) is provided in the controller. After algorithm development is completed, the video data acquired by the cameras on the vehicle can be injected into the controller. The algorithm in the controller processes the collected video data to obtain an output result, thereby realizing various functions such as target recognition.
In the process of algorithm development, the algorithm (such as a neural network) needs to be trained and verified. At this stage, a variety of video data must be injected into the algorithm of the controller, and the video data may be genuinely acquired video data or simulated video data. In the prior art, however, when video data needs to be injected into the controller of a new vehicle, either the cameras of the new vehicle must actually acquire real video data, or simulated video data must be produced specifically for the cameras of the new vehicle. The video data available for injection into the new vehicle is therefore often limited, and good training and verification effects cannot be achieved.
Disclosure of Invention
In view of this, the present application provides a simulation image reinjection method of a target position and related equipment thereof, which can use the first camera data of a first vehicle to form a simulation image at a target position of a second vehicle, thereby further enriching the second camera data of the second vehicle, enhancing the adaptability of the second camera data and expanding the breadth of its application.
According to an aspect of the present application, there is provided a simulation image reinjection method of a target position, the method comprising: acquiring first camera data of a first camera set of a first vehicle and second camera data of a second camera set of a second vehicle, wherein the first camera data comprises an external parameter matrix of each first camera, the first camera set comprises a preset target camera, the second camera set comprises a plurality of second cameras, the position of the target camera is the same as a preset target position on the second vehicle, the target position is different from the position of each second camera, and the second camera data comprises an external parameter matrix of each second camera and a real image captured by each second camera; determining at least one path of second cameras adjacent to the target position according to the external parameter matrix of the target camera and the external parameter matrix of each second camera; forming a simulation image matched with the target position based on the real image and the depth map corresponding to each of the at least one path of second cameras and the first camera data corresponding to the target camera; and reinjecting the simulation image to a data processing unit of the second vehicle.
According to still another aspect of the present application, there is provided a simulated image reinjection apparatus of a target camera, the simulated image reinjection apparatus of the target camera including: a camera data acquisition module, configured to acquire first camera data of a first camera set of a first vehicle and second camera data of a second camera set of a second vehicle, where the first camera data includes an external parameter matrix of each first camera, the first camera set includes a preset target camera, the second camera set includes a plurality of second cameras, a position of the target camera is the same as a preset target position on the second vehicle, the target position is different from a position of each second camera, and the second camera data includes an external parameter matrix of the second camera and a real image captured by the second camera; the camera determining module is used for determining at least one path of second cameras adjacent to the target position according to the external parameter matrix of the target camera and the external parameter matrix of each second camera; the image forming module is used for forming a simulation image matched with the target position based on the real image and the depth map corresponding to each of the at least one path of second cameras and the first camera data corresponding to the target camera; and the reinjection module is used for reinjecting the simulation image to the data processing unit of the second vehicle.
According to the method, first camera data of a first camera set of a first vehicle and second camera data of a second camera set of a second vehicle are obtained; at least one path of second cameras adjacent to the target position is then determined according to the external parameter matrix of the target camera and the external parameter matrix of each second camera; a simulation image matched with the target position is further formed based on the real image and the depth map corresponding to each of the at least one path of second cameras and the first camera data corresponding to the target camera; and finally the simulation image is reinjected to the data processing unit of the second vehicle.
Drawings
The technical solution and other advantageous effects of the present application will be made apparent by the following detailed description of the specific embodiments of the present application with reference to the accompanying drawings.
FIG. 1 shows a flow chart of a simulated image reinjection method of a target location according to an embodiment of the present application.
Fig. 2 shows a schematic diagram of a target position and a target camera according to an embodiment of the application.
Fig. 3 shows a block diagram of a simulated image reinjection apparatus of a target camera according to an embodiment of the present application.
Fig. 4 shows a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present application. It will be apparent that the described embodiments are only some, but not all, embodiments of the application. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to fall within the scope of the application.
In the description of the present application, it should be understood that the terms "center", "longitudinal", "lateral", "length", "width", "thickness", "upper", "lower", "front", "rear", "left", "right", "vertical", "horizontal", "top", "bottom", "inner", "outer", "clockwise", "counterclockwise", etc. indicate orientations or positional relationships based on the orientations or positional relationships shown in the drawings are merely for convenience in describing the present application and simplifying the description, and do not indicate or imply that the device or element referred to must have a specific orientation, be configured and operated in a specific orientation, and thus should not be construed as limiting the present application. Furthermore, the terms "first," "second," and the like, are used for descriptive purposes only and are not to be construed as indicating or implying a relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include one or more of the described features. In the description of the present application, the meaning of "a plurality" is two or more, unless explicitly defined otherwise.
In the description of the present application, it should be noted that, unless explicitly specified and limited otherwise, the terms "mounted," "connected," and "connected" are to be construed broadly, and may be either fixedly connected, detachably connected, or integrally connected, for example; can be mechanically connected, electrically connected or can be communicated with each other; can be directly connected or indirectly connected through an intermediate medium, and can be communication between two elements or interaction relationship between the two elements. The specific meaning of the above terms in the present application can be understood by those of ordinary skill in the art according to the specific circumstances.
The following disclosure provides many different embodiments, or examples, for implementing different features of the application. In order to simplify the present disclosure, components and arrangements of specific examples are described below. Of course, they are merely examples and are not intended to limit the present application. Furthermore, the present application may repeat reference numerals and/or letters in the various examples, which are for the purpose of brevity and clarity, and which do not themselves indicate the relationship between the various embodiments and/or arrangements discussed. In addition, the present application provides examples of various specific processes and materials, but one of ordinary skill in the art will recognize the application of other processes and/or the use of other materials. In some instances, well known methods, procedures, components, and circuits have not been described in detail so as not to obscure the present application.
FIG. 1 shows a flow chart of a simulated image reinjection method of a target location according to an embodiment of the present application. As shown in fig. 1, the simulation image reinjection method of the target position of the present application may include:
Step S1: Acquiring first camera data of a first camera set of a first vehicle and second camera data of a second camera set of a second vehicle, wherein the first camera data comprises an external parameter matrix of each first camera, the first camera set comprises a preset target camera, the second camera set comprises a plurality of second cameras, the position of the target camera is the same as a preset target position on the second vehicle, the target position is different from the position of each second camera, and the second camera data comprises an external parameter matrix of each second camera and a real image captured by each second camera;
A plurality of first cameras may be mounted on the first vehicle, and there may be one or more first vehicles. Correspondingly, a plurality of second cameras may be mounted on the second vehicle, and there may be one or more second vehicles. It will be appreciated that the application does not limit the number of first vehicles or second vehicles.
The first camera data may be a data set, and the first camera data may include parameters such as an external parameter matrix of each of the first cameras and an internal parameter matrix of each of the first cameras. Similar to the first camera data, the second camera data may be a data set, and the second camera data may include parameters such as an external parameter matrix of each of the second cameras and an internal parameter matrix of each of the second cameras. It should be noted that, the first camera data may further include an original image captured by each of the first cameras, and the second camera data may further include a real image captured by each of the second cameras.
For example, the first vehicle may be a vehicle of an old model, the second vehicle may be a vehicle of a new model, and the first camera data and the second camera data may each be stored in a corresponding database.
The external parameter matrix of the target camera and the internal parameter matrix of the target camera may be acquired simultaneously or sequentially. The external parameter matrix may include the rotation parameters and translation parameters of the transformation from the world coordinate system to the camera coordinate system, and is used to convert coordinate points in the world coordinate system into coordinate points in the camera coordinate system; the internal parameter matrix may include parameters of the camera itself, such as its focal length, and can be used to convert coordinate points in the camera coordinate system into coordinate points in the pixel coordinate system. The internal parameter matrix is typically fixed, whereas the external parameter matrix depends on parameters such as the position and orientation of the camera.
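For reference, using the standard pinhole-camera convention (a general convention, not a definition taken from this application), the two matrices can be written as

$$[R\,|\,t]=\begin{bmatrix} r_{11} & r_{12} & r_{13} & t_x \\ r_{21} & r_{22} & r_{23} & t_y \\ r_{31} & r_{32} & r_{33} & t_z \end{bmatrix},\qquad K=\begin{bmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix}$$

where the external parameter matrix $[R\,|\,t]$ maps world coordinates to camera coordinates and the internal parameter matrix $K$ maps camera coordinates to pixel coordinates; these symbols reappear in formulas (1) and (2) below.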
Fig. 2 shows a schematic diagram of a target position and a target camera according to an embodiment of the application.
As shown in fig. 2, a plurality of first cameras may be mounted on the first vehicle, and a plurality of second cameras may be mounted on the second vehicle. One of the plurality of first cameras may be taken as the target camera: this camera exists on the first vehicle, while no corresponding camera exists on the second vehicle at the same position as the target camera.
Specifically, the target camera is located at the same position as a target position preset on the second vehicle, and the target position is different from the position of each second camera. In the embodiment of the present application, there is no camera on the second vehicle at the same position as the target camera of the first vehicle. Thus, the image that could be captured if a camera were installed at the target position can be simulated using a number of second cameras adjacent to the target position.
The camera data of the first vehicle cannot be used directly for the second vehicle, because the positions of the first cameras installed on the first vehicle differ from the positions of the second cameras installed on the second vehicle, or the models of the first cameras differ from the models of the second cameras, so that the viewing angles of the first cameras differ from the viewing angles of the second cameras. Conversely, where a second camera is at the same position as a first camera, the first camera data corresponding to that first camera can be transplanted directly to that second camera.
Step S2: determining at least one path of second cameras adjacent to the target position according to the external parameter matrix of the target camera and the external parameter matrix of each second camera;
In the embodiment of the application, since the second vehicle has no camera at the same position as the target camera, at least one path of second cameras near the target position of the second vehicle is required in order to simulate the image that would be captured if a camera were installed at the target position. It is noted that, since the external parameter matrix reflects the viewing angle information of a camera, the internal parameter matrix does not need to participate in determining the at least one path of second cameras adjacent to the target position, which improves the efficiency of selecting the at least one path of second cameras.
Further, determining at least one second camera adjacent to the target location according to the external parameter matrix of the target camera and the external parameter matrix of each second camera may include:
Step S21: extracting a first translation parameter, a second translation parameter and a third translation parameter corresponding to the target camera through an external parameter matrix of the target camera;
For example, the first vehicle has first cameras C1, C2, ..., CN mounted thereon, where N is a positive integer. For each first camera, the external parameter matrix of that first camera may be acquired. For example, for the target camera C1, the first translation parameter, the second translation parameter and the third translation parameter in the external parameter matrix corresponding to the target camera may be denoted as t_x(C1), t_y(C1) and t_z(C1); they are the translation parameters used to transform the target object photographed by the target camera from the world coordinate system to the camera coordinate system.
Step S22: respectively extracting fourth translation parameters, fifth translation parameters and sixth translation parameters corresponding to the second cameras through an external parameter matrix of each second camera;
The fourth translation parameter, the fifth translation parameter and the sixth translation parameter may be t_x, t_y and t_z, respectively, in the external parameter matrix of each second camera, and represent the translation parameters used to transform the target object photographed by any one of the second cameras from the world coordinate system to the camera coordinate system.
Step S23: based on fourth translation parameters, fifth translation parameters and sixth translation parameters corresponding to the second cameras and first translation parameters, second translation parameters and third translation parameters corresponding to the target cameras, respectively calculating Euclidean distances between the second cameras and the target cameras, and obtaining a plurality of camera distances corresponding to the second cameras;
For example, for the target camera C1 on the first vehicle and any one of the second cameras on the second vehicle, the Euclidean distance corresponding to that second camera can be obtained from the translation parameters of the two cameras. Specifically, the difference between the first translation parameter and the fourth translation parameter is squared, the difference between the second translation parameter and the fifth translation parameter is squared, and the difference between the third translation parameter and the sixth translation parameter is squared; the three squares are summed, and the square root of the sum is the camera distance corresponding to that second camera. The camera distances corresponding to the other second cameras can be obtained in the same way.
Step S24: And selecting at least one path of second cameras whose camera distances are lower than a preset camera distance threshold as the at least one path of second cameras adjacent to the target position.
The camera distance threshold may be set as needed. The at least one path of second cameras may be n paths of second cameras, where n is a positive integer. Optionally, n may be set to 2 or 3, i.e., the 2 or 3 second cameras closest to the target position are selected as references, as in the sketch below.
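As an illustrative sketch of steps S21 to S24 (an assumed Python implementation; the function and variable names are hypothetical and not taken from this application), the camera distances can be computed from the translation parameters and the adjacent second cameras selected as follows:

```python
import numpy as np

def select_adjacent_cameras(target_extrinsic, second_extrinsics, distance_threshold):
    """Select the second cameras whose translation parameters are closest to the target camera's.

    target_extrinsic: 3x4 external parameter matrix [R | t] of the target camera.
    second_extrinsics: dict mapping a second-camera id to its 3x4 external parameter matrix.
    Returns the ids of the second cameras whose camera distance is below the threshold.
    """
    t_target = target_extrinsic[:, 3]                 # first/second/third translation parameters
    adjacent = []
    for cam_id, extrinsic in second_extrinsics.items():
        t_second = extrinsic[:, 3]                    # fourth/fifth/sixth translation parameters
        camera_distance = np.linalg.norm(t_target - t_second)   # Euclidean distance of step S23
        if camera_distance < distance_threshold:      # step S24: keep cameras below the threshold
            adjacent.append(cam_id)
    return adjacent
```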
By determining at least one path of second cameras close to the target position using the external parameter matrix of the target camera and the external parameter matrix of each second camera, the embodiment of the application can improve the accuracy of migrating camera data between different vehicles and reduce the amount of data to be calculated.
Step S3: forming a simulation image matched with the target position based on the real image and the depth map corresponding to each of the at least one path of second cameras and the first camera data corresponding to the target camera;
Further, before forming the simulation image matched with the target position based on the real image and the depth map corresponding to each of the at least one path of second cameras and the first camera data corresponding to the target camera, the simulation image reinjection method of the target position comprises the following steps:
Step S301: and processing the real images shot by the at least one path of second cameras based on an SFM algorithm to generate depth maps corresponding to the at least one path of second cameras.
Wherein the second camera data may be a data set comprising a plurality of real images captured by the respective second cameras. In step S301, the at least one path of second cameras may be found in the second camera data by the serial numbers of the second cameras, and the plurality of real images captured by the at least one path of second cameras may then be extracted directly.
The depth map may be used to characterize the distance between a target object photographed by a camera and that camera, i.e., the target depth information. For example, suppose a camera is installed at the front left of the second vehicle; the straight-line distance between a traffic light captured by that camera and the camera is 10 m, and the straight-line distance between a pedestrian captured by that camera and the camera is 15 m. In this case both the traffic light and the pedestrian can serve as target objects. There may be one or more target objects, and the distance between a target object and the camera of the second vehicle may be measured from the geometric center of the target object to the optical center of the camera.
In practical applications, the coordinates of the target object photographed by the target camera may be calibrated using a world coordinate system. The origin of the world coordinate system is irrelevant to the specific position of the target camera and can be selected according to actual needs. In general, the coordinates of the target object in the world coordinate system cannot be directly projected onto the two-dimensional plane image, and further conversion is required. For example, coordinates in the world coordinate system may be converted into the camera coordinate system using an external parameter matrix, and then corresponding coordinates in the camera coordinate system are converted into the pixel coordinate system using an internal parameter matrix. For example, the origin of the camera coordinate system may be the optical center of the camera and the origin of the pixel coordinate system may be located in the upper left corner of the captured image.
The depth information can be obtained through radar detection or binocular vision. It will be appreciated that there are a variety of ways to obtain the depth information, and the application is not limited thereto.
The SFM algorithm is also called a motion structure reconstruction (Structure From Motion) algorithm, and can reconstruct a three-dimensional structure from a series of two-dimensional image sequences containing visual motion information.
Specifically, based on the SFM algorithm, two adjacent real images among the plurality of real images are selected for calculation: feature points in the different real images are matched, the corresponding fundamental matrix and essential matrix are calculated, and the depth maps corresponding to the at least one path of second cameras are reconstructed from the fundamental matrix and the essential matrix. Each of the at least one path of second cameras corresponds to a depth map. For any one of the at least one path of second cameras, the depth map corresponding to that second camera may include the distance between the target object and that second camera.
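A minimal sketch of this SFM step using OpenCV is given below; it is an assumption made for illustration (the application does not prescribe a particular library), it recovers only a sparse set of depths from one image pair, and a practical pipeline would add bundle adjustment and densification of the depth map:

```python
import cv2
import numpy as np

def sparse_depths_from_pair(img_a, img_b, K):
    """Match feature points between two adjacent real images, estimate the essential matrix,
    recover the relative pose and triangulate the matches to obtain their depths."""
    orb = cv2.ORB_create(2000)
    kp_a, des_a = orb.detectAndCompute(img_a, None)
    kp_b, des_b = orb.detectAndCompute(img_b, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des_a, des_b)
    pts_a = np.float32([kp_a[m.queryIdx].pt for m in matches])
    pts_b = np.float32([kp_b[m.trainIdx].pt for m in matches])

    # Essential matrix (RANSAC for robustness) and relative pose, defined up to scale.
    E, _ = cv2.findEssentialMat(pts_a, pts_b, K, method=cv2.RANSAC)
    _, R, t, _ = cv2.recoverPose(E, pts_a, pts_b, K)

    # Triangulate the matched points; the Z component is the depth in the first image's camera.
    P0 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P1 = K @ np.hstack([R, t])
    pts4d = cv2.triangulatePoints(P0, P1, pts_a.T, pts_b.T)
    pts3d = (pts4d[:3] / pts4d[3]).T
    return pts_a, pts3d[:, 2]        # pixel positions and their (relative-scale) depths
```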
Further, forming a simulation image matched with the target position based on the real image and the depth map corresponding to each of the at least one path of second cameras and the first camera data corresponding to the target camera comprises:
Step S31: acquiring an internal parameter matrix of each second camera in the at least one path of second cameras;
In step S31, an internal parameter matrix of each of the at least one second camera may be extracted from the second camera data.
Step S32: based on the external parameter matrix of each second camera in the at least one path of second cameras, the internal parameter matrix of each second camera, and the depth map and the real image corresponding to each of the at least one path of second cameras, projecting the pixel points of the real images from the pixel coordinate system to the world coordinate system to obtain a plurality of first pixel points in the world coordinate system;
wherein the target object may include a plurality of feature points, each of which may correspond to a pixel point on an image photographed based on the target object. Each feature point has a first pixel point in the world coordinate system. In practical application, the feature points corresponding to the depth maps may be translated to a world coordinate system.
Because the viewing angles of the second cameras differ from that of the target camera, and because pixel points in the pixel coordinate system cannot be transformed directly into coordinate points in the world coordinate system, each depth map is used to lift the pixel points into the world coordinate system; that is, the transformation that projects the pixel points of the plurality of real images from the pixel coordinate system to the world coordinate system is realized through each depth map. In other words, without a depth map, the pixel points of the plurality of real images cannot be projected from the pixel coordinate system to the world coordinate system; by combining the external parameter matrix of each second camera in the at least one path of second cameras, the internal parameter matrix of each second camera and the depth map corresponding to each of the at least one path of second cameras, the pixel points of the plurality of real images can be projected from the pixel coordinate system to the world coordinate system.
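The inverse transformation of step S32 can be sketched as follows (an illustrative assumption; it presumes a dense depth map aligned with the real image and row-vector point arrays, neither of which is specified by this application):

```python
import numpy as np

def pixels_to_world(pixels, depths, K, R, t):
    """Back-project pixel points of a second camera into the world coordinate system.

    pixels: Nx2 array of (u, v) pixel coordinates; depths: N depth values Z_c from the depth map;
    K: 3x3 internal parameter matrix; R (3x3), t (3,): external parameters (world -> camera).
    Returns the Nx3 "first pixel points" in the world coordinate system.
    """
    uv1 = np.hstack([pixels, np.ones((len(pixels), 1))])   # homogeneous pixel coordinates
    cam_pts = (np.linalg.inv(K) @ uv1.T) * depths           # camera-frame points, one column per pixel
    world_pts = R.T @ (cam_pts - t.reshape(3, 1))            # undo the extrinsic rotation and translation
    return world_pts.T
```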
Step S33: and forming a simulation image matched with the target position based on the first pixel points and the first camera data corresponding to the target camera.
Further, forming a simulation image matched with the target position based on the plurality of first pixel points and first camera data corresponding to the target camera, including:
Step S331: generating an external parameter matrix of the target position by using the external parameter matrix of the target camera;
in one example, the external parameter matrix of the target camera may be directly used as the external parameter matrix of the target position.
Step S332: according to the external parameter matrix of the target position, part or all of the first pixel points are projected to a camera coordinate system corresponding to the target camera, so that a plurality of second pixel points under the camera coordinate system are obtained;
In one example, the conversion relationship between the first pixel point and the second pixel point can be expressed by formula (1):

$$\begin{bmatrix} X_c \\ Y_c \\ Z_c \end{bmatrix} = \begin{bmatrix} r_{11} & r_{12} & r_{13} \\ r_{21} & r_{22} & r_{23} \\ r_{31} & r_{32} & r_{33} \end{bmatrix} \begin{bmatrix} X \\ Y \\ Z \end{bmatrix} + \begin{bmatrix} t_x \\ t_y \\ t_z \end{bmatrix} \tag{1}$$

wherein X, Y and Z denote the first pixel point of the target object in the world coordinate system; X_c, Y_c and Z_c denote the second pixel point of the target object in the camera coordinate system; r_11 to r_33 in the external parameter matrix denote the rotation parameters that transform the target object from the world coordinate system to the camera coordinate system; and t_x, t_y and t_z in the external parameter matrix denote the translation parameters that transform the target object from the world coordinate system to the camera coordinate system.
Since the origin of the world coordinate system is not coincident with the origin of the camera coordinate system, when a point of the world coordinate system is required to be projected onto the image plane, the world coordinate system needs to be converted into the camera coordinate system by using an external parameter matrix, and the external parameter matrix characterizes rotation and translation in the conversion process. Of course, since the target object may include a plurality of feature points, the actual processing object of the formula (1) may be the feature point of the target object. From the viewpoint of the pixel coordinate system, the feature points may be respective pixel points in the captured image.
Step S333: and forming a simulation image matched with the target position based on the second pixel points and the first camera data corresponding to the target camera.
Wherein the transformation process of formula (1) can be regarded as part of the forward transformation process, and the process of step S32 can be referred to as the inverse transformation process.
In the present application, the three coordinate systems, i.e., the world coordinate system, the camera coordinate system, and the pixel coordinate system, are mainly used to calibrate the positions of the target object and the target camera. It will be appreciated by those skilled in the art that there are other possible variations of the coordinate transformation, and the application is not limited to transformation between coordinate systems.
Further, according to the external parameter matrix of the target position, projecting part or all of the first pixel points to a camera coordinate system corresponding to the target camera to obtain a plurality of second pixel points under the camera coordinate system, including:
step S3321: acquiring a visual angle range of the target camera;
The view angle range of the target camera may be the maximum view angle that the target camera can observe. For example, the maximum angle of view that the target camera can observe may be 120 degrees in the case where the target camera is mounted in the front left of the vehicle, and 180 degrees in the case where the target camera is mounted in the front right of the vehicle. The range of view of the target camera at different locations may be different. In other words, the maximum viewing angle that the target camera can observe may be related to the coordinates of the target camera itself.
Step S3322: judging whether each first pixel point in the plurality of first pixel points is positioned in the view angle range;
Step S3323: if the first pixel point is positioned in the visual angle range, projecting the first pixel point to a camera coordinate system corresponding to the target position; and if the first pixel point is positioned outside the view angle range, filtering the first pixel point.
It should be noted that, the view angle range of the target camera may be related to factors such as the installation position of the camera itself and the performance of the camera, and the view angle range of the target camera is limited. For example, the view angle range of the target camera may capture a range of 120 degrees in the horizontal direction and 120 degrees in the vertical direction. Therefore, when each first pixel point is projected, it is necessary to determine whether each first pixel point in the plurality of first pixel points is located within the viewing angle range.
Wherein the target object can be selected according to the requirement. And the target object corresponding to the first image shot by the first camera is the same as the target object corresponding to the real image shot by the second camera. The first camera and the second camera can both shoot the same target object from different view angles.
If a first pixel point lies within the view angle range, it can be projected directly to the camera coordinate system corresponding to the target position; if it lies outside the view angle range, it is filtered out. By judging whether each of the plurality of first pixel points lies within the view angle range, the embodiment of the application can reduce the amount of coordinate data in the coordinate projection process and improve the generation efficiency of the simulation image.
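Steps S3321 to S3323 could be sketched as follows (the bearing-angle test and the half view angles are assumptions made for illustration; the application only requires that first pixel points outside the view angle range be filtered out):

```python
import numpy as np

def project_and_filter(world_pts, R_target, t_target, half_fov_h, half_fov_v):
    """Project first pixel points into the target-position camera frame and keep those inside the FOV."""
    cam_pts = (R_target @ world_pts.T + t_target.reshape(3, 1)).T   # formula (1): second pixel points
    in_front = cam_pts[:, 2] > 0                                     # points behind the camera are never visible
    ang_h = np.abs(np.arctan2(cam_pts[:, 0], cam_pts[:, 2]))         # horizontal bearing angle
    ang_v = np.abs(np.arctan2(cam_pts[:, 1], cam_pts[:, 2]))         # vertical bearing angle
    keep = in_front & (ang_h <= half_fov_h) & (ang_v <= half_fov_v)
    return cam_pts[keep]
```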
Further, the first camera data further includes an internal parameter matrix of each of the first cameras, and forming a simulation image matched with the target position based on the plurality of second pixel points and the first camera data corresponding to the target camera, including:
step S3331: acquiring an internal parameter matrix of the target camera;
In step S3331, an internal parameter matrix of the target camera may be extracted from the first camera data.
Step S3332: generating an internal parameter matrix of the target position by using the internal parameter matrix of the target camera;
In one example, the internal parameter matrix of the target camera may be directly used as the internal parameter matrix of the target location.
Step S3333: according to the internal parameter matrix of the target position, each second pixel point in the plurality of second pixel points is projected to a pixel coordinate system corresponding to the target position, and a plurality of third pixel points under the pixel coordinate system are obtained;
In one example, the conversion relationship between the second pixel point and the third pixel point can be expressed by formula (2):

$$Z_c \begin{bmatrix} x \\ y \\ 1 \end{bmatrix} = \begin{bmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} X_c \\ Y_c \\ Z_c \end{bmatrix} \tag{2}$$

wherein X_c, Y_c and Z_c denote the second pixel point of the target object in the camera coordinate system; x and y denote the third pixel point of the target object in the pixel coordinate system; c_x and c_y in the internal parameter matrix denote the pixel coordinates on the image corresponding to the origin of the camera coordinate system; and f_x and f_y in the internal parameter matrix denote the camera focal lengths.
In addition, in the process of converting the second pixel point to the third pixel point through the internal parameter matrix, distortion factors may also be considered. For example, radial distortion coefficients and tangential distortion coefficients are added to the internal parameter matrix. The distortion factors are added, so that the problem that the theoretically calculated pixel points are offset and deformed from the pixel points in actual conditions can be solved. In practical application, whether the distortion factor is added or not can be determined according to practical needs, and the application is not limited.
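As one common formulation (the standard radial and tangential distortion model, stated here as an assumption rather than as part of this application), for a normalized camera-plane point $(x', y') = (X_c/Z_c,\; Y_c/Z_c)$ with $r^2 = x'^2 + y'^2$, the distorted coordinates are

$$x_d = x'(1 + k_1 r^2 + k_2 r^4) + 2p_1 x'y' + p_2(r^2 + 2x'^2),\qquad y_d = y'(1 + k_1 r^2 + k_2 r^4) + p_1(r^2 + 2y'^2) + 2p_2 x'y'$$

where $k_1, k_2$ are radial distortion coefficients and $p_1, p_2$ are tangential distortion coefficients; the distorted point is then mapped to pixel coordinates by $f_x, f_y, c_x, c_y$ as in formula (2).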
It should be noted that, the formula (2) is based on the principle of camera projection. The transformation process of equation (2) can also be considered part of the forward transformation process. In the present application, the first pixel point may be coordinates of the target object in the real world, the coordinates being three-dimensional; the second pixel point may be an intermediate coordinate, which is also three-dimensional; the third pixel point may be a coordinate of the target object in the photographed image, the coordinate being two-dimensional.
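Putting formulas (1) and (2) together, the forward transformation from the first pixel points to the third pixel points can be sketched as below (illustration only; distortion is ignored and the names are hypothetical):

```python
import numpy as np

def world_to_pixels(world_pts, R_target, t_target, K_target):
    """Forward transformation: world -> camera (formula (1)) -> pixel (formula (2))."""
    cam_pts = R_target @ world_pts.T + t_target.reshape(3, 1)   # second pixel points, 3xN
    proj = K_target @ cam_pts                                    # apply the internal parameter matrix
    return (proj[:2] / proj[2]).T                                # divide by Z_c to get Nx2 third pixel points
```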
Step S3334: and forming a simulation image matched with the target position based on the third pixel points.
Wherein the simulated image is available to the second vehicle. For example, the simulation image may be input into a data processing unit of the second vehicle so that the data processing unit performs training, verification, testing, or the like using the simulation image.
It should be noted that, in processing each depth map, the orientation of the virtual viewing angle at the target position is assumed to be identical to that of the target camera; that is, the rotation parameters r11-r33 of the virtual camera at the target position are taken to be the same as those of the target camera, so that only the translational difference between the viewing angles is considered and the rotational difference is ignored. Intuitively, the at least one path of second cameras are the cameras closest to the target position, so the orientation of each of these second cameras is similar to the orientation of the target camera, and the viewing-angle deviation arises only from the offsets to the left, right, up and down; therefore only a translation needs to be considered when processing each depth map.
Step S4: and reinjecting the simulation image to a data processing unit of the second vehicle.
Further, reinjecting the simulated image to a data processing unit of the second vehicle, comprising:
step S41: Filling missing pixel points in the simulation image by adopting a bilinear interpolation algorithm and/or an image restoration algorithm to obtain a fitting image corresponding to the target position;
Wherein, each pixel point of the simulation image may have a fixed RGB value (e.g., a gray level). The missing or damaged pixel points may be repaired using a bilinear interpolation algorithm and/or an image inpainting algorithm. The simulation image may also, for example, be completed according to semantic information. The image restoration algorithm may be based on a generative adversarial network (Generative Adversarial Network, GAN).
It should be noted that the bilinear interpolation algorithm and the image restoration algorithm may be applied simultaneously or alternatively. Those skilled in the art will appreciate that there are a variety of implementations of the bilinear interpolation algorithm and the image restoration algorithm, and the application is not limited to a specific implementation of either.
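A possible way to fill the missing pixel points is sketched below; the use of OpenCV's Telea inpainting is only one assumed example of an image restoration algorithm, and a bilinear interpolation of neighbouring valid pixels could be used instead for small holes:

```python
import cv2
import numpy as np

def fill_missing_pixels(sim_img, valid_mask):
    """Repair the pixels of the simulation image that no projected point landed on.

    sim_img: HxWx3 uint8 simulation image; valid_mask: HxW bool array, True where a pixel
    was produced by the projection. Returns the fitting image.
    """
    hole_mask = np.logical_not(valid_mask).astype(np.uint8) * 255   # non-zero marks pixels to repair
    return cv2.inpaint(sim_img, hole_mask, 3, cv2.INPAINT_TELEA)
```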
Step S42: and reinjecting the fitting image to a data processing unit of the second vehicle.
Wherein, the fitting image can be one or a plurality of fitting images. And under the condition that the fitting images are multiple, the fitting images can be further fused, so that the fitting images at the target positions are closer to the actual situation.
Further, reinjecting the fitted image to a data processing unit of the second vehicle may include:
Step S421: inputting the formed fitting image matched with the target camera to a data processing unit of a second vehicle for neural network training to obtain a training value corresponding to the target position;
For example, the second vehicle may include an industrial personal computer and an injection device. The fitted image can be decoded as video data by an industrial personal computer (also called a real-time machine) and then injected into a data processing unit by an injection device (such as a video injection board card). The algorithm of the data processing unit can be based on a neural network model, and the neural network model takes the fitting image as input to carry out neural network training, so as to obtain a training value corresponding to the target position.
Step S422: comparing the training value with a true value acquired by a camera installed at the target position to obtain a comparison result corresponding to the training value;
In one example, the training value may be compared with the true value collected by a camera installed at the target position; the two may be equal or unequal. If the comparison result indicates that they are equal, the fitted image fits well the image actually acquired by a camera installed at the target position; if they are unequal, the fitted image does not fit that image well, and the neural network model needs to be readjusted.
Step S423: and adjusting a neural network model in the data processing unit according to the comparison result.
Wherein the neural network model may be a network model such as a DNN, CNN, LSTM or ResNet; the application does not limit the type of the neural network model.
In one embodiment, since there is usually a large overlap between the simulation image (or the fitted image) and the real images, the simulation image (or fitted image) and the real images collected by each second camera can be stitched together based on the overlapping portions to form an image to be verified; likewise, since the real images usually also overlap one another considerably, the real images collected by each second camera can be stitched together based on the overlapping portions to form a reference image. The similarity between the image to be verified and the reference image is then compared to obtain similarity evaluation information characterizing the similarity. For example, if the similarity evaluation information is positively correlated with the similarity, the simulation image may be reinjected when the similarity evaluation information is higher than a similarity threshold, and otherwise is not reinjected; if the similarity evaluation information is negatively correlated with the similarity, the simulation image may be reinjected when the similarity evaluation information is lower than the similarity threshold, and otherwise is not reinjected. Furthermore, the simulation image (or fitted image) may be reinjected in synchronization with the other real images.
By judging whether to reinject the simulation image or the fitted image based on similarity, the embodiment of the application avoids reinjecting images with poor simulation or fitting results (e.g., poor accuracy) and avoids the mismatch that such reinjection could otherwise cause among the synchronously reinjected images observed by the data processing unit, thereby improving the training, verification and testing effects of the algorithm.
It should be noted that the above scheme is mainly used where the simulation image has no missing pixel points that need to be filled, but its application to cases with missing pixel points is not excluded.
In one example, both cases may occur: missing pixel points that need to be filled, and no missing pixel points. Since differences between images in the presence of missing pixels may be caused by defects of the filling algorithm or other related factors, different similarity thresholds may be adopted for the two situations. For example, if missing pixel points need to be filled (i.e., the image to be injected is a fitted image), the similarity threshold may be set to a first similarity threshold; if no missing pixel points need to be filled (i.e., the image to be injected is a simulation image), the similarity threshold may be set to a second similarity threshold. Further, if the similarity evaluation information is positively correlated with the similarity, the first similarity threshold is smaller than the second similarity threshold; if the similarity evaluation information is negatively correlated with the similarity, the first similarity threshold is larger than the second similarity threshold.
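The reinjection decision described above can be sketched as follows (the concrete threshold values and the assumption that the similarity evaluation information is positively correlated with the similarity are illustrative only):

```python
def should_reinject(similarity_score, pixels_were_filled,
                    first_threshold=0.80, second_threshold=0.90):
    """Decide whether to reinject, assuming the score grows with similarity.

    A fitted image (missing pixels were filled) is judged against the lower first threshold,
    and a simulation image without filling against the stricter second threshold,
    so that first_threshold < second_threshold.
    """
    threshold = first_threshold if pixels_were_filled else second_threshold
    return similarity_score >= threshold
```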
In summary, the application fits a simulation image for the viewing angle of the target position by using the coordinate mapping relations and the depth maps and by adapting the first camera data and the second camera data, so that the first camera data can be adapted to the second vehicle, which further enriches the second camera data of the second vehicle, enhances its adaptability and expands its range of application. In addition, the adapted simulation image can be used for neural network training, which improves the training precision of the neural network; at the same time, new vehicle models can be adapted quickly based on the first camera data and the second camera data, improving the development and testing efficiency for new vehicle models.
Fig. 3 shows a block diagram of a simulated image reinjection apparatus of a target camera according to an embodiment of the present application.
As shown in fig. 3, the simulated image reinjection apparatus 30 of the target camera according to the embodiment of the present application may include:
A camera data obtaining module 31, configured to obtain first camera data of a first camera set of a first vehicle and second camera data of a second camera set of a second vehicle, where the first camera data includes an external parameter matrix of each first camera, the first camera set includes a preset target camera, the second camera set includes a plurality of second cameras, a position of the target camera is the same as a preset target position on the second vehicle, the target position is different from a position of each second camera, and the second camera data includes an external parameter matrix of the second camera and a real image captured by the second camera;
a camera determining module 32, configured to determine at least one path of second cameras adjacent to the target location according to the external parameter matrix of the target camera and the external parameter matrix of each second camera;
an image forming module 33, configured to form a simulation image matched with the target position based on the real image, the depth map, and the first camera data corresponding to the target camera, where the real image and the depth map correspond to the at least one second camera respectively;
And the reinjection module 34 is used for reinjecting the simulation image to the data processing unit of the second vehicle.
Furthermore, the present application provides a computer readable medium having stored thereon a computer program which, when executed by a processor, implements a simulated image reinjection method of the target location.
Further, the present application also provides an electronic device, including: one or more processors; and the storage device is used for storing one or more programs, and when the one or more programs are executed by the one or more processors, the one or more processors realize a simulation image reinjection method of the target position.
Fig. 4 shows a schematic structural diagram of an electronic device according to an embodiment of the present application.
As shown in fig. 4, the electronic device may be used to implement the simulation image reinjection method of the target position. In particular, the electronic device may comprise a computer system. It should be noted that the electronic device shown in fig. 4 is only an example, and should not impose any limitation on the functions and application scope of the embodiments of the present application.
As shown in fig. 4, the computer system includes a central processing unit (Central Processing Unit, CPU) 1801, which can perform various appropriate actions and processes, such as performing the methods described in the above embodiments, according to a program stored in a Read-Only Memory (ROM) 1802 or a program loaded from a storage portion 1808 into a Random Access Memory (RAM) 1803. In the RAM 1803, various programs and data required for system operation are also stored. The CPU 1801, the ROM 1802 and the RAM 1803 are connected to each other via a bus 1804. An Input/Output (I/O) interface 1805 is also connected to the bus 1804.
The following components are connected to the I/O interface 1805: an input section 1806 including a keyboard, a mouse, and the like; an output portion 1807 including a Cathode Ray Tube (CRT), a Liquid crystal display (Liquid CRYSTAL DISPLAY, LCD), and a speaker, etc.; a storage section 1808 including a hard disk or the like; and a communication section 1809 including a network interface card such as a LAN (Local Area Network ) card, a modem, or the like. The communication section 1809 performs communication processing via a network such as the internet. The drive 1810 is also connected to the I/O interface 1805 as needed. Removable media 1811, such as magnetic disks, optical disks, magneto-optical disks, semiconductor memory, and the like, is installed as needed on drive 1810 so that a computer program read therefrom is installed as needed into storage portion 1808.
In particular, according to embodiments of the present application, the processes described above with reference to flowcharts may be implemented as computer software programs. For example, embodiments of the present application include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising a computer program for performing the method shown in the flowchart. In such embodiments, the computer program may be downloaded and installed from a network via the communication portion 1809, and/or installed from the removable medium 1811. When executed by a Central Processing Unit (CPU) 1801, performs various functions defined in the system of the present application.
It should be noted that, the computer readable medium shown in the embodiments of the present application may be a computer readable signal medium or a computer readable storage medium, or any combination of the two. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-Only Memory (ROM), an erasable programmable read-Only Memory (Erasable Programmable Read Only Memory, EPROM), a flash Memory, an optical fiber, a portable compact disc read-Only Memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In the present application, however, a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, with a computer-readable computer program embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. A computer program embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wired, etc., or any suitable combination of the foregoing.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present application. Where each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units involved in the embodiments of the present application may be implemented by software, or may be implemented by hardware, and the described units may also be provided in a processor. Wherein the names of the units do not constitute a limitation of the units themselves in some cases.
As another aspect, the present application also provides a computer-readable medium that may be contained in the electronic device described in the above embodiment; or may exist alone without being incorporated into the electronic device. The computer-readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to implement the methods described in the above embodiments.
It should be noted that although in the above detailed description several modules or units of a device for action execution are mentioned, such a division is not mandatory. Indeed, the features and functions of two or more modules or units described above may be embodied in one module or unit in accordance with embodiments of the application. Conversely, the features and functions of one module or unit described above may be further divided into a plurality of modules or units to be embodied.
From the above description of embodiments, those skilled in the art will readily appreciate that the example embodiments described herein may be implemented in software, or may be implemented in software in combination with the necessary hardware. Thus, the technical solution according to the embodiments of the present application may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (may be a CD-ROM, a U-disk, a mobile hard disk, etc.) or on a network, and includes several instructions to cause a computing device (may be a personal computer, a server, a touch terminal, or a network device, etc.) to perform the method according to the embodiments of the present application.
In the foregoing embodiments, the descriptions of the embodiments are emphasized, and for parts of one embodiment that are not described in detail, reference may be made to related descriptions of other embodiments.
The simulation image reinjection method of a target position and the related equipment thereof have been described in detail above, and specific examples have been used to explain the principle and implementation of the application; the description of the above embodiments is only intended to help understand the technical solution and core idea of the application. Those of ordinary skill in the art will appreciate that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features may be replaced by equivalents; such modifications and substitutions do not depart from the spirit of the application.

Claims (10)

1. A simulated image reinjection method of a target location, the method comprising:
acquiring first camera data of a first camera set of a first vehicle and second camera data of a second camera set of a second vehicle, wherein the first camera data comprises an external parameter matrix of each first camera, the first camera set comprises a preset target camera, the second camera set comprises a plurality of second cameras, the position of the target camera is the same as a preset target position on the second vehicle, the target position is different from the position of each second camera, the second camera data comprises an external parameter matrix of each second camera and a real image shot by each second camera, the first vehicle is a vehicle of an old vehicle type, and the second vehicle is a vehicle of a new vehicle type;
determining at least one path of second cameras adjacent to the target position according to the external parameter matrix of the target camera and the external parameter matrix of each second camera;
forming a simulation image matched with the target position based on the real image and the depth map respectively corresponding to the at least one path of second cameras and the first camera data corresponding to the target camera;
and reinjecting the simulation image to a data processing unit of the second vehicle.
2. The simulation image reinjection method of a target position according to claim 1, wherein determining at least one path of second cameras adjacent to the target position according to the external parameter matrix of the target camera and the external parameter matrix of each second camera comprises:
extracting a first translation parameter, a second translation parameter and a third translation parameter corresponding to the target camera through an external parameter matrix of the target camera;
respectively extracting fourth translation parameters, fifth translation parameters and sixth translation parameters corresponding to the second cameras through an external parameter matrix of each second camera;
based on the fourth translation parameter, the fifth translation parameter and the sixth translation parameter corresponding to each second camera and the first translation parameter, the second translation parameter and the third translation parameter corresponding to the target camera, respectively calculating the Euclidean distance between each second camera and the target camera to obtain a plurality of camera distances corresponding to the second cameras;
and selecting at least one path of second cameras whose camera distances are lower than a preset camera distance threshold as the at least one path of second cameras adjacent to the target position.
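The camera-distance computation in claim 2 reduces to comparing the translation components of the extrinsic matrices. Below is a minimal Python sketch of that selection step, assuming each external parameter matrix is a 4x4 homogeneous transform whose last column holds the translation parameters; the function name and the threshold value are illustrative, not taken from the patent.

```python
import numpy as np

def select_adjacent_cameras(target_extrinsic, second_extrinsics, distance_threshold=0.5):
    """Pick the second cameras whose mounting positions lie close to the target position.

    target_extrinsic: 4x4 extrinsic matrix of the target camera (first vehicle).
    second_extrinsics: list of 4x4 extrinsic matrices, one per second camera.
    distance_threshold: maximum allowed Euclidean distance (assumed unit: metres).
    """
    # Translation components (the first/second/third translation parameters).
    t_target = target_extrinsic[:3, 3]
    selected = []
    for idx, ext in enumerate(second_extrinsics):
        t_cam = ext[:3, 3]  # the fourth/fifth/sixth translation parameters
        camera_distance = np.linalg.norm(t_cam - t_target)  # Euclidean distance
        if camera_distance < distance_threshold:
            selected.append(idx)
    return selected
```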
3. The simulation image reinjection method of a target position according to claim 1, wherein before forming the simulation image matched with the target position based on the real image and the depth map respectively corresponding to the at least one path of second cameras and the first camera data corresponding to the target camera, the method further comprises:
processing the real images shot by the at least one path of second cameras based on a structure-from-motion (SFM) algorithm to generate the depth maps corresponding to the at least one path of second cameras.
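Claim 3 only requires that the depth maps be produced with an SFM-style algorithm and leaves the concrete pipeline open. As one hedged illustration (an assumption, not the patented implementation), a two-view sketch with OpenCV can recover sparse, scale-ambiguous depth from two consecutive frames of the same second camera:

```python
import cv2
import numpy as np

def sparse_depth_two_view(img1, img2, K):
    """Tiny two-view SFM sketch: match features, recover relative pose,
    triangulate, and return per-feature depth in the first frame's camera.
    K is the 3x3 intrinsic matrix; img1/img2 are consecutive grayscale frames."""
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)
    matches = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True).match(des1, des2)
    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
    _, R, t, mask = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = K @ np.hstack([R, t])
    pts4d = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)
    pts3d = (pts4d[:3] / pts4d[3]).T   # Nx3 points, scale is only relative
    return pts1, pts3d[:, 2]           # pixel locations and their depths
```

The recovered depth is only defined up to scale; a practical pipeline would fix the scale with additional information such as vehicle odometry or a calibrated stereo baseline.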
4. The simulation image reinjection method of a target position according to claim 1, wherein forming the simulation image matched with the target position based on the real image and the depth map respectively corresponding to the at least one path of second cameras and the first camera data corresponding to the target camera comprises:
acquiring an internal parameter matrix of each second camera in the at least one path of second cameras;
based on the external parameter matrix of each second camera in the at least one path of second cameras, the internal parameter matrix of each second camera, and the depth map and real image corresponding to each of the at least one path of second cameras, projecting pixel points of the real image from a pixel coordinate system to a world coordinate system to obtain a plurality of first pixel points in the world coordinate system;
and forming a simulation image matched with the target position based on the first pixel points and the first camera data corresponding to the target camera.
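The back-projection in claim 4 follows the standard pinhole model. The sketch below assumes the external parameter matrix maps world coordinates to camera coordinates (so its inverse lifts camera points back to the world) and that the depth map stores metric depth per pixel; both are assumptions about conventions the claim does not spell out.

```python
import numpy as np

def pixels_to_world(depth_map, K, extrinsic):
    """Lift every pixel of a real image into the world coordinate system.

    depth_map: HxW array of depths along the camera's optical axis.
    K: 3x3 intrinsic matrix of the second camera.
    extrinsic: 4x4 world-to-camera transform of the second camera.
    Returns an (H*W)x3 array of first pixel points in world coordinates."""
    h, w = depth_map.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).astype(np.float64)
    depths = depth_map.reshape(-1, 1)
    # Pixel -> camera coordinates: X_cam = depth * K^-1 * [u, v, 1]^T
    cam_pts = (np.linalg.inv(K) @ pix.T).T * depths
    # Camera -> world coordinates via the inverse extrinsic transform.
    cam_h = np.hstack([cam_pts, np.ones((cam_pts.shape[0], 1))])
    world_pts = (np.linalg.inv(extrinsic) @ cam_h.T).T[:, :3]
    return world_pts
```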
5. The simulation image reinjection method of a target position according to claim 4, wherein forming the simulation image matched with the target position based on the plurality of first pixel points and the first camera data corresponding to the target camera comprises:
generating an external parameter matrix of the target position by using the external parameter matrix of the target camera;
projecting, according to the external parameter matrix of the target position, part or all of the first pixel points to a camera coordinate system corresponding to the target camera to obtain a plurality of second pixel points in the camera coordinate system;
and forming a simulation image matched with the target position based on the second pixel points and the first camera data corresponding to the target camera.
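Claim 5 then transforms those world points into the camera coordinate system of the target position. Under the same world-to-camera convention as above, and reusing the target camera's external parameter matrix as the external parameter matrix of the target position (as the claim suggests), a minimal sketch is:

```python
import numpy as np

def world_to_target_camera(world_pts, target_extrinsic):
    """Transform first pixel points (world coordinates) into the camera coordinate
    system of the target position, yielding the second pixel points of claim 5.

    world_pts: Nx3 array of world-coordinate points.
    target_extrinsic: 4x4 world-to-camera transform generated from the target camera."""
    pts_h = np.hstack([world_pts, np.ones((world_pts.shape[0], 1))])
    cam_pts = (target_extrinsic @ pts_h.T).T[:, :3]
    return cam_pts
```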
6. The simulation image reinjection method of a target position according to claim 5, wherein projecting part or all of the first pixel points to the camera coordinate system corresponding to the target camera according to the external parameter matrix of the target position to obtain a plurality of second pixel points in the camera coordinate system comprises:
acquiring a view angle range of the target camera;
judging whether each first pixel point in the plurality of first pixel points is located within the view angle range;
if the first pixel point is located within the view angle range, projecting the first pixel point to the camera coordinate system corresponding to the target position; and if the first pixel point is located outside the view angle range, filtering out the first pixel point.
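The view-angle filtering of claim 6 can be expressed as an angular test on each candidate point before projection. The half-angle values below are illustrative placeholders for the target camera's actual field of view, which the claim does not specify:

```python
import numpy as np

def filter_by_view_angle(cam_pts, half_fov_h_deg=60.0, half_fov_v_deg=40.0):
    """Keep only camera-coordinate points that fall inside the target camera's
    viewing frustum; everything else is filtered out (claim 6).

    cam_pts: Nx3 points in the target camera coordinate system (z pointing forward)."""
    x, y, z = cam_pts[:, 0], cam_pts[:, 1], cam_pts[:, 2]
    in_front = z > 0
    # Compare the horizontal and vertical angles against the half field of view.
    ang_h = np.degrees(np.arctan2(np.abs(x), z))
    ang_v = np.degrees(np.arctan2(np.abs(y), z))
    keep = in_front & (ang_h <= half_fov_h_deg) & (ang_v <= half_fov_v_deg)
    return cam_pts[keep]
```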
7. The simulation image reinjection method of a target position according to claim 5, wherein the first camera data further comprises an internal parameter matrix of each first camera, and forming the simulation image matched with the target position based on the plurality of second pixel points and the first camera data corresponding to the target camera comprises:
acquiring an internal parameter matrix of the target camera;
generating an internal parameter matrix of the target position by using the internal parameter matrix of the target camera;
projecting, according to the internal parameter matrix of the target position, each second pixel point in the plurality of second pixel points to a pixel coordinate system corresponding to the target position to obtain a plurality of third pixel points in the pixel coordinate system;
and forming a simulation image matched with the target position based on the third pixel points.
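Claim 7 completes the chain by projecting the retained points through the internal parameter matrix of the target position into pixel coordinates and writing their colours into the simulation image. The sketch assumes each camera-space point carries the colour of the real-image pixel it originated from, and omits occlusion handling (keeping the nearest point per pixel) for brevity; all names are illustrative:

```python
import numpy as np

def project_to_simulation_image(cam_pts, colors, K_target, height, width):
    """Map second pixel points into the pixel coordinate system of the target
    position (third pixel points) and paint them into an empty simulation image.

    cam_pts: Nx3 points in the target camera coordinate system (assumed z > 0,
             i.e. already filtered by the view-angle test of claim 6).
    colors:  Nx3 uint8 colours taken from the originating real-image pixels."""
    sim = np.zeros((height, width, 3), dtype=np.uint8)
    z = cam_pts[:, 2]
    uvw = (K_target @ cam_pts.T).T
    u = np.round(uvw[:, 0] / z).astype(int)
    v = np.round(uvw[:, 1] / z).astype(int)
    valid = (u >= 0) & (u < width) & (v >= 0) & (v < height)
    sim[v[valid], u[valid]] = colors[valid]   # no z-buffering in this sketch
    return sim
```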
8. The simulation image reinjection method of a target position according to claim 1, wherein reinjecting the simulation image to the data processing unit of the second vehicle comprises:
filling missing pixel points in the simulation image by adopting a bilinear interpolation algorithm and/or an image restoration algorithm to obtain a fitting image corresponding to the target position;
and reinjecting the fitting image to a data processing unit of the second vehicle.
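For the hole filling of claim 8, both bilinear interpolation and classical image restoration (inpainting) are standard operations. The sketch below shows only the inpainting branch, using OpenCV's Telea method and treating unpainted all-zero pixels as missing; the mask construction is an assumption for illustration:

```python
import cv2
import numpy as np

def fill_missing_pixels(sim_image):
    """Fill unpainted (all-zero) pixels of the simulation image to obtain the
    fitting image that is reinjected to the second vehicle's data processing unit."""
    # Pixels that no projected point reached are treated as missing.
    missing_mask = np.all(sim_image == 0, axis=2).astype(np.uint8)
    fitted = cv2.inpaint(sim_image, missing_mask, 3, cv2.INPAINT_TELEA)
    return fitted
```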
9. A simulation image reinjection device of a target position, the simulation image reinjection device comprising:
the camera data acquisition module is used for acquiring first camera data of a first camera set of a first vehicle and second camera data of a second camera set of a second vehicle, wherein the first camera data comprises an external parameter matrix of each first camera, the first camera set comprises a preset target camera, the second camera set comprises a plurality of second cameras, the position of the target camera is the same as a preset target position on the second vehicle, the position of the target camera is different from the position of each second camera, the second camera data comprises an external parameter matrix of each second camera and a real image shot by each second camera, the first vehicle is a vehicle of an old vehicle type, and the second vehicle is a vehicle of a new vehicle type;
the camera determining module is used for determining at least one path of second cameras adjacent to the target position according to the external parameter matrix of the target camera and the external parameter matrix of each second camera;
the image forming module is used for forming a simulation image matched with the target position based on the real image and the depth map respectively corresponding to the at least one path of second cameras and the first camera data corresponding to the target camera;
and the reinjection module is used for reinjecting the simulation image to a data processing unit of the second vehicle.
10. An electronic device, comprising:
One or more processors;
storage means for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the simulation image reinjection method of a target position according to any one of claims 1 to 8.
CN202211538620.1A 2022-12-01 2022-12-01 Simulation image reinjection method of target position and related equipment thereof Active CN115797442B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211538620.1A CN115797442B (en) 2022-12-01 2022-12-01 Simulation image reinjection method of target position and related equipment thereof

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211538620.1A CN115797442B (en) 2022-12-01 2022-12-01 Simulation image reinjection method of target position and related equipment thereof

Publications (2)

Publication Number Publication Date
CN115797442A CN115797442A (en) 2023-03-14
CN115797442B true CN115797442B (en) 2024-06-07

Family

ID=85444986

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211538620.1A Active CN115797442B (en) 2022-12-01 2022-12-01 Simulation image reinjection method of target position and related equipment thereof

Country Status (1)

Country Link
CN (1) CN115797442B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
FR3034555A1 (en) * 2015-04-03 2016-10-07 Continental Automotive France METHOD FOR DETERMINING THE DIRECTION OF THE MOVEMENT OF A MOTOR VEHICLE
CN113868873A (en) * 2021-09-30 2021-12-31 重庆长安汽车股份有限公司 Automatic driving simulation scene expansion method and system based on data reinjection
CN114299230A (en) * 2021-12-21 2022-04-08 中汽创智科技有限公司 Data generation method and device, electronic equipment and storage medium
CN114723820A (en) * 2022-03-09 2022-07-08 福思(杭州)智能科技有限公司 Road data multiplexing method, driving assisting system, driving assisting device and computer equipment
CN114821497A (en) * 2022-02-24 2022-07-29 广州文远知行科技有限公司 Method, device and equipment for determining position of target object and storage medium

Also Published As

Publication number Publication date
CN115797442A (en) 2023-03-14

Similar Documents

Publication Publication Date Title
CN110349251B (en) Three-dimensional reconstruction method and device based on binocular camera
US11501507B2 (en) Motion compensation of geometry information
KR100793838B1 (en) Appratus for findinng the motion of camera, system and method for supporting augmented reality in ocean scene using the appratus
Saurer et al. Rolling shutter stereo
US8755630B2 (en) Object pose recognition apparatus and object pose recognition method using the same
JPWO2018235163A1 (en) Calibration apparatus, calibration chart, chart pattern generation apparatus, and calibration method
CN112116639B (en) Image registration method and device, electronic equipment and storage medium
KR20100119559A (en) Method and system for converting 2d image data to stereoscopic image data
CN112991515B (en) Three-dimensional reconstruction method, device and related equipment
CN116051747A (en) House three-dimensional model reconstruction method, device and medium based on missing point cloud data
CN111882655A (en) Method, apparatus, system, computer device and storage medium for three-dimensional reconstruction
CN115578296B (en) Stereo video processing method
CN116433843A (en) Three-dimensional model reconstruction method and device based on binocular vision reconstruction route
CN116579962A (en) Panoramic sensing method, device, equipment and medium based on fisheye camera
CN112270748B (en) Three-dimensional reconstruction method and device based on image
CN114463408A (en) Free viewpoint image generation method, device, equipment and storage medium
CN115797442B (en) Simulation image reinjection method of target position and related equipment thereof
CN109741245B (en) Plane information insertion method and device
CN109859313B (en) 3D point cloud data acquisition method and device, and 3D data generation method and system
CN111178501B (en) Optimization method, system, electronic equipment and device for dual-cycle countermeasure network architecture
CN111489439B (en) Three-dimensional line graph reconstruction method and device and electronic equipment
Veldandi et al. Video stabilization by estimation of similarity transformation from integral projections
Wu et al. 3d reconstruction from public webcams
CN109379577B (en) Video generation method, device and equipment of virtual viewpoint
Peng et al. Projective reconstruction with occlusions

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant