CN111242842A - Image conversion method, terminal and storage medium - Google Patents

Image conversion method, terminal and storage medium

Info

Publication number
CN111242842A
Authority
CN
China
Prior art keywords
image
coordinates
pixel points
view image
conversion table
Prior art date
Legal status
Granted
Application number
CN202010044761.2A
Other languages
Chinese (zh)
Other versions
CN111242842B (en)
Inventor
罗年
Current Assignee
Shenzhen Zhongtian Anchi Co ltd
Original Assignee
Shenzhen Zhongtian Anchi Co ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Zhongtian Anchi Co ltd filed Critical Shenzhen Zhongtian Anchi Co ltd
Priority to CN202010044761.2A
Publication of CN111242842A
Application granted
Publication of CN111242842B
Legal status: Active
Anticipated expiration

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 3/00 - Geometric image transformations in the plane of the image
    • G06T 3/04 - Context-preserving transformations, e.g. by using an importance map
    • G06T 3/047 - Fisheye or wide-angle transformations
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/80 - Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 - Subject of image; Context of image processing
    • G06T 2207/30248 - Vehicle exterior or interior
    • G06T 2207/30252 - Vehicle exterior; Vicinity of vehicle

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses an image conversion method, a terminal and a storage medium. The image conversion method comprises the following steps: calibrating a camera to obtain a perspective transformation matrix; obtaining a corresponding relation between the coordinates of the pixel points in a first aerial view image of a first image and the coordinates of the pixel points in the first image according to the perspective transformation matrix, a preset resolution and the coordinates of the pixel points in the first image collected by the camera; establishing a first coordinate conversion table according to the corresponding relation between the coordinates of the pixel points in the first aerial view image and the coordinates of the pixel points in the first image; and acquiring a second image in real time through the camera, and obtaining a second aerial view image of the second image according to the second image and the first coordinate conversion table. Image conversion is realized by a table look-up method, which saves conversion time and meets the real-time requirement.

Description

Image conversion method, terminal and storage medium
Technical Field
The invention relates to the technical field of automatic driving and auxiliary driving of automobiles, in particular to an image conversion method, a terminal and a storage medium.
Background
At present, in the field of automatic driving and assisted driving of automobiles, camera-based positioning mainly involves capturing an original image with a camera, obtaining a bird's-eye view image through image conversion, and then positioning and analyzing the vehicle according to the original image and the bird's-eye view image.
The above is only for the purpose of assisting understanding of the technical aspects of the present invention, and does not represent an admission that the above is prior art.
Disclosure of Invention
The invention mainly aims to provide an image conversion method, a terminal and a storage medium, so as to solve the problems of slow image conversion and poor real-time performance in automatic driving and assisted driving systems of automobiles.
In order to achieve the above object, the image conversion method provided by the present invention comprises the following steps:
calibrating the camera to obtain a perspective transformation matrix;
obtaining a corresponding relation between the coordinates of the pixel points in the first aerial view image of the first image and the coordinates of the pixel points in the first image according to the perspective transformation matrix, the preset resolution and the coordinates of the pixel points in the first image collected by the camera;
establishing a first coordinate conversion table according to the corresponding relation between the coordinates of the pixel points in the first aerial view image and the coordinates of the pixel points in the first image;
and acquiring a second image in real time through the camera, and acquiring a second aerial view image of the second image according to the second image and the first coordinate conversion table.
Optionally, the step of obtaining a corresponding relationship between the coordinates of the pixel points in the first bird's-eye view image and the coordinates of the pixel points in the first image according to the perspective transformation matrix, the coordinates of the pixel points in the first image acquired by the camera, and the preset resolution includes:
obtaining world coordinates corresponding to the coordinates of the pixel points in the first image according to the perspective transformation matrix and the coordinates of the pixel points in the first image;
obtaining coordinates of pixel points in the first aerial view image according to world coordinates corresponding to the coordinates of the pixel points in the first image and preset resolution;
and obtaining the corresponding relation between the coordinates of the pixel points in the first aerial view image and the coordinates of the pixel points in the first image according to the coordinates of the pixel points in the first aerial view image and the coordinates of the pixel points in the first image.
Optionally, the step of establishing a first coordinate conversion table according to the correspondence between the coordinates of the pixel points in the first bird's-eye view image and the coordinates of the pixel points in the first image includes:
establishing a first initial coordinate conversion table according to the coordinates of the pixel points in the first aerial view image and the total number of the pixel points, wherein the positions of all cells in the first initial coordinate conversion table correspond to the coordinates of the pixel points in the first aerial view image one by one;
according to the corresponding relation between the coordinates of the pixel points in the first aerial view image and the coordinates of the pixel points in the first image and the corresponding relation between the positions of the cells in the first initial coordinate conversion table and the coordinates of the pixel points in the first aerial view image, taking the coordinates of the pixel points in the first image as the numerical values of the corresponding cells in the first initial coordinate conversion table;
and taking a first initial coordinate conversion table with each unit cell having a corresponding numerical value as a first coordinate conversion table.
Optionally, the step of using the coordinates of the pixel points in the first image as the numerical values of the corresponding cells in the first initial coordinate conversion table according to the corresponding relationship between the coordinates of the pixel points in the first bird's-eye view image and the coordinates of the pixel points in the first image and the corresponding relationship between the positions of the cells in the first initial coordinate conversion table and the coordinates of the pixel points in the first bird's-eye view image includes:
converting the coordinates of the pixel points in the first image into corresponding hexadecimal values according to a preset shift algorithm, wherein the preset shift algorithm is A = ((u & 0xffff) << 16) | (v & 0xffff), (u, v) are the coordinates of a pixel point in the first image, and A is the hexadecimal value corresponding to the coordinates (u, v) of the pixel point in the first image;
and taking the hexadecimal value corresponding to the coordinates of the pixel points in the first image as the numerical value of the corresponding cell in the first initial coordinate conversion table according to the corresponding relationship between the coordinates of the pixel points in the first bird's-eye view image and the coordinates of the pixel points in the first image and the corresponding relationship between the positions of the cells in the first initial coordinate conversion table and the coordinates of the pixel points in the first bird's-eye view image.
Optionally, the step of acquiring a second image in real time by using a camera, and obtaining a second bird's-eye view image corresponding to the second image according to the second image and the established first coordinate conversion table includes:
inquiring coordinates of pixel points in the second image corresponding to the coordinates of the pixel points in the second bird's-eye view image of the second image from the first coordinate conversion table;
obtaining pixel values corresponding to the pixel points in the second aerial view image according to the coordinates of the pixel points in the second image corresponding to the coordinates of the pixel points in the second aerial view image and the pixel values corresponding to the pixel points in the second image;
and obtaining a second aerial view image according to the coordinates of each pixel point in the second aerial view image and the corresponding pixel value.
Optionally, after the step of obtaining a correspondence between the coordinates of the pixel point in the first bird's-eye view image and the coordinates of the pixel point in the first image according to the perspective transformation matrix, the preset resolution, and the coordinates of the pixel point in the first image acquired by the camera, the method further includes:
establishing a second conversion numerical value table according to the coordinates of the pixel points in the first aerial view image and the coordinates of the pixel points in the first image;
acquiring a third aerial view image;
inquiring and obtaining coordinates of pixel points in the third aerial view image corresponding to the coordinates of the pixel points in the original image of the third aerial view image from the second coordinate conversion table;
obtaining pixel values corresponding to the pixel points in the original image according to the coordinates of the pixel points in the third bird's-eye view image corresponding to the coordinates of the pixel points in the obtained original image and the pixel values corresponding to the pixel points in the third bird's-eye view image;
and obtaining the original image according to the coordinates of the pixel points in the original image and the corresponding pixel values.
Optionally, the step of creating a second conversion value table according to the coordinates of each pixel point in the first bird's-eye view image and the coordinates of each pixel point in the first image includes:
establishing a second initial coordinate conversion table according to the coordinates of the pixel points in the first image and the total number of the pixel points, wherein the positions of all cells in the second initial coordinate conversion table correspond to the coordinates of the pixel points in the first image one by one;
according to the corresponding relation between the coordinates of the pixel points in the first aerial view image and the coordinates of the pixel points in the first image and the corresponding relation between the positions of the cells in the second initial coordinate conversion table and the coordinates of the pixel points in the first image, taking the coordinates of the pixel points in the first aerial view image as the numerical values of the corresponding cells in the second initial coordinate conversion table;
and taking a second initial coordinate conversion table with each unit cell having a corresponding numerical value as a second coordinate conversion table.
Optionally, the step of using the coordinates of the pixel points in the first bird's-eye view image as the numerical values of the corresponding cells in the second initial coordinate conversion table according to the corresponding relationship between the coordinates of the pixel points in the first bird's-eye view image and the coordinates of the pixel points in the first image and the corresponding relationship between the positions of the cells in the second initial coordinate conversion table and the coordinates of the pixel points in the first image includes:
converting the coordinates of the pixel points in the first bird's-eye view image into corresponding hexadecimal values according to a preset shift algorithm, wherein the preset shift algorithm is B = ((w & 0xffff) << 16) | (h & 0xffff), (w, h) are the coordinates of a pixel point in the first bird's-eye view image, and B is the hexadecimal value corresponding to the coordinates (w, h) of the pixel point in the first bird's-eye view image;
and taking the hexadecimal value corresponding to the coordinates of the pixel points in the first bird's-eye view image as the numerical value of the corresponding cell in the second initial coordinate conversion table according to the corresponding relationship between the coordinates of the pixel points in the first bird's-eye view image and the coordinates of the pixel points in the first image and the corresponding relationship between the positions of the cells in the second initial coordinate conversion table and the coordinates of the pixel points in the first image.
To achieve the above object, the present invention also proposes a terminal comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the steps of the image conversion method as described above when executing the program.
To achieve the above object, the present invention further proposes a storage medium having stored thereon a computer program which, when being executed by a processor, implements the steps of the image conversion method as described above.
In the image conversion method, a perspective transformation matrix is obtained by calibrating a camera; obtaining a corresponding relation between the coordinates of the pixel points in the first aerial view image of the first image and the coordinates of the pixel points in the first image according to the perspective transformation matrix, the preset resolution and the coordinates of the pixel points in the first image collected by the camera; establishing a first coordinate conversion table according to the corresponding relation between the coordinates of the pixel points in the first aerial view image and the coordinates of the pixel points in the first image; the second image is collected in real time through the camera, the second aerial view image of the second image is obtained according to the second image and the first coordinate conversion table, rapid conversion is achieved through a table look-up method when the original image is converted into the aerial view image, time consumption during image mutual conversion is effectively reduced, and performance of an automatic driving and auxiliary driving system is greatly improved.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to the structures shown in the drawings without creative efforts.
Fig. 1 is a schematic terminal structure diagram of a hardware operating environment according to an embodiment of the present invention;
FIG. 2 is a flowchart illustrating an image transformation method according to a first embodiment of the present invention;
FIG. 3 is a detailed flowchart of step S200 in the first embodiment of the image conversion method according to the present invention;
FIG. 4 is a flowchart illustrating a detailed process of step S300 in the first embodiment of the image transforming method according to the present invention;
FIG. 5 is a flowchart illustrating a refinement of step S400 in the first embodiment of the image conversion method according to the present invention;
fig. 6 is a flowchart illustrating an image conversion method according to a second embodiment of the present invention.
The implementation, functional features and advantages of the objects of the present invention will be further explained with reference to the accompanying drawings.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The embodiment of the invention provides an image conversion method, a terminal and a storage medium.
As shown in fig. 1, the method of the present invention is applicable to a terminal, which may be an automobile. The terminal may include: a processor 1001, such as a CPU, a communication bus 1002, a user interface 1003, a network interface 1004, and a memory 1005. The communication bus 1002 is used to enable communication between these components. The user interface 1003 may comprise a touch pad, a touch screen or a keyboard, and may optionally also comprise a standard wired or wireless interface. The network interface 1004 may optionally include a standard wired interface or a wireless interface (e.g., a WI-FI interface). The memory 1005 may be a high-speed RAM memory or a non-volatile memory (e.g., a magnetic disk memory); alternatively, the memory 1005 may be a storage device separate from the processor 1001. The terminal further comprises a camera, which is used for acquiring images of a preset scene area in real time.
Optionally, the terminal may further include an RF (Radio Frequency) circuit, an audio circuit, a WiFi module, and the like. Of course, the terminal may also be configured with other sensors such as a gyroscope, a barometer, a hygrometer and a thermometer, which are not described herein again.
Those skilled in the art will appreciate that the terminal structure shown in fig. 1 is not intended to be limiting and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components.
As shown in fig. 1, a memory 1005, which is a kind of computer storage medium, may include therein an operating system, a network communication module, a user interface module, and a computer program.
In the terminal shown in fig. 1, the processor 1001 may be configured to call an image conversion program stored in the memory 1005 and perform the following operations:
calibrating the camera to obtain a perspective transformation matrix;
obtaining a corresponding relation between the coordinates of the pixel points in the first aerial view image of the first image and the coordinates of the pixel points in the first image according to the perspective transformation matrix, the preset resolution and the coordinates of the pixel points in the first image collected by the camera;
establishing a first coordinate conversion table according to the corresponding relation between the coordinates of the pixel points in the first aerial view image and the coordinates of the pixel points in the first image;
and acquiring a second image in real time through the camera, and acquiring a second aerial view image of the second image according to the second image and the first coordinate conversion table.
Further, the processor 1001 may call the computer program stored in the memory 1005, and also perform the following operations:
obtaining world coordinates corresponding to the coordinates of the pixel points in the first image according to the perspective transformation matrix and the coordinates of the pixel points in the first image;
obtaining coordinates of pixel points in the first aerial view image according to world coordinates corresponding to the coordinates of the pixel points in the first image and preset resolution;
and obtaining the corresponding relation between the coordinates of the pixel points in the first aerial view image and the coordinates of the pixel points in the first image according to the coordinates of the pixel points in the first aerial view image and the coordinates of the pixel points in the first image.
Further, the processor 1001 may call the computer program stored in the memory 1005, and also perform the following operations:
establishing a first initial coordinate conversion table according to the coordinates of the pixel points in the first aerial view image and the total number of the pixel points, wherein the positions of all cells in the first initial coordinate conversion table correspond to the coordinates of the pixel points in the first aerial view image one by one;
according to the corresponding relation between the coordinates of the pixel points in the first aerial view image and the coordinates of the pixel points in the first image and the corresponding relation between the positions of the cells in the first initial coordinate conversion table and the coordinates of the pixel points in the first aerial view image, taking the coordinates of the pixel points in the first image as the numerical values of the corresponding cells in the first initial coordinate conversion table;
and taking a first initial coordinate conversion table with each unit cell having a corresponding numerical value as a first coordinate conversion table.
Further, the processor 1001 may call the computer program stored in the memory 1005, and also perform the following operations:
converting the coordinates of the pixel points in the first image into corresponding hexadecimal values according to a preset shift algorithm, wherein the preset shift algorithm is A = ((u & 0xffff) << 16) | (v & 0xffff), (u, v) are the coordinates of a pixel point in the first image, and A is the hexadecimal value corresponding to the coordinates (u, v) of the pixel point in the first image;
and taking the hexadecimal value corresponding to the coordinates of the pixel points in the first image as the numerical value of the corresponding cell in the first initial coordinate conversion table according to the corresponding relationship between the coordinates of the pixel points in the first bird's-eye view image and the coordinates of the pixel points in the first image and the corresponding relationship between the positions of the cells in the first initial coordinate conversion table and the coordinates of the pixel points in the first bird's-eye view image.
Further, the processor 1001 may call the computer program stored in the memory 1005, and also perform the following operations:
inquiring coordinates of pixel points in the second image corresponding to the coordinates of the pixel points in the second bird's-eye view image of the second image from the first coordinate conversion table;
obtaining pixel values corresponding to the pixel points in the second aerial view image according to the coordinates of the pixel points in the second image corresponding to the coordinates of the pixel points in the second aerial view image and the pixel values corresponding to the pixel points in the second image;
and obtaining a second aerial view image according to the coordinates of each pixel point in the second aerial view image and the corresponding pixel value.
Further, the processor 1001 may call the computer program stored in the memory 1005, and also perform the following operations:
establishing a second conversion numerical value table according to the coordinates of the pixel points in the first aerial view image and the coordinates of the pixel points in the first image;
acquiring a third aerial view image;
inquiring and obtaining coordinates of pixel points in the third aerial view image corresponding to the coordinates of the pixel points in the original image of the third aerial view image from the second coordinate conversion table;
obtaining pixel values corresponding to the pixel points in the original image according to the coordinates of the pixel points in the third bird's-eye view image corresponding to the coordinates of the pixel points in the obtained original image and the pixel values corresponding to the pixel points in the third bird's-eye view image;
and obtaining the original image according to the coordinates of the pixel points in the original image and the corresponding pixel values.
Further, the processor 1001 may call the computer program stored in the memory 1005, and also perform the following operations:
establishing a second initial coordinate conversion table according to the coordinates of the pixel points in the first image and the total number of the pixel points, wherein the positions of all cells in the second initial coordinate conversion table correspond to the coordinates of the pixel points in the first image one by one;
according to the corresponding relation between the coordinates of the pixel points in the first aerial view image and the coordinates of the pixel points in the first image and the corresponding relation between the positions of the cells in the second initial coordinate conversion table and the coordinates of the pixel points in the first image, taking the coordinates of the pixel points in the first aerial view image as the numerical values of the corresponding cells in the second initial coordinate conversion table;
and taking a second initial coordinate conversion table with each unit cell having a corresponding numerical value as a second coordinate conversion table.
Further, the processor 1001 may call the computer program stored in the memory 1005, and also perform the following operations:
converting the coordinates of the pixel points in the first bird's-eye view image into corresponding hexadecimal values according to a preset shift algorithm, wherein the preset shift algorithm is B = ((w & 0xffff) << 16) | (h & 0xffff), (w, h) are the coordinates of a pixel point in the first bird's-eye view image, and B is the hexadecimal value corresponding to the coordinates (w, h) of the pixel point in the first bird's-eye view image;
and taking the hexadecimal value corresponding to the coordinates of the pixel points in the first bird's-eye view image as the numerical value of the corresponding cell in the second initial coordinate conversion table according to the corresponding relationship between the coordinates of the pixel points in the first bird's-eye view image and the coordinates of the pixel points in the first image and the corresponding relationship between the positions of the cells in the second initial coordinate conversion table and the coordinates of the pixel points in the first image.
Based on the above hardware structure, various embodiments of the image conversion method in the present application are proposed.
Referring to fig. 2, a first embodiment of the present invention provides an image conversion method including:
step S100, calibrating a camera to obtain a perspective transformation matrix;
in this embodiment, at least one camera is installed on the vehicle in front of the vehicle, but it is needless to say that cameras may be installed on other installation positions on the vehicle, and the number of cameras and the installation positions may be changed. For example, four wide-angle cameras are erected at the front, the rear, the left and the right of the vehicle, and comprise a front camera, a rear camera, a left camera and a right camera, so that the cameras can cover all the visual field areas around the vehicle. For example, the front camera is arranged above the exhaust fan of the vehicle and is positioned at the center of the width of the vehicle, and the shooting angle of the front camera is a scene which is inclined downwards and towards the outside of the vehicle body; the right camera is arranged below the right rear view mirror, and the shooting angle of the right camera is obliquely downward to the outside of the vehicle body; the left camera is arranged below the left rearview mirror, and the shooting angle of the left camera is obliquely downward to the outside of the vehicle body; the rear camera is arranged above the license plate and is positioned in the center of the width of the vehicle, and the shooting angle of the rear camera is a scene which is inclined downwards and outwards from the vehicle body. The adopted cameras are wide-angle cameras with the visual fields larger than 180 degrees, so that the situation that the cameras collect can effectively cover the 360-degree visual field area around the vehicle body, and a guarantee is provided for subsequent panoramic splicing. Of course, in other embodiments, the number of cameras and the installation positions may be changed as long as the cameras cover a field area of 360 ° around the vehicle body.
Before a first image is collected through a camera, the camera is calibrated to obtain a perspective transformation matrix.
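As an illustrative sketch only (the patent does not prescribe a specific calibration routine), the perspective transformation matrix can be estimated from four ground points whose world coordinates are known; the point values and the use of OpenCV's getPerspectiveTransform below are assumptions of this example.

```python
import numpy as np
import cv2

# Hypothetical calibration data: four ground points seen in the camera image (pixel
# coordinates) and their measured world coordinates on the ground plane (e.g. in mm).
image_points = np.float32([[120, 460], [520, 455], [400, 330], [240, 332]])
world_points = np.float32([[-1000, 2000], [1000, 2000], [1000, 6000], [-1000, 6000]])

# 3x3 perspective transformation (homography) mapping first-image pixels to world coordinates.
perspective_matrix = cv2.getPerspectiveTransform(image_points, world_points)
```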
Step S200, obtaining a corresponding relation between the coordinates of the pixel points in the first aerial view image of the first image and the coordinates of the pixel points in the first image according to the perspective transformation matrix, the preset resolution and the coordinates of the pixel points in the first image collected through the camera;
the corresponding relation between the coordinates of the pixel points in the first aerial view image of the first image and the coordinates of the pixel points in the first image is obtained through the perspective transformation matrix obtained after the camera is calibrated, the preset resolution and the coordinates of the pixel points in the first image collected through the camera.
It should be noted that, before the step of obtaining the corresponding relationship between the coordinates of the pixel points in the first bird's-eye view image of the first image and the coordinates of the pixel points in the first image according to the perspective transformation matrix, the preset resolution and the coordinates of the pixel points in the first image acquired by the camera, the acquired image may be cropped. The acquisition range of the camera is wide and far, while the range in the first original image that needs to be converted into the bird's-eye view is limited. Taking a road image acquired by the camera as an example, part of the first original image does not belong to the road but to the areas on both sides of the road, which provide no useful information for identifying the road scene; other areas of the first original image reflect places far away from the vehicle, where the pixels are blurred and provide little useful information. Therefore, after the first original image is acquired by the camera, it may be cropped according to a preset coordinate range, and the cropped image is used as the first image collected by the camera. For example, the acquired image is 640 × 480 and the upper left corner of the image is taken as the origin; the preset coordinate range includes an abscissa range and an ordinate range, the abscissa range is 80-560, and the ordinate range begins at 320.
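A minimal cropping sketch following the example figures above; the lower ordinate bound is taken from the text, while the upper bound of 480 is an assumed value since it is not stated.

```python
import numpy as np

original_image = np.zeros((480, 640, 3), dtype=np.uint8)  # stand-in for a 640 x 480 camera frame
x_min, x_max = 80, 560     # abscissa range from the example above
y_min, y_max = 320, 480    # ordinate range; the upper bound 480 is an assumption
first_image = original_image[y_min:y_max, x_min:x_max]     # cropped image used as the first image
```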
Specifically, referring to fig. 3, fig. 3 is a detailed schematic view of a flow of step S200 in the embodiment of the present application, based on the embodiment, the step S200 specifically includes:
step S210, obtaining world coordinates corresponding to the coordinates of the pixel points in the first image according to the perspective transformation matrix and the coordinates of the pixel points in the first image;
step S220, obtaining coordinates of pixel points in the first aerial view image according to world coordinates corresponding to the coordinates of the pixel points in the first image and preset resolution;
step S230, obtaining a corresponding relationship between the coordinates of the pixel points in the first bird 'S-eye view image and the coordinates of the pixel points in the first image according to the coordinates of the pixel points in the first bird' S-eye view image and the coordinates of the pixel points in the first image.
Firstly, the coordinates of each pixel point in the first image are converted by the obtained perspective transformation matrix to obtain the world coordinates corresponding to the coordinates of each pixel point in the first image. For example, with M denoting the 3 × 3 perspective transformation matrix, the perspective transformation formula can be written as
s · (i, j, 1)^T = M · (u, v, 1)^T,
where (u, v) are the coordinates of a pixel point in the first image, (i, j) are the corresponding world coordinates, and s is the homogeneous scale factor obtained from the third row of the product.
After the world coordinates corresponding to the coordinates of each pixel point in the first image are obtained, they are converted into the coordinates of the pixel points in the first bird's-eye view image according to the preset resolution. For example, the preset resolution includes a horizontal resolution and a vertical resolution, which are 50 and 200 respectively; the world coordinates corresponding to the coordinates (330, 280) of a certain pixel point in the first image are (5000, 6000); dividing the horizontal and vertical coordinate values of the world coordinates by the horizontal and vertical resolutions respectively gives (100, 30), which are the coordinates of the pixel point in the first bird's-eye view image corresponding to the coordinates (330, 280) in the first image.
The coordinates of each pixel point in the first bird's-eye view image are obtained by converting the coordinates of the pixel points in the first image through the perspective transformation matrix and the preset resolution; that is, the coordinates of each pixel point in the first bird's-eye view image correspond to the coordinates of exactly one pixel point in the first image. The corresponding relationship between the coordinates of the pixel points in the first bird's-eye view image and the coordinates of the pixel points in the first image is therefore obtained from these two sets of coordinates.
In the first bird's-eye view image, the coordinates of several pixel points may correspond to the coordinates of one and the same pixel point in the first image.
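Steps S210 and S220 can be sketched as follows; the function name, the homogeneous-division form of the perspective transformation and the default resolutions (taken from the example above) are illustrative assumptions, not the patent's reference implementation. With the example figures, world coordinates (5000, 6000) and resolutions 50 and 200 give the bird's-eye coordinates (100, 30).

```python
import numpy as np

def birds_eye_coordinate(u, v, perspective_matrix, res_h=50.0, res_v=200.0):
    """Map a pixel (u, v) of the first image to a pixel (w, h) of the first bird's-eye view image."""
    # Step S210: world coordinates via the perspective transformation, with homogeneous division.
    x, y, s = perspective_matrix @ np.array([u, v, 1.0])
    i, j = x / s, y / s
    # Step S220: divide the horizontal/vertical world coordinates by the preset resolutions.
    return int(round(i / res_h)), int(round(j / res_v))
```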
Step S300, establishing a first coordinate conversion table according to the corresponding relation between the coordinates of the pixel points in the first aerial view image and the coordinates of the pixel points in the first image;
after the terminal acquires the corresponding relation between the coordinates of the pixel points in the first aerial view image and the coordinates of the pixel points in the first image, a first coordinate conversion table is established according to the relation, and the first coordinate conversion table is used for rapidly inquiring the coordinates of the pixel points in the original image corresponding to the aerial view image to be generated according to the coordinates of the pixel points in the aerial view image to be generated.
Specifically, referring to fig. 4, fig. 4 is a detailed schematic view of a flow of step S300 in the embodiment of the present application, based on the embodiment, the step S300 specifically includes:
step S310, establishing a first initial coordinate conversion table according to the coordinates of the pixel points in the first aerial view image and the total number of the pixel points, wherein the positions of all cells in the first initial coordinate conversion table correspond to the coordinates of the pixel points in the first aerial view image one by one;
step S320, taking the coordinates of the pixel points in the first image as the numerical values of the corresponding cells in the first initial coordinate conversion table according to the corresponding relationship between the coordinates of the pixel points in the first bird 'S-eye view image and the coordinates of the pixel points in the first image and the corresponding relationship between the positions of the cells in the first initial coordinate conversion table and the coordinates of the pixel points in the first bird' S-eye view image;
in step S330, a first initial coordinate conversion table with each cell having a corresponding numerical value is used as a first coordinate conversion table.
The terminal first constructs a first initial coordinate conversion table according to the coordinates and the total number of pixel points in the first bird's-eye view image. The total number of cells in the first initial coordinate conversion table is the same as the total number of pixel points in the first bird's-eye view image, the number of cells in each row is the same as the number of pixel points in the horizontal direction of the first bird's-eye view image, and the number of cells in each column is the same as the number of pixel points in the vertical direction of the first bird's-eye view image. For example, if the first bird's-eye view image is 128 × 100, the total number of pixel points is 12800, so the first initial coordinate conversion table has 12800 cells in total, 128 cells in each row and 100 cells in each column. The position of a cell in the first initial coordinate conversion table, that is, the row and column where the cell is located, corresponds one to one to the coordinates of a pixel point in the first bird's-eye view image; for example, the coordinates of the pixel point in the first bird's-eye view image corresponding to the cell in row 100 and column 30 of the first initial coordinate conversion table are (100, 30).
Since each cell in the first initial coordinate conversion table established by the terminal is not assigned, after the terminal establishes the first initial coordinate conversion table, the coordinates of the pixel points in the first bird's-eye view image corresponding to each cell in the first initial coordinate conversion table are determined according to the corresponding relationship between the positions of each cell in the first initial coordinate conversion table and the coordinates of the pixel points in the first bird's-eye view image, then the coordinates of the pixel points in the first image corresponding to each cell in the first initial coordinate conversion table are determined according to the corresponding relationship between the coordinates of the pixel points in the first bird's-eye view image and the coordinates of the pixel points in the first image, and finally the coordinates of the pixel points in the first image are used as the values of the corresponding cells in the first initial coordinate conversion table, for example, the coordinates of the pixel points in the first bird's-eye view image corresponding to the cell in row 100 and column 30 in the first initial coordinate conversion table are (100, 30) and the coordinates (100, 30) of the pixel point in the first bird's eye view image correspond to the coordinates (330, 280) of the pixel point in the first image, so the cell of the 100 th row and 30 th column in the first initial coordinate conversion table corresponds to the coordinates (330, 280) of the pixel point in the first image, and the terminal takes the coordinates (330, 280) of the pixel point as the numerical value in the cell of the 100 th row and 30 th column in the first initial coordinate conversion table.
And after the terminal gives a numerical value to each unit cell in the first initial coordinate conversion table, taking the first initial coordinate conversion table with each unit cell having a corresponding numerical value as the first coordinate conversion table, namely completing the establishment of the first coordinate conversion table.
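A minimal sketch of building the first coordinate conversion table; storing it as a two-channel NumPy array (cell position = bird's-eye coordinates, cell value = first-image coordinates) and reusing the birds_eye_coordinate helper from the earlier sketch are assumptions of this example.

```python
import numpy as np

def build_first_table(first_image_shape, bev_shape, perspective_matrix):
    """Returns an int32 array of shape (bev_height, bev_width, 2): cell (h, w) stores the
    first-image coordinates (u, v) that map to bird's-eye pixel (w, h); (-1, -1) marks
    cells for which no source pixel was found."""
    img_h, img_w = first_image_shape
    bev_h, bev_w = bev_shape
    table = np.full((bev_h, bev_w, 2), -1, dtype=np.int32)
    for v in range(img_h):
        for u in range(img_w):
            w, h = birds_eye_coordinate(u, v, perspective_matrix)  # helper from the sketch above
            if 0 <= w < bev_w and 0 <= h < bev_h:
                table[h, w] = (u, v)
    return table
```

The table is built once, offline; only the per-pixel lookup used in step S400 runs for every frame.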
And step S400, acquiring a second image in real time through the camera, and obtaining a second aerial view image of the second image according to the second image and the first coordinate conversion table.
After the first coordinate conversion table is established at the terminal, after a subsequent terminal acquires a second image in real time through a camera, coordinates of pixel points in the second image corresponding to the pixel points in the second bird's-eye view image of the second image are inquired according to the first coordinate conversion table, then pixel values of the pixel points corresponding to the coordinates in the second image are acquired according to the inquired coordinates of the pixel points in the second image and serve as the pixel values of the corresponding pixel points in the second bird's-eye view image, and finally, certain pixel values are given to the pixel points in the second bird's-eye view image, so that the second bird's-eye view image of the second image is finally acquired.
Specifically, referring to fig. 5, fig. 5 is a detailed schematic view of a flow of step S400 in the embodiment of the present application, based on the embodiment, the step S400 specifically includes:
step S410, coordinates of pixel points in the second image corresponding to the coordinates of the pixel points in the second bird' S-eye view image of the second image are inquired and obtained from the first coordinate conversion table;
step S420, obtaining pixel values corresponding to the pixel points in the second bird 'S-eye view image according to the coordinates of the pixel points in the second image corresponding to the coordinates of the pixel points in the obtained second bird' S-eye view image and the pixel values corresponding to the pixel points in the second image;
step S430, obtaining a second bird 'S-eye view image according to the coordinates of each pixel point in the second bird' S-eye view image and the corresponding pixel value.
The terminal queries coordinates of pixel points in the second image corresponding to coordinates of each pixel point in the second bird's-eye view image of the second image from the first coordinate conversion table, then obtains pixel values of the pixel points corresponding to the coordinates in the second image according to the coordinates of the pixel points in the second image corresponding to the coordinates of each pixel point of the second bird's-eye view image, and finally obtains the pixel values corresponding to the pixel points in the second bird's-eye view image by taking the pixels as the pixel values of the corresponding pixel points in the second bird's-eye view image; and the terminal composes a second aerial view image of the second image according to the coordinates of each pixel point in the second aerial view image and the pixel value corresponding to each pixel point. For example, the coordinates (100, 30) of the pixel point in the second bird's eye view image correspond to the cell in the 100 th row and 30 th column in the first coordinate conversion table, the numerical value of the cell in the 100 th row and 30 th column in the first coordinate conversion table is looked up as (330, 280), the coordinates (100, 30) of the pixel point in the second bird's eye view image correspond to the coordinates (330, 280) of the pixel point in the second image, and the pixel value of the pixel point in the second image with the coordinates (330, 280) is taken as the pixel value of the pixel point in the second bird's eye view image (100, 30).
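The real-time conversion of step S400 then reduces to a per-pixel gather over the table; the sketch below assumes the two-channel table layout from the previous example.

```python
import numpy as np

def to_birds_eye(second_image, first_table):
    """Build the second bird's-eye view image by looking up, for every bird's-eye pixel,
    the source coordinates stored in the first coordinate conversion table."""
    bev_h, bev_w, _ = first_table.shape
    birds_eye = np.zeros((bev_h, bev_w, second_image.shape[2]), dtype=second_image.dtype)
    for h in range(bev_h):
        for w in range(bev_w):
            u, v = first_table[h, w]
            if u >= 0 and v >= 0:                     # skip cells with no source pixel
                birds_eye[h, w] = second_image[v, u]  # copy the pixel value from the original view
    return birds_eye
```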
In the embodiment, a perspective transformation matrix is obtained by calibrating the camera; obtaining a corresponding relation between the coordinates of the pixel points in the first aerial view image of the first image and the coordinates of the pixel points in the first image according to the perspective transformation matrix, the preset resolution and the coordinates of the pixel points in the first image collected by the camera; establishing a first coordinate conversion table according to the corresponding relation between the coordinates of the pixel points in the first aerial view image and the coordinates of the pixel points in the first image; and acquiring a second image in real time through the camera, and acquiring a second aerial view image of the second image according to the second image and the first coordinate conversion table. The method realizes the rapid conversion by a table look-up method when the original image is converted into the aerial view, effectively reduces the time consumption when the images are converted with each other, and greatly improves the performance of the automatic driving and auxiliary driving system.
Further, a second embodiment is proposed based on the first embodiment, referring to fig. 6, fig. 6 is a detailed flowchart of step S320, in this embodiment, the step S320 includes:
step S321, converting the coordinates of the pixel points in the first image into corresponding hexadecimal values according to a preset shift algorithm, where the preset shift algorithm is A = ((u & 0xffff) << 16) | (v & 0xffff), (u, v) are the coordinates of a pixel point in the first image, and A is the hexadecimal value corresponding to the coordinates (u, v) of the pixel point in the first image;
in step S322, according to the correspondence between the coordinates of the pixel points in the first bird 'S-eye view image and the coordinates of the pixel points in the first image and the correspondence between the positions of the cells in the first initial coordinate conversion table and the coordinates of the pixel points in the first bird' S-eye view image, the hexadecimal values corresponding to the coordinates of the pixel points in the first image are used as the numerical values of the corresponding cells in the first initial coordinate conversion table.
Because each cell in the first coordinate conversion table directly stores the coordinates of a pixel point in the first image, each coordinate is represented by two numerical values, which requires a larger storage amount; in particular, the larger the total number of pixel points in the first bird's-eye view image of the first image, the larger the total number of cells in the first coordinate conversion table and the larger the storage amount. In order to reduce the storage amount of the first initial coordinate conversion table, this embodiment converts the coordinates of the pixel points in the first image to be stored into a single numerical value and then takes that numerical value as the value of the corresponding cell. The specific process is as follows: the coordinates of the pixel points in the first image are converted into corresponding hexadecimal values according to the preset shift algorithm A = ((u & 0xffff) << 16) | (v & 0xffff), where (u, v) are the coordinates of a pixel point in the first image and A is the hexadecimal value corresponding to the coordinates (u, v). For example, if the coordinates of a certain pixel point in the first image are (330, 280), they are converted into the hexadecimal value 0x014a0118 by the shift algorithm: A = ((330 & 0xffff) << 16) | (280 & 0xffff) = 0x014a0118.
The terminal determines the coordinates of the pixel point in the first bird's-eye view image corresponding to each cell in the first initial coordinate conversion table according to the corresponding relationship between the positions of the cells and the coordinates of the pixel points in the first bird's-eye view image, then determines the coordinates of the pixel point in the first image corresponding to each cell according to the corresponding relationship between the coordinates of the pixel points in the first bird's-eye view image and the coordinates of the pixel points in the first image, and finally takes the hexadecimal value corresponding to the coordinates of the pixel point in the first image as the value of the corresponding cell. For example, the coordinates of the pixel point in the first bird's-eye view image corresponding to the cell in row 100, column 30 of the first initial coordinate conversion table are (100, 30), and the coordinates (100, 30) in the first bird's-eye view image correspond to the coordinates (330, 280) in the first image; therefore, the cell in row 100, column 30 corresponds to the coordinates (330, 280) in the first image, and the terminal uses the hexadecimal value 0x014a0118 corresponding to (330, 280) as the value of that cell.
It should be noted that, if the value of each cell in the first coordinate conversion table is the hexadecimal value of the coordinates of the corresponding pixel point in the first image, the subsequent process of querying the first coordinate conversion table for the coordinates of the pixel points in the second image corresponding to the coordinates of the pixel points in the second bird's-eye view image of the second image includes: querying the first coordinate conversion table for the hexadecimal value corresponding to the coordinates of a pixel point in the second bird's-eye view image of the second image, and then converting the hexadecimal value into the coordinates of the pixel point in the second image according to a preset inverse shift operation, where the inverse shift operation is u = (A >> 16) & 0xffff and v = A & 0xffff, A is the value of the cell in the first coordinate conversion table, and (u, v) are the coordinates of the pixel point in the second image. For example, if the hexadecimal value corresponding to the coordinates (100, 30) of a pixel point in the second bird's-eye view image of the second image is 0x014a0118, then u = (0x014a0118 >> 16) & 0xffff = 0x014a = 330 and v = 0x014a0118 & 0xffff = 0x0118 = 280, so the coordinates of the pixel point in the second image corresponding to the pixel point (100, 30) in the second bird's-eye view image are finally obtained as (330, 280).
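The 32-bit packing of a coordinate pair and the inverse shift operation described above can be sketched as follows; the function names are illustrative.

```python
def pack_coordinate(u, v):
    """Preset shift algorithm: u in the high 16 bits, v in the low 16 bits."""
    return ((u & 0xFFFF) << 16) | (v & 0xFFFF)

def unpack_coordinate(a):
    """Inverse shift operation: recover (u, v) from the packed 32-bit value."""
    return (a >> 16) & 0xFFFF, a & 0xFFFF

# Worked example from the text: (330, 280) <-> 0x014A0118.
assert pack_coordinate(330, 280) == 0x014A0118
assert unpack_coordinate(0x014A0118) == (330, 280)
```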
Further, a third embodiment is proposed based on the second embodiment, and in this embodiment, after the step S200, the method further includes:
step S500, establishing a second conversion numerical value table according to the coordinates of the pixel points in the first aerial view image and the coordinates of the pixel points in the first image;
after the terminal acquires the corresponding relation between the coordinates of the pixel points in the first aerial view image and the coordinates of the pixel points in the first image, a second coordinate conversion table is established according to the relation, and the second coordinate conversion table is used for rapidly inquiring the coordinates of the pixel points in the aerial view image corresponding to the original image to be converted according to the coordinates of the pixel points in the original image to be converted.
Specifically, based on the above embodiment, the step S500 specifically includes:
step S510, establishing a second initial coordinate conversion table according to the coordinates of the pixel points in the first image and the total number of the pixel points, wherein the positions of each cell in the second initial coordinate conversion table correspond to the coordinates of the pixel points in the first image one by one;
the terminal firstly constructs a second initial coordinate conversion table according to the coordinates and the total number of pixel points in the first image, the total number of cells in the second initial coordinate conversion table is the same as the total number of pixel points in the first image, the total number of cells in each row of the second initial coordinate conversion table is the same as the number of pixel points in the horizontal direction in the first image, the total number of cells in each column of the second initial coordinate conversion table is the same as the number of pixel points in the vertical direction in the first image, for example, the total number of pixel points is 307200 for the first image, the total number of cells in each row of the second initial coordinate conversion table is 307200, the total number of cells in each row of the second initial coordinate conversion table is 640, and the total number of cells in each column of the second initial coordinate conversion table is 480. And the position of the cell in the second initial coordinate conversion table, i.e. the row and column where the cell is located, corresponds to the coordinates of the pixel in the first image one by one, for example, the 330 th row and 280 th column cell in the second initial coordinate conversion table corresponds to the coordinates of the pixel in the first image as (330, 280).
Step S520, taking the coordinates of the pixel points in the first bird 'S-eye view image as the numerical values of the corresponding cells in the second initial coordinate conversion table according to the corresponding relationship between the coordinates of the pixel points in the first bird' S-eye view image and the coordinates of the pixel points in the first image and the corresponding relationship between the positions of the cells in the second initial coordinate conversion table and the coordinates of the pixel points in the first image;
Because the cells in the second initial coordinate conversion table established by the terminal are not yet assigned, after the terminal establishes the second initial coordinate conversion table, it determines the coordinates of the pixel point in the first image corresponding to each cell according to the corresponding relationship between the positions of the cells in the second initial coordinate conversion table and the coordinates of the pixel points in the first image, then determines the coordinates of the pixel point in the first bird's-eye view image corresponding to each cell according to the corresponding relationship between the coordinates of the pixel points in the first bird's-eye view image and the coordinates of the pixel points in the first image, and finally takes the coordinates of the pixel point in the first bird's-eye view image as the value of the corresponding cell in the second initial coordinate conversion table. For example, the coordinates of the pixel point in the first image corresponding to the cell in row 330, column 280 of the second initial coordinate conversion table are (330, 280), and the coordinates (100, 30) of the pixel point in the first bird's-eye view image correspond to the coordinates (330, 280) of the pixel point in the first image; therefore, the cell in row 330, column 280 of the second initial coordinate conversion table corresponds to the coordinates (100, 30) in the first bird's-eye view image, and the terminal takes the coordinates (100, 30) as the value of that cell.
Specifically, based on the above embodiment, step S520 includes:
step S521, converting the coordinates of the pixel points in the first bird's-eye view image into corresponding hexadecimal values according to a preset shift algorithm, where the preset shift algorithm is B = ((w & 0xffff) << 16) | (h & 0xffff), (w, h) are the coordinates of a pixel point in the first bird's-eye view image, and B is the hexadecimal value corresponding to the coordinates (w, h) of the pixel point in the first bird's-eye view image;
in step S522, according to the correspondence between the coordinates of the pixel points in the first bird's-eye view image and the coordinates of the pixel points in the first image and the correspondence between the positions of the cells in the second initial coordinate conversion table and the coordinates of the pixel points in the first image, the hexadecimal values corresponding to the coordinates of the pixel points in the first bird's-eye view image are used as the numerical values of the corresponding cells in the second initial coordinate conversion table.
Because each cell in the second coordinate conversion table would otherwise directly store the coordinates of a pixel point in the first bird's-eye view image, i.e. two numerical values, the storage amount is relatively large; in particular, the larger the total number of pixel points in the first image, the larger the total number of cells in the second coordinate conversion table and the larger the storage amount. In order to reduce the storage amount of the second initial coordinate conversion table, this embodiment converts the coordinates of each pixel point in the first bird's-eye view image into a single numerical value and then takes that value as the numerical value of the corresponding cell, thereby reducing the storage amount. The specific process is as follows: the coordinates of the pixel points in the first bird's-eye view image are converted into corresponding hexadecimal values according to the preset shift algorithm B = ((w & 0xffff) << 16) | (h & 0xffff), where (w, h) are the coordinates of a pixel point in the first bird's-eye view image and B is the hexadecimal value corresponding to the coordinates (w, h). For example, if the coordinates of a certain pixel point in the first bird's-eye view image are (100, 30), they are converted by the shift algorithm into the hexadecimal value B = ((100 & 0xffff) << 16) | (30 & 0xffff) = 0x0064001e.
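A small, self-contained sketch of this packing and its inverse (illustrative only; the function names are assumptions), reproducing the worked example in which (100, 30) packs to 0x0064001e.

    #include <cinttypes>
    #include <cstdint>
    #include <cstdio>

    // Sketch of the shift algorithm described above: the two coordinate components
    // of a bird's-eye-view pixel are packed into one 32-bit value,
    // B = ((w & 0xffff) << 16) | (h & 0xffff), and can be unpacked again later.
    static inline uint32_t packCoord(uint32_t w, uint32_t h) {
        return ((w & 0xffffu) << 16) | (h & 0xffffu);
    }

    static inline void unpackCoord(uint32_t b, uint32_t& w, uint32_t& h) {
        w = (b >> 16) & 0xffffu;   // high 16 bits hold w
        h = b & 0xffffu;           // low 16 bits hold h
    }

    int main() {
        // Worked example from the description: (100, 30) packs to 0x0064001e.
        uint32_t b = packCoord(100, 30);
        std::printf("0x%08" PRIx32 "\n", b);

        uint32_t w = 0, h = 0;
        unpackCoord(b, w, h);
        std::printf("(%" PRIu32 ", %" PRIu32 ")\n", w, h);  // prints (100, 30)
        return 0;
    }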
The terminal determines the coordinates of the pixel point in the first image corresponding to each cell in the second initial coordinate conversion table according to the corresponding relationship between the position of each cell in the second initial coordinate conversion table and the coordinates of the pixel points in the first image, then determines the coordinates of the pixel point in the first bird's-eye view image corresponding to each cell according to the corresponding relationship between the coordinates of the pixel points in the first bird's-eye view image and the coordinates of the pixel points in the first image, and finally takes the hexadecimal value corresponding to the coordinates of that pixel point in the first bird's-eye view image as the numerical value of the corresponding cell in the second initial coordinate conversion table. For example, the coordinates of the pixel point in the first image corresponding to the cell at row 330, column 280 of the second initial coordinate conversion table are (330, 280), and the coordinates (100, 30) of a pixel point in the first bird's-eye view image correspond to the coordinates (330, 280) of the pixel point in the first image; therefore, the cell at row 330, column 280 of the second initial coordinate conversion table corresponds to the coordinates (100, 30) of the pixel point in the first bird's-eye view image, and the terminal takes the hexadecimal value 0x0064001e corresponding to the coordinates (100, 30) as the numerical value of the cell at row 330, column 280 of the second initial coordinate conversion table.
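For illustration, the table-filling step with the hexadecimal packing might look as follows (the function name and the birdEyeOf correspondence callback are assumptions); compared with storing the coordinate pair directly, each cell holds a single 32-bit value, which is the storage reduction described above.

    #include <cstddef>
    #include <cstdint>
    #include <functional>
    #include <utility>
    #include <vector>

    // Build the second coordinate conversion table by storing, in the cell of
    // every first-image pixel (r, c), the packed hexadecimal value of the
    // bird's-eye-view coordinates (w, h) that correspond to it.
    std::vector<uint32_t> buildSecondCoordinateTable(
            const std::function<std::pair<uint16_t, uint16_t>(int, int)>& birdEyeOf,
            int rows, int cols) {
        std::vector<uint32_t> table(static_cast<std::size_t>(rows) * cols);
        for (int r = 0; r < rows; ++r) {
            for (int c = 0; c < cols; ++c) {
                std::pair<uint16_t, uint16_t> p = birdEyeOf(r, c);   // (w, h)
                // e.g. the cell at row 330, column 280 receives 0x0064001e,
                // the packed form of the bird's-eye-view coordinates (100, 30).
                table[static_cast<std::size_t>(r) * cols + c] =
                    ((static_cast<uint32_t>(p.first) & 0xffffu) << 16) |
                    (static_cast<uint32_t>(p.second) & 0xffffu);
            }
        }
        return table;
    }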
In step S530, a second initial coordinate conversion table having a corresponding numerical value for each cell is used as a second coordinate conversion table.
After the terminal assigns a numerical value to each cell in the second initial coordinate conversion table, the second initial coordinate conversion table in which each cell has a corresponding numerical value is taken as the second coordinate conversion table, i.e. the establishment of the second coordinate conversion table is completed.
Step S600, acquiring a third aerial view image;
Step S700, coordinates of pixel points in the third aerial view image corresponding to the coordinates of the pixel points in the original image of the third aerial view image are queried and obtained from the second coordinate conversion table;
Step S800, obtaining pixel values corresponding to the pixel points in the original image according to the obtained coordinates of the pixel points in the third bird's-eye view image corresponding to the coordinates of the pixel points in the original image and the pixel values corresponding to the pixel points in the third bird's-eye view image;
Step S900, obtaining the original image according to the coordinates of the pixel points in the original image and the corresponding pixel values.
After the terminal establishes the second coordinate conversion table, when the terminal subsequently obtains a third bird's-eye view image that needs to be converted into a corresponding original image, the terminal can query, according to the second coordinate conversion table, the coordinates of the pixel points in the third bird's-eye view image corresponding to the pixel points in the original image of the third bird's-eye view image, then obtain, according to those coordinates, the pixel values of the corresponding pixel points in the third bird's-eye view image, take these pixel values as the pixel values of the corresponding pixel points in the original image of the third bird's-eye view image, and finally assign each pixel point in the original image of the third bird's-eye view image its corresponding pixel value, thereby obtaining the original image of the third bird's-eye view image.
Specifically, the terminal queries and obtains from the second coordinate conversion table the coordinates of the pixel points in the third bird's-eye view image corresponding to the coordinates of each pixel point in the original image of the third bird's-eye view image, then obtains the pixel values of the pixel points at those coordinates in the third bird's-eye view image, and takes these pixel values as the pixel values of the corresponding pixel points in the original image of the third bird's-eye view image, so that the pixel value corresponding to each pixel point in the original image of the third bird's-eye view image is finally obtained; the terminal then composes the original image of the third bird's-eye view image from the coordinates of each pixel point in the original image and the pixel value corresponding to each pixel point. For example, the coordinates (330, 280) of a pixel point in the original image of the third bird's-eye view image correspond to the cell at row 330, column 280 of the second coordinate conversion table; the numerical value of that cell is looked up and represents the coordinates (100, 30) of a pixel point in the third bird's-eye view image, so the coordinates (330, 280) of the pixel point in the original image correspond to the coordinates (100, 30) of the pixel point in the third bird's-eye view image, and the terminal takes the pixel value of the pixel point at coordinates (100, 30) in the third bird's-eye view image as the pixel value of the pixel point at coordinates (330, 280) in the original image of the third bird's-eye view image.
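For illustration only, a sketch of this reverse conversion (single-channel image, row-major storage, and all names are assumptions): for every pixel of the original image, the packed bird's-eye-view coordinates are read from the second coordinate conversion table and the corresponding pixel value is copied from the third bird's-eye view image.

    #include <cstddef>
    #include <cstdint>
    #include <vector>

    // Reconstruct the original image from a third bird's-eye-view image by looking
    // up, for each original-image pixel, the packed bird's-eye-view coordinates
    // stored in the second coordinate conversion table.
    std::vector<uint8_t> birdEyeToOriginal(const std::vector<uint8_t>& birdEyeImage,
                                           int birdEyeRows, int birdEyeCols,
                                           const std::vector<uint32_t>& secondTable,
                                           int origRows, int origCols) {
        std::vector<uint8_t> original(static_cast<std::size_t>(origRows) * origCols, 0);
        for (int r = 0; r < origRows; ++r) {
            for (int c = 0; c < origCols; ++c) {
                uint32_t packed = secondTable[static_cast<std::size_t>(r) * origCols + c];
                int w = static_cast<int>((packed >> 16) & 0xffffu);  // bird's-eye-view row (assumed)
                int h = static_cast<int>(packed & 0xffffu);          // bird's-eye-view column (assumed)
                if (w < birdEyeRows && h < birdEyeCols) {
                    // e.g. original pixel (330, 280) takes the value of bird's-eye pixel (100, 30).
                    original[static_cast<std::size_t>(r) * origCols + c] =
                        birdEyeImage[static_cast<std::size_t>(w) * birdEyeCols + h];
                }
            }
        }
        return original;
    }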
In this embodiment, a perspective transformation matrix is obtained by calibrating the camera; the corresponding relationship between the coordinates of the pixel points in the first bird's-eye view image of the first image and the coordinates of the pixel points in the first image is obtained according to the perspective transformation matrix, the preset resolution, and the coordinates of the pixel points in the first image collected by the camera; a second coordinate conversion table is established according to this corresponding relationship; and the obtained third bird's-eye view image is converted into the original image through the second coordinate conversion table. When converting a bird's-eye view image back into the original image, fast conversion is achieved by directly looking up the table, which effectively reduces the time consumed in converting between the two images and greatly improves the performance of automatic driving and driver-assistance systems.
Furthermore, the present invention also provides a storage medium having stored thereon a computer program which, when being executed by a processor, carries out the steps of the image conversion method as described above.
The specific embodiment of the storage medium of the present invention is substantially the same as the embodiments of the image conversion method, and will not be described herein again.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or system that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or system. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other like elements in a process, method, article, or system that comprises the element.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium (e.g., ROM/RAM, magnetic disk, optical disk) as described above and includes instructions for enabling a terminal (e.g., a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present invention.
The above description is only a preferred embodiment of the present invention, and not intended to limit the scope of the present invention, and all modifications of equivalent structures and equivalent processes, which are made by using the contents of the present specification and the accompanying drawings, or directly or indirectly applied to other related technical fields, are included in the scope of the present invention.

Claims (10)

1. An image conversion method, characterized by comprising the steps of:
calibrating the camera to obtain a perspective transformation matrix;
obtaining a corresponding relation between the coordinates of the pixel points in the first aerial view image of the first image and the coordinates of the pixel points in the first image according to the perspective transformation matrix, the preset resolution and the coordinates of the pixel points in the first image collected by the camera;
establishing a first coordinate conversion table according to the corresponding relation between the coordinates of the pixel points in the first aerial view image and the coordinates of the pixel points in the first image;
and acquiring a second image in real time through the camera, and acquiring a second aerial view image of the second image according to the second image and the first coordinate conversion table.
2. The image conversion method according to claim 1, wherein the step of obtaining the correspondence between the coordinates of the pixel point in the first bird's eye view image and the coordinates of the pixel point in the first image according to the perspective transformation matrix, the coordinates of the pixel point in the first image acquired by the camera, and the preset resolution includes:
obtaining world coordinates corresponding to the coordinates of the pixel points in the first image according to the perspective transformation matrix and the coordinates of the pixel points in the first image;
obtaining coordinates of pixel points in the first aerial view image according to world coordinates corresponding to the coordinates of the pixel points in the first image and preset resolution;
and obtaining the corresponding relation between the coordinates of the pixel points in the first aerial view image and the coordinates of the pixel points in the first image according to the coordinates of the pixel points in the first aerial view image and the coordinates of the pixel points in the first image.
3. The image conversion method according to claim 2, wherein the step of creating the first coordinate conversion table based on the correspondence between the coordinates of the pixel point in the first bird's eye view image and the coordinates of the pixel point in the first image includes:
establishing a first initial coordinate conversion table according to the coordinates of the pixel points in the first aerial view image and the total number of the pixel points, wherein the positions of all cells in the first initial coordinate conversion table correspond to the coordinates of the pixel points in the first aerial view image one by one;
according to the corresponding relation between the coordinates of the pixel points in the first aerial view image and the coordinates of the pixel points in the first image and the corresponding relation between the positions of the cells in the first initial coordinate conversion table and the coordinates of the pixel points in the first aerial view image, taking the coordinates of the pixel points in the first image as the numerical values of the corresponding cells in the first initial coordinate conversion table;
and taking a first initial coordinate conversion table with each unit cell having a corresponding numerical value as a first coordinate conversion table.
4. The image conversion method according to claim 3, wherein the step of using the coordinates of the pixel points in the first image as the numerical values of the corresponding cells in the first initial coordinate conversion table based on the correspondence between the coordinates of the pixel points in the first bird's eye view image and the coordinates of the pixel points in the first image and the correspondence between the positions of the respective cells in the first initial coordinate conversion table and the coordinates of the pixel points in the first bird's eye view image comprises:
converting the coordinates of the pixel points in the first image into corresponding hexadecimal values according to a preset shift algorithm, wherein the preset shift algorithm is A = ((u & 0xffff) << 16) | (v & 0xffff), (u, v) are the coordinates of the pixel points in the first image, and A is the hexadecimal value corresponding to the coordinates (u, v) of the pixel points in the first image;
and taking the hexadecimal value corresponding to the coordinates of the pixel points in the first image as the numerical value of the corresponding cell in the first initial coordinate conversion table according to the corresponding relationship between the coordinates of the pixel points in the first bird's-eye view image and the coordinates of the pixel points in the first image and the corresponding relationship between the positions of the cells in the first initial coordinate conversion table and the coordinates of the pixel points in the first bird's-eye view image.
5. The image conversion method according to any one of claims 1 to 4, wherein the step of acquiring the second image in real time by the camera, and obtaining the second bird's-eye view image corresponding to the second image according to the second image and the established first coordinate conversion table comprises the steps of:
inquiring coordinates of pixel points in the second image corresponding to the coordinates of the pixel points in the second bird's-eye view image of the second image from the first coordinate conversion table;
obtaining pixel values corresponding to the pixel points in the second aerial view image according to the coordinates of the pixel points in the second image corresponding to the coordinates of the pixel points in the second aerial view image and the pixel values corresponding to the pixel points in the second image;
and obtaining a second aerial view image according to the coordinates of each pixel point in the second aerial view image and the corresponding pixel value.
6. The image conversion method according to claim 5, wherein the step of obtaining the correspondence between the coordinates of the pixel point in the first bird's eye view image and the coordinates of the pixel point in the first image based on the perspective transformation matrix, the preset resolution, and the coordinates of the pixel point in the first image acquired by the camera further comprises:
establishing a second coordinate conversion table according to the coordinates of the pixel points in the first aerial view image and the coordinates of the pixel points in the first image;
acquiring a third aerial view image;
inquiring and obtaining coordinates of pixel points in the third aerial view image corresponding to the coordinates of the pixel points in the original image of the third aerial view image from the second coordinate conversion table;
obtaining pixel values corresponding to the pixel points in the original image according to the coordinates of the pixel points in the third bird's-eye view image corresponding to the coordinates of the pixel points in the obtained original image and the pixel values corresponding to the pixel points in the third bird's-eye view image;
and obtaining the original image according to the coordinates of the pixel points in the original image and the corresponding pixel values.
7. The image conversion method according to claim 6, wherein the step of establishing the second coordinate conversion table based on the coordinates of each pixel point in the first bird's eye view image and the coordinates of each pixel point in the first image includes:
establishing a second initial coordinate conversion table according to the coordinates of the pixel points in the first image and the total number of the pixel points, wherein the positions of all cells in the second initial coordinate conversion table correspond to the coordinates of the pixel points in the first image one by one;
according to the corresponding relation between the coordinates of the pixel points in the first aerial view image and the coordinates of the pixel points in the first image and the corresponding relation between the positions of the cells in the second initial coordinate conversion table and the coordinates of the pixel points in the first image, taking the coordinates of the pixel points in the first aerial view image as the numerical values of the corresponding cells in the second initial coordinate conversion table;
and taking a second initial coordinate conversion table with each unit cell having a corresponding numerical value as a second coordinate conversion table.
8. The method according to claim 7, wherein the step of using the coordinates of the pixel points in the first bird's eye view image as the numerical values of the corresponding cells in the second initial coordinate conversion table based on the correspondence between the coordinates of the pixel points in the first bird's eye view image and the coordinates of the pixel points in the first image and the correspondence between the positions of the respective cells in the second initial coordinate conversion table and the coordinates of the pixel points in the first image comprises:
converting the coordinates of the pixel points in the first bird's-eye view image into corresponding hexadecimal values according to a preset shift algorithm, wherein the preset shift algorithm is B = ((w & 0xffff) << 16) | (h & 0xffff), (w, h) are the coordinates of the pixel points in the first bird's-eye view image, and B is the hexadecimal value corresponding to the coordinates (w, h) of the pixel points in the first bird's-eye view image;
and taking the hexadecimal value corresponding to the coordinates of the pixel points in the first bird's-eye view image as the numerical value of the corresponding cell in the second initial coordinate conversion table according to the corresponding relationship between the coordinates of the pixel points in the first bird's-eye view image and the coordinates of the pixel points in the first image and the corresponding relationship between the positions of the cells in the second initial coordinate conversion table and the coordinates of the pixel points in the first image.
9. A terminal, characterized in that it comprises a memory, a processor and a computer program stored on said memory and executable on said processor, said processor implementing the steps of the image conversion method according to any one of claims 1 to 8 when executing said program.
10. A storage medium, characterized in that the storage medium has stored thereon a computer program which, when being executed by a processor, carries out the steps of the image conversion method according to any one of claims 1 to 8.
CN202010044761.2A 2020-01-15 2020-01-15 Image conversion method, terminal and storage medium Active CN111242842B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010044761.2A CN111242842B (en) 2020-01-15 2020-01-15 Image conversion method, terminal and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010044761.2A CN111242842B (en) 2020-01-15 2020-01-15 Image conversion method, terminal and storage medium

Publications (2)

Publication Number Publication Date
CN111242842A true CN111242842A (en) 2020-06-05
CN111242842B CN111242842B (en) 2023-11-10

Family

ID=70879578

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010044761.2A Active CN111242842B (en) 2020-01-15 2020-01-15 Image conversion method, terminal and storage medium

Country Status (1)

Country Link
CN (1) CN111242842B (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111986257A (en) * 2020-07-16 2020-11-24 南京模拟技术研究所 Bullet point identification automatic calibration method and system supporting variable distance
CN112132829A (en) * 2020-10-23 2020-12-25 北京百度网讯科技有限公司 Vehicle information detection method and device, electronic equipment and storage medium
CN112468716A (en) * 2020-11-02 2021-03-09 航天信息股份有限公司 Camera visual angle correction method and device, storage medium and electronic equipment
CN113689413A (en) * 2021-08-30 2021-11-23 深圳市睿达科技有限公司 Alignment correction method and device and computer readable storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105894549A (en) * 2015-10-21 2016-08-24 乐卡汽车智能科技(北京)有限公司 Panorama assisted parking system and device and panorama image display method
CN106373091A (en) * 2016-09-05 2017-02-01 山东省科学院自动化研究所 Automatic panorama parking aerial view image splicing method, system and vehicle
CN106856000A (en) * 2015-12-09 2017-06-16 广州汽车集团股份有限公司 A kind of vehicle-mounted panoramic image seamless splicing processing method and system
CN107424120A (en) * 2017-04-12 2017-12-01 湖南源信光电科技股份有限公司 A kind of image split-joint method in panoramic looking-around system

Also Published As

Publication number Publication date
CN111242842B (en) 2023-11-10

Similar Documents

Publication Publication Date Title
CN111242842A (en) Image conversion method, terminal and storage medium
CN110341597B (en) Vehicle-mounted panoramic video display system and method and vehicle-mounted controller
JP4931831B2 (en) Infrared camera system and method
US20190220775A1 (en) Information processing apparatus, system, information processing method, and non-transitory computer-readable storage medium
CN114881863B (en) Image splicing method, electronic equipment and computer readable storage medium
US20100020176A1 (en) Image processing system, imaging device, image processing method, and computer program
CN112348741A (en) Panoramic image splicing method, panoramic image splicing equipment, storage medium, display method and display system
CN111010545A (en) Vehicle driving decision method, system, terminal and storage medium
CN113029128A (en) Visual navigation method and related device, mobile terminal and storage medium
CN115147580A (en) Image processing apparatus, image processing method, mobile apparatus, and storage medium
CN112581389A (en) Virtual viewpoint depth map processing method, equipment, device and storage medium
CN111063292B (en) Color gamut mapping method, color gamut mapping component, display device, and storage medium
JP2015070350A (en) Monitor image presentation system
CN108282664B (en) Image processing method, device, system and computer readable storage medium
US11425355B2 (en) Depth image obtaining method, image capture device, and terminal
CN114091626A (en) True value detection method, device, equipment and storage medium
CN111959417B (en) Automobile panoramic image display control method, device, equipment and storage medium
CN113066158B (en) Vehicle-mounted all-round looking method and device
CN111726544A (en) Method and apparatus for enhancing video display
CN116168357A (en) Foreground target machine vision extraction system and method for intelligent vehicle
JP2005142657A (en) Apparatus for controlling display of surrounding of vehicle
CN114219840A (en) Image registration and fusion method and device and computer storage medium
CN115439548A (en) Camera calibration method, image splicing method, device, medium, camera and vehicle
CN112389459A (en) Man-machine interaction method and device based on panoramic looking-around
CN111240541A (en) Interface switching method, electronic equipment and computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 14 / F, Beidou building, 6 Huida Road, Jiangbei new district, Nanjing, Jiangsu Province 210000

Applicant after: Jiangsu Zhongtian Anchi Technology Co.,Ltd.

Address before: 3 / F and 5 / F, building 2, Changyuan new material port, building B, Changyuan new material port, science and Technology Park community, Yuehai street, Nanshan District, Shenzhen, Guangdong 518000

Applicant before: SHENZHEN ZHONGTIAN ANCHI Co.,Ltd.

CB02 Change of applicant information
GR01 Patent grant
GR01 Patent grant