CN114820672A - Image processing method, image processing device, computer equipment and storage medium - Google Patents

Info

Publication number
CN114820672A
CN114820672A
Authority
CN
China
Prior art keywords
image
initial
target
information
edge information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210378886.8A
Other languages
Chinese (zh)
Inventor
赵鹏依
胡中华
陈辉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Signaltone Intelligent Technology Co ltd
Original Assignee
Shenzhen Signaltone Intelligent Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Signaltone Intelligent Technology Co ltd
Priority to CN202210378886.8A
Publication of CN114820672A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/13Edge detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/80Geometric correction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10004Still image; Photographic image

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

The application relates to an image processing method, an image processing device and computer equipment. The method comprises the following steps: acquiring an initial image of a moving object and a reference image adjacent to the initial image, and calculating an image error between the reference image and the initial image to obtain error information; performing initial image edge extraction on the initial image to obtain initial image edge information, and performing initial object edge extraction based on the error information and the initial image edge information to obtain initial object edge information corresponding to the moving object; performing target object edge extraction based on the initial object edge information to obtain target object edge information corresponding to the moving object; and performing image deviation calculation based on the target object edge information to obtain image deviation information corresponding to the initial image, and performing horizontal correction on the initial image by using the image deviation information to obtain a target image corresponding to the initial image. By adopting the method, the accuracy of image processing can be improved.

Description

Image processing method, image processing device, computer equipment and storage medium
Technical Field
The present application relates to the field of computers, and in particular, to an image processing method, an image processing apparatus, a computer device, a storage medium, and a computer program product.
Background
With the development of the computer industry, moving-object recognition devices built on computer technology are widely used; for example, images of a moving object often need to be recognized. During image acquisition, however, the moving object in a captured image may be tilted to varying degrees, so the image needs processing such as horizontal detection and horizontal correction. In the conventional image processing method, the tilt of an object in an image is corrected by varying an angle θ and calculating the projection values of the object in different directions. However, this method can only correct images of static objects; it cannot respond to and process newly acquired images of a moving object in time, so the accuracy of image processing is low.
Disclosure of Invention
In view of the above technical problems, it is necessary to provide an image processing method, an image processing apparatus, a computer device, a computer-readable storage medium, and a computer program product capable of improving the accuracy of image processing.
In a first aspect, the present application provides an image processing method. The method comprises the following steps:
acquiring an initial image of a moving object and a reference image adjacent to the initial image, and calculating an image error between the reference image and the initial image to obtain error information;
performing initial image edge extraction on the initial image to obtain initial image edge information, and performing initial object edge extraction on the basis of the error information and the initial image edge information to obtain initial object edge information corresponding to the moving object;
extracting the edge of the target object based on the initial object edge information to obtain the edge information of the target object corresponding to the moving object;
and performing image deviation calculation based on the edge information of the target object to obtain image deviation information corresponding to the initial image, and performing horizontal correction on the initial image by using the image deviation information to obtain a target image corresponding to the initial image.
In one embodiment, the extracting the edge of the target object based on the initial object edge information to obtain the edge information of the target object corresponding to the moving object includes:
performing target image edge extraction on the initial image to obtain target image edge information;
and extracting the edge of the target object based on the initial object edge information and the target image edge information to obtain the edge information of the target object corresponding to the moving object.
In one embodiment, performing initial object edge extraction based on the error information and the initial image edge information to obtain initial object edge information corresponding to the moving object includes:
performing AND operation on the error information and the initial image edge information to obtain initial object edge information corresponding to the moving object;
performing target object edge extraction based on the initial object edge information and the target image edge information to obtain target object edge information corresponding to the moving object, wherein the method comprises the following steps:
and performing AND operation on the initial object edge information and the target image edge information to obtain target object edge information corresponding to the moving object.
In one embodiment, performing image deviation calculation based on the edge information of the target object to obtain image deviation information corresponding to the initial image includes:
performing linear transformation on the edge information of the target object based on a preset linear threshold to obtain a linear set corresponding to the edge information of the target object;
calculating the horizontal angle corresponding to each straight line in the straight line set, and performing horizontal angle average calculation by using the horizontal angle corresponding to each straight line to obtain the current horizontal angle corresponding to the initial image;
and obtaining image deviation information corresponding to the initial image based on the difference value between the current horizontal angle and the preset standard vertical angle.
In one embodiment, the method further comprises:
acquiring historical horizontal angles of historical straight lines corresponding to historical target object edge information in a preset historical time period;
carrying out horizontal angle average calculation based on the horizontal angle of each straight line and the historical horizontal angle of each historical straight line to obtain an average horizontal angle corresponding to the initial image;
and obtaining target image deviation information corresponding to the initial image based on the difference value of the average horizontal angle and the preset standard vertical angle.
In one embodiment, the horizontal rectification of the initial image using the image deviation information to obtain a target image corresponding to the initial image includes:
when the image deviation information does not reach a preset image deviation threshold value, acquiring a corresponding preset correction parameter based on the image deviation information;
and horizontally correcting the initial image by using preset correction parameters to obtain a target image.
In one embodiment, after the initial image is corrected by using a preset correction parameter to obtain a target image corresponding to the initial image, the method further includes:
acquiring an initial image sequence;
traversing each initial image in the initial image sequence to obtain a target image sequence corresponding to the initial image sequence;
and sequentially splicing all target images in the target image sequence to obtain a target moving object image, and identifying the target moving object based on the target moving object image to obtain a target moving object identification result.
In a second aspect, the present application further provides an image processing apparatus. The device comprises:
the error module is used for acquiring an initial image of a moving object and a reference image adjacent to the initial image, and calculating an image error between the reference image and the initial image to obtain error information;
an initial edge extraction module, configured to perform initial image edge extraction on the initial image to obtain initial image edge information, and perform initial object edge extraction based on the error information and the initial image edge information to obtain initial object edge information corresponding to the moving object;
the target edge extraction module is used for extracting the edge of the target object based on the initial object edge information to obtain the edge information of the target object corresponding to the moving object;
and the correction module is used for performing image deviation calculation based on the edge information of the target object to obtain image deviation information corresponding to the initial image, and performing horizontal correction on the initial image by using the image deviation information to obtain a target image corresponding to the initial image.
In a third aspect, the present application also provides a computer device. The computer device comprises a memory storing a computer program and a processor implementing the following steps when executing the computer program:
acquiring an initial image of a moving object and a reference image adjacent to the initial image, and calculating an image error between the reference image and the initial image to obtain error information;
performing initial image edge extraction on the initial image to obtain initial image edge information, and performing initial object edge extraction on the basis of the error information and the initial image edge information to obtain initial object edge information corresponding to the moving object;
extracting the edge of the target object based on the initial object edge information to obtain the edge information of the target object corresponding to the moving object;
and performing image deviation calculation based on the edge information of the target object to obtain image deviation information corresponding to the initial image, and performing horizontal correction on the initial image by using the image deviation information to obtain a target image corresponding to the initial image.
In a fourth aspect, the present application further provides a computer-readable storage medium. The computer-readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of:
acquiring an initial image of a moving object and a reference image adjacent to the initial image, and calculating an image error between the reference image and the initial image to obtain error information;
performing initial image edge extraction on the initial image to obtain initial image edge information, and performing initial object edge extraction on the basis of the error information and the initial image edge information to obtain initial object edge information corresponding to the moving object;
extracting the edge of the target object based on the initial object edge information to obtain the edge information of the target object corresponding to the moving object;
and performing image deviation calculation based on the edge information of the target object to obtain image deviation information corresponding to the initial image, and performing horizontal correction on the initial image by using the image deviation information to obtain a target image corresponding to the initial image.
In a fifth aspect, the present application further provides a computer program product. The computer program product comprising a computer program which when executed by a processor performs the steps of:
acquiring an initial image of a moving object and a reference image adjacent to the initial image, and calculating an image error between the reference image and the initial image to obtain error information;
performing initial image edge extraction on the initial image to obtain initial image edge information, and performing initial object edge extraction on the basis of the error information and the initial image edge information to obtain initial object edge information corresponding to the moving object;
extracting the edge of the target object based on the initial object edge information to obtain the edge information of the target object corresponding to the moving object;
and performing image deviation calculation based on the edge information of the target object to obtain image deviation information corresponding to the initial image, and performing horizontal correction on the initial image by using the image deviation information to obtain a target image corresponding to the initial image.
With the image processing method, the image processing apparatus, the computer device, the storage medium and the computer program product, the image error between the reference image and the initial image is first calculated to obtain error information; initial image edge extraction is then performed on the initial image, and the obtained initial image edge information is the preliminary edge information of the whole initial image. Initial object edge extraction is performed based on the error information and the initial image edge information to obtain initial object edge information. Target object edge extraction is then performed on the initial object edge information, and the obtained target object edge information is more accurate edge information of the moving object. The image deviation information calculated from the target object edge information is therefore more accurate; further, the initial image is horizontally corrected by using the image deviation information, and the obtained target image is more accurate, so that the accuracy of image processing is improved.
Drawings
FIG. 1 is a diagram of an exemplary embodiment of an image processing method;
FIG. 2 is a flow diagram illustrating a method for image processing according to one embodiment;
FIG. 3 is a schematic diagram of a process for edge extraction of a target object according to an embodiment;
FIG. 4 is a flow diagram illustrating the calculation of image deviation information in one embodiment;
FIG. 5 is a diagram illustrating image deviation information in one embodiment;
FIG. 6 is a schematic illustration of image deviation information in another embodiment;
FIG. 7 is a schematic flow chart illustrating the calculation of target image bias information according to one embodiment;
FIG. 8 is a flow diagram illustrating horizontal rectification of an initial image in one embodiment;
FIG. 9 is a block diagram showing the configuration of an image processing apparatus according to an embodiment;
FIG. 10 is a diagram showing an internal structure of a computer device in one embodiment;
FIG. 11 is a diagram illustrating an internal structure of a computer device in one embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
The image processing method provided by the embodiment of the application can be applied to the application environment shown in fig. 1. Wherein the terminal 102 communicates with the server 104 via a network. The data storage system may store data that the server 104 needs to process. The data storage system may be integrated on the server 104, or may be located on the cloud or other network server. The terminal 102 may obtain an initial image of a moving object and a reference image adjacent to the initial image through the server 104, and calculate an image error between the reference image and the initial image to obtain error information; the terminal 102 performs initial image edge extraction on the initial image to obtain initial image edge information, and performs initial object edge extraction based on the error information and the initial image edge information to obtain initial object edge information corresponding to the moving object; the terminal 102 extracts the edge of the target object based on the initial object edge information to obtain the edge information of the target object corresponding to the moving object; the terminal 102 performs image deviation calculation based on the edge information of the target object to obtain image deviation information corresponding to the initial image, and performs horizontal correction on the initial image by using the image deviation information to obtain a target image corresponding to the initial image. The terminal 102 may be, but not limited to, various personal computers, notebook computers, smart phones, tablet computers, internet of things devices and portable wearable devices, and the internet of things devices may be smart speakers, smart televisions, smart air conditioners, smart car-mounted devices, and the like. The portable wearable device can be a smart watch, a smart bracelet, a head-mounted device, and the like. The server 104 may be implemented as a stand-alone server or as a server cluster comprised of multiple servers.
In one embodiment, as shown in fig. 2, an image processing method is provided, which is described by taking the method as an example applied to the terminal in fig. 1, and it is understood that the method can also be applied to a server, and can also be applied to a system comprising the terminal and the server, and is implemented by interaction between the terminal and the server. In this embodiment, the method includes the following steps:
step 202, acquiring an initial image of the moving object and a reference image adjacent to the initial image, and calculating an image error between the reference image and the initial image to obtain error information.
The moving object refers to an object in motion, including a human body, an object, and the like. The initial image refers to an image in an image sequence acquired of the moving object at a preset frame rate while the object moves, and the image content of the initial image may be tilted at acquisition time. The reference image refers to an image adjacent to the initial image in the acquired image sequence. The image error refers to the difference between the initial image and the reference image. The error information refers to information describing that difference and may be represented graphically. In one embodiment, the error information may characterize the portion of the initial image in which the moving object has changed relative to the moving object in the reference image.
Specifically, the terminal may acquire, from a data storage system in the server, an image sequence corresponding to the moving object. The image sequence may be acquired by an imaging device for the moving object during its movement at a preset frame rate, for example 50 fps (frames per second); the imaging device then uploads the image sequence to the data storage system of the server. The terminal can also acquire the captured image sequence directly from the imaging device.
The terminal acquires an initial image and a reference image adjacent to the initial image from the acquired image sequence, wherein the reference image can be a frame image before the initial image or a frame image after the initial image, and preferably, the reference image is a frame image before the initial image. Then the terminal can calculate the pixel error between the reference image and the initial image through an interframe difference operation mode, and error information is obtained through the pixel error. For example, an error between a pixel value corresponding to each pixel point in the reference image and a pixel value corresponding to each pixel point in the initial image may be calculated, and an error image may be obtained according to the error corresponding to each pixel point. The terminal may also perform inter-frame difference operation on the reference image and the initial image to obtain error information, and perform binarization processing on the error information to obtain binarized error information.
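For illustration, a minimal sketch of the inter-frame difference and binarization described above, assuming OpenCV is available; the function name and the threshold value are assumptions rather than the patent's exact implementation:

```python
# Sketch of the error-information step: absolute inter-frame difference, then binarization.
# Assumes grayscale inputs of equal size; the threshold value 25 is an arbitrary example.
import cv2

def compute_error_info(initial_gray, reference_gray, diff_thresh=25):
    diff = cv2.absdiff(reference_gray, initial_gray)                  # per-pixel image error
    _, error_info = cv2.threshold(diff, diff_thresh, 255,
                                  cv2.THRESH_BINARY)                  # binarized error information
    return error_info
```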
And 204, performing initial image edge extraction on the initial image to obtain initial image edge information, and performing initial object edge extraction on the basis of the error information and the initial image edge information to obtain initial object edge information corresponding to the moving object.
The initial image edge extraction refers to a process of performing preliminary extraction on the edge of an object in an initial image. The initial image includes a moving object region and a background region, and the background region is a region other than the moving object region in the initial image and includes objects other than moving objects, such as trees, buildings, and the like. The initial image edge information refers to an edge image of all objects including a moving object region and a background region in the initial image. The initial object edge extraction refers to a preliminary extraction process of the edge of the moving object in the moving object area in the initial image. The initial object edge information refers to an image of a moving object edge preliminarily extracted from an initial image.
Specifically, the terminal performs preliminary extraction on the edges of the objects in all the regions in the initial image to obtain initial image edge information corresponding to the initial image. And then the terminal performs preliminary extraction on the edge of the moving object in the initial image edge information according to the error information to obtain the preliminary extracted initial object edge information corresponding to the moving object.
In a specific embodiment, the terminal may perform preliminary extraction on the longitudinal edges of the objects in all the regions in the initial image to obtain images of the longitudinal edges of all the objects in the initial image edge information corresponding to the initial image. And then, the terminal performs initial extraction on the initial image edge information according to the error information to obtain a longitudinal edge image corresponding to the moving object in the initial image edge information.
And step 206, performing target object edge extraction based on the initial object edge information to obtain target object edge information corresponding to the moving object.
The target object edge extraction refers to a process of performing straight-line extraction on the edge of the moving object. The target object edge information refers to an image of the straight-line edges of the moving object.
Specifically, the terminal performs accurate edge extraction according to an edge image of the moving object preliminarily extracted from the initial object edge information to obtain target object edge information corresponding to the moving object, wherein the target object edge information is more accurate relative to the initial object edge information.
And 208, performing image deviation calculation based on the edge information of the target object to obtain image deviation information corresponding to the initial image, and performing horizontal correction on the initial image by using the image deviation information to obtain a target image corresponding to the initial image.
The image deviation calculation refers to a process of calculating an initial image inclination angle. The image deviation information refers to an angle at which the initial image is tilted. The target image is an image obtained by horizontally correcting the initial image.
Specifically, the terminal uses the deviation angle between the longitudinal edge line and the horizontal line in the edge information of the target object to perform image deviation calculation, so as to obtain image deviation information corresponding to the initial image, and the image deviation information represents the deviation angle between the initial image and the horizontal line. And the terminal performs rotation correction of a corresponding angle on the initial image according to the image deviation information to obtain a target image after the initial image is corrected. The terminal may then store the target image locally for later use.
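As an illustrative sketch of the horizontal correction described above (assuming OpenCV; the sign convention of the angle is an assumption), the initial image can be rotated about its center by the deviation angle:

```python
# Sketch of the horizontal-correction step: rotate the initial image by the image deviation angle.
import cv2

def correct_tilt(initial_image, deviation_deg):
    h, w = initial_image.shape[:2]
    center = (w / 2.0, h / 2.0)
    rot = cv2.getRotationMatrix2D(center, deviation_deg, 1.0)   # positive angle rotates counterclockwise
    return cv2.warpAffine(initial_image, rot, (w, h))            # target image, same size as the input
```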
In the image processing method, the image error between the reference image and the initial image is calculated, and initial image edge extraction is then performed on the initial image. Initial object edge extraction is performed on the error information and the initial image edge information to obtain initial object edge information, which represents the preliminary edge information of the moving object. Target object edge extraction is then performed on the initial object edge information to obtain target object edge information, which is more accurate edge information of the moving object. The image deviation information calculated from the target object edge information is therefore more accurate; further, the initial image is horizontally corrected through the image deviation information, and the obtained target image is more accurate. Thus, the accuracy of image processing is improved.
In one embodiment, as shown in fig. 3, a flow diagram of target object edge extraction is provided; step 206, performing target object edge extraction based on the initial object edge information to obtain target object edge information corresponding to the moving object, including:
step 302, performing target image edge extraction on the initial image to obtain target image edge information;
and 304, performing target object edge extraction based on the initial object edge information and the target image edge information to obtain target object edge information corresponding to the moving object.
The target image edge extraction refers to a process of performing straight-line extraction on the edges of all objects in the initial image. The target image edge information refers to an image of the straight-line edges of all objects in the initial image.
Specifically, the terminal performs straight-line extraction on all object edges in the initial image to obtain target image edge information corresponding to the initial image, and the target image edge information represents an image of the straight-line edges of all objects in the initial image. The terminal may use the Canny edge detection algorithm to perform the straight-line extraction on all objects in the initial image. The terminal then extracts the common pixels of the longitudinal edges of the moving object in the initial object edge information and the straight-line edges in the target image edge information, to obtain the target object edge information corresponding to the moving object, that is, an image representing the longitudinal straight-line edges of the moving object.
The terminal can also use the Canny edge detection algorithm to perform straight-line extraction directly on the moving object edges in the initial object edge information, to obtain the target object edge information corresponding to the moving object.
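A minimal sketch of this step, assuming OpenCV; the Canny thresholds are illustrative assumptions:

```python
# Sketch of target object edge extraction: Canny edges of the whole initial image,
# then an AND with the preliminary (initial) object edges.
import cv2

def extract_target_object_edges(initial_gray, initial_object_edges):
    target_image_edges = cv2.Canny(initial_gray, 50, 150)             # straight-line style edges of all objects
    # keep only pixels that are edges in both maps: the moving object's line edges
    return cv2.bitwise_and(initial_object_edges, target_image_edges)
```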
In the embodiment, the initial object edge information extracted preliminarily is subjected to line edge extraction, so that the accuracy of extracting the edge of the moving object in the initial image is improved, the extracted edge of the moving object is more accurate, and the accuracy of image processing is improved.
In one embodiment, the step 204 of performing initial object edge extraction based on the error information and the initial image edge information to obtain initial object edge information corresponding to the moving object includes:
performing AND operation on the error information and the initial image edge information to obtain initial object edge information corresponding to the moving object;
step 304, performing target object edge extraction based on the initial object edge information and the target image edge information to obtain target object edge information corresponding to the moving object, including:
and performing AND operation on the initial object edge information and the target image edge information to obtain target object edge information corresponding to the moving object.
The AND operation is an operation for extracting the same edge information. The AND operation of the error information and the initial image edge information refers to a process of extracting the edge information that is the same in the error information and the initial image edge information. The AND operation of the initial object edge information and the target image edge information refers to a process of extracting the edge information that is the same in the initial object edge information and the target image edge information.
Specifically, the edge information may be a binarized pixel value. And performing AND operation on each binary pixel value in the error information and each binary pixel value in the initial image edge information to obtain initial object edge information. And performing AND operation on each binary pixel value in the initial object edge information and each binary pixel value in the target image edge information, and extracting the same binary pixel value to obtain the target object edge information.
In a specific embodiment, when the terminal detects that the initial image and the reference image are binarized images, the inter-frame difference operation is performed directly on the initial image and the reference image to obtain error information in the form of a binarized difference image.
When the terminal detects that the initial image and the reference image are non-binarized images, the inter-frame difference operation is performed on the initial image and the reference image to obtain error information in the form of a non-binarized difference image; the non-binarized difference image is then binarized to obtain error information in the form of a binarized difference image.
The terminal can extract the initial image edges of the initial image through a Sobel operator with a longitudinal template, that is, extract the longitudinal edges of all objects in the initial image to obtain non-binarized initial image edge information, and then binarize the non-binarized initial image edge information to obtain binarized initial image edge information. The binarization processing may use a Boolean adaptive threshold binarization algorithm.
The terminal then performs an AND operation on the error information in the form of the binarized difference image and the binarized initial image edge information to obtain the initial object edge information corresponding to the moving object. In this embodiment, by performing the AND operation on the error information and the initial image edge information, an edge image corresponding to the moving object can be extracted from the initial image edge information according to the contour of the moving object in the error information. By performing the AND operation on the initial object edge information and the target image edge information, an image of the accurate edges corresponding to the moving object can be extracted from the target image edge information according to the edge image corresponding to the moving object in the initial object edge information. Through the two AND operations, the accuracy of the moving object edges in the initial image is further improved, thereby improving the accuracy of image processing.
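A sketch of the first AND operation, under the assumptions that OpenCV is used and that Otsu thresholding stands in for the adaptive binarization mentioned above; kernel size and settings are illustrative:

```python
# Sketch of initial object edge extraction: longitudinal (vertical) edges of the initial image
# via a Sobel operator, binarization, then an AND with the binarized error information.
import cv2

def extract_initial_object_edges(initial_gray, error_info):
    sobel_x = cv2.Sobel(initial_gray, cv2.CV_64F, 1, 0, ksize=3)      # responds to vertical edges
    sobel_abs = cv2.convertScaleAbs(sobel_x)
    _, edge_bin = cv2.threshold(sobel_abs, 0, 255,
                                cv2.THRESH_BINARY | cv2.THRESH_OTSU)  # stand-in for the adaptive binarization
    return cv2.bitwise_and(error_info, edge_bin)                      # edges belonging to the moving object
```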
In one embodiment, as shown in FIG. 4, a flow diagram for calculating image deviation information is provided; step 208, performing image deviation calculation based on the edge information of the target object to obtain image deviation information corresponding to the initial image, including:
step 402, performing linear transformation on the edge information of the target object based on a preset linear threshold value to obtain a linear set corresponding to the edge information of the target object;
step 404, calculating a horizontal angle corresponding to each straight line in the straight line set, and performing average calculation of the horizontal angles by using the horizontal angles corresponding to the straight lines to obtain a current horizontal angle corresponding to the initial image;
and step 406, obtaining image deviation information corresponding to the initial image based on the difference between the current horizontal angle and the preset standard vertical angle.
The preset straight line threshold refers to a preset threshold for screening out interfering lines in the target object edge information; it may be the length of a straight line representing half the height of the standard object in the set image. The straight line conversion refers to a process of converting the target object edge information into straight lines. The horizontal angle refers to the deviation angle between a straight line and the image horizontal line. The current horizontal angle refers to the average of the horizontal angles corresponding to the respective straight lines. The horizontal angle average calculation refers to a process of calculating the average value of the horizontal angles corresponding to the respective straight lines. The preset standard vertical angle is the angle between the image vertical line and the image horizontal line, and is generally 90 degrees.
Specifically, the terminal can convert the preset height edge of a sedan in the image into a corresponding straight line, and then take half the length of the converted straight line as the preset straight line threshold. The terminal can also retrieve the preset straight line threshold directly from the local storage system. The terminal then converts the longitudinal edges of the moving object in the target object edge information into corresponding straight lines; this straight line conversion can be performed using the Hough transform line detection algorithm. The terminal then screens the converted straight lines with the preset straight line threshold, and the straight lines reaching the preset straight line threshold form the straight line set.
The terminal then calculates the deviation angle between each straight line in the straight line set and the image horizontal line to obtain the horizontal angle corresponding to each straight line, and accumulates the horizontal angles corresponding to the straight lines to obtain an accumulated horizontal angle. The terminal then counts the number of straight lines and calculates the ratio of the accumulated result to the number of straight lines to obtain the current horizontal angle corresponding to the initial image. The terminal then calculates the difference between the current horizontal angle and the preset standard vertical angle to obtain the image deviation information corresponding to the initial image.
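A sketch of the deviation calculation described above, assuming OpenCV's probabilistic Hough transform; all numeric parameters are assumptions:

```python
# Sketch of image deviation calculation: detect straight lines, filter by the preset line
# threshold (minimum length), average the per-line horizontal angles, subtract 90 degrees.
import cv2
import numpy as np

def image_deviation(target_object_edges, min_line_len=120):
    lines = cv2.HoughLinesP(target_object_edges, 1, np.pi / 180, threshold=50,
                            minLineLength=min_line_len, maxLineGap=5)
    if lines is None:
        return None                                                   # no usable straight line
    horizontal_angles = []
    for x1, y1, x2, y2 in lines[:, 0]:
        angle = np.degrees(np.arctan2(y2 - y1, x2 - x1)) % 180.0      # angle to the image horizontal line
        horizontal_angles.append(angle)
    current_horizontal_angle = float(np.mean(horizontal_angles))      # A, the current horizontal angle
    return current_horizontal_angle - 90.0                            # B = A - 90 degrees
```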
In one embodiment, as shown in FIG. 5, a schematic of image deviation information is provided; the counterclockwise direction is taken as the positive direction, a is a longitudinal line of the moving object in the image, b is a transverse line of the moving object, e is the image horizontal line, d is the image vertical line, A is the current horizontal angle, representing the deviation angle between the longitudinal line a of the moving object and the image horizontal line e, and B is the image deviation information, representing the deviation angle between the transverse line b of the moving object and the image horizontal line.
Then the terminal calculates the difference value between the current horizontal angle A and the preset standard vertical angle to obtain image deviation information B, and the calculation formula is shown as formula (1):
B = A - 90°    formula (1)
B can be positive or negative; when B is positive, the initial image is tilted to the left, and when B is negative, the initial image is tilted to the right. For example, if A is 80°, then B = 80° - 90° = -10°, indicating that the moving object in the initial image is tilted 10° to the right of the vertical direction; if A is 130°, then B = 130° - 90° = 40°, indicating that the moving object in the initial image is tilted 40° to the left of the vertical direction.
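A minimal illustration of this sign convention (a hypothetical helper, not part of the disclosure):

```python
# Positive B: tilted left; negative B: tilted right; zero: upright.
def tilt_direction(B):
    if B > 0:
        return "tilted left by %.1f degrees" % B
    if B < 0:
        return "tilted right by %.1f degrees" % -B
    return "upright"

# tilt_direction(80 - 90)  -> "tilted right by 10.0 degrees"
# tilt_direction(130 - 90) -> "tilted left by 40.0 degrees"
```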
In another embodiment, as shown in FIG. 6, a schematic diagram of calculating image deviation information is provided; the terminal can calculate the deviation angle between each straight line in the straight line set and the image vertical line to obtain the vertical angle corresponding to each straight line, and then accumulate the vertical angles corresponding to the straight lines to obtain an accumulated vertical angle. The terminal then counts the number of straight lines and calculates the ratio of the accumulated result to the number of straight lines to obtain the average vertical deviation angle corresponding to the initial image, which represents the deviation angle between the longitudinal line of the moving object in the initial image and the image vertical line; the terminal takes this average vertical deviation angle as the image deviation information. The terminal then judges the tilt direction of the moving object according to the quadrant in which the longitudinal line of the moving object lies. In the figure, a is a longitudinal line of the moving object in the image, d is the image vertical line, and D is the average vertical deviation angle, representing the deviation angle between the longitudinal line a of the moving object and the image vertical line d. When the longitudinal line a of the moving object lies in the first quadrant, the moving object in the initial image is tilted to the right of the vertical direction; when the longitudinal line a lies in the second quadrant, the moving object in the initial image is tilted to the left of the vertical direction.
In this embodiment, by performing linear transformation on the edge information of the target object, the initial image can be horizontally detected through the linear set, and the image deviation information corresponding to the initial image is obtained, so that the initial image can be corrected according to the image deviation information, and the image processing efficiency is improved.
In one embodiment, as shown in FIG. 7, a flow diagram for calculating target image deviation information is provided; the method further comprises the following steps:
step 702, acquiring historical horizontal angles of historical straight lines corresponding to historical target object edge information in a preset historical time period;
step 704, performing average calculation of horizontal angles based on the horizontal angles of the straight lines and the historical horizontal angles of the historical straight lines to obtain an average horizontal angle corresponding to the initial image;
and step 706, obtaining target image deviation information corresponding to the initial image based on the difference value between the average horizontal angle and the preset standard vertical angle.
The preset historical time period is a preset period of time. The historical target object edge information is the target object edge information corresponding to initial images processed within the historical time period. The historical horizontal angle is the horizontal angle of each straight line corresponding to the historical target object edge information. The average horizontal angle is the average of the horizontal angles of the current straight lines and the historical horizontal angles of the historical straight lines. The target image deviation information refers to the tilt angle of the moving object in the initial image.
Specifically, the terminal may obtain, from the local data storage system, the historical horizontal angles of the historical straight lines corresponding to the historical target object edge information in the historical time period according to the acquisition time of the initial image. The history time period may be a time period determined according to the moving speed and length of the moving object, and for example, the history time period may be set to 0.2 seconds. And then the terminal accumulates the horizontal angle of each straight line and the historical horizontal angle of each historical straight line to obtain a horizontal angle accumulation result. And the terminal counts the total number of each straight line and each historical straight line, and calculates the ratio of the horizontal angle accumulation result to the total number to obtain the average horizontal angle corresponding to the initial image.
The terminal calculates the difference between the average horizontal angle and the preset standard vertical angle to obtain the target image deviation information corresponding to the initial image.
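A sketch of averaging over the preset historical time period; the sliding-window data structure is an assumption about how the historical horizontal angles could be stored:

```python
# Sketch of the historical horizontal-angle average: keep (timestamp, angle) pairs and
# average the current angles together with those from the last window_s seconds.
import time
from collections import deque

class AngleHistory:
    def __init__(self, window_s=0.2):
        self.window_s = window_s
        self._records = deque()                       # (timestamp, horizontal angle) pairs

    def average_horizontal_angle(self, current_angles, now=None):
        now = time.time() if now is None else now
        for a in current_angles:
            self._records.append((now, a))
        while self._records and now - self._records[0][0] > self.window_s:
            self._records.popleft()                   # drop angles outside the historical time period
        angles = [a for _, a in self._records]
        return sum(angles) / len(angles) if angles else None

# target image deviation = average horizontal angle - 90° (the preset standard vertical angle)
```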
In the embodiment, the historical horizontal angles of the historical straight lines corresponding to the historical target object edge information are introduced, and the horizontal angle average calculation is performed on the horizontal angles of the straight lines and the historical horizontal angles of the historical straight lines, so that the obtained average horizontal angle corresponding to the initial image is more accurate, errors caused by single calculation are avoided, and the accuracy of image processing is improved.
In one embodiment, step 208, performing horizontal rectification on the initial image by using the image deviation information to obtain a target image corresponding to the initial image, includes:
when the image deviation information does not reach a preset image deviation threshold value, acquiring a corresponding preset correction parameter based on the image deviation information;
and horizontally correcting the initial image by using preset correction parameters to obtain a target image.
The preset image deviation threshold is a preset judgment threshold used for judging whether the initial image is inclined or not. The preset correction parameters are the most suitable correction parameters for correcting the initial image, and can be acquired from a correction parameter library.
Specifically, the terminal obtains the preset image deviation threshold and compares the image deviation information with it. When the image deviation information does not reach the preset image deviation threshold, the terminal determines, according to the image deviation information, the range of the most suitable correction parameter in the correction parameter library; the terminal can determine this range by binary search, the correction parameters being sorted in the correction parameter library in advance according to their angle deviation values, with each angle deviation value corresponding one-to-one to a correction parameter. The terminal traverses the angle deviation values in that range according to the image deviation information, determines the angle deviation value that is closest to or identical with the image deviation information, and takes the correction parameter corresponding to that angle deviation value as the optimal correction parameter. The terminal can then configure the optimal correction parameter to the image processing module, so that the image processing module corrects the initial image according to the configured optimal correction parameter to obtain the target image.
When the terminal detects that the image deviation information reaches the preset image deviation threshold, the initial image is regarded as not tilted; the initial image is not corrected and can be used directly for subsequent processing.
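A sketch of the parameter lookup described above, assuming the correction parameter library is a pair of parallel lists sorted by angle deviation value; the bisect-based search is one possible realization of the binary search:

```python
# Sketch of finding the correction parameter whose angle deviation value is closest to the
# measured image deviation, in a library pre-sorted by angle deviation value.
import bisect

def find_correction_parameter(deviation, sorted_deviations, parameters):
    # sorted_deviations[i] corresponds one-to-one with parameters[i]
    i = bisect.bisect_left(sorted_deviations, deviation)
    if i == 0:
        return parameters[0]
    if i == len(sorted_deviations):
        return parameters[-1]
    before, after = sorted_deviations[i - 1], sorted_deviations[i]
    return parameters[i] if after - deviation < deviation - before else parameters[i - 1]
```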
In the embodiment, the most suitable correction parameter can be quickly searched from the correction parameter library according to the image deviation information, and the initial image is corrected according to the correction parameter, so that the image processing efficiency is improved.
In an embodiment, in step 208, after the initial image is corrected by using the preset correction parameter to obtain the target image corresponding to the initial image, the method further includes:
acquiring an initial image sequence;
traversing each initial image in the initial image sequence to obtain a target image sequence corresponding to the initial image sequence;
and sequentially splicing all target images in the target image sequence to obtain a target moving object image, and identifying the target moving object based on the target moving object image to obtain a target moving object identification result.
The initial image sequence refers to the temporally consecutive initial images continuously acquired while the moving object moves; different initial images may include different parts of the moving object, for example, a vehicle image sequence from the head image to the tail image as the vehicle moves. The target moving object image refers to a stitched image containing the complete moving object.
Specifically, the terminal acquires an initial image sequence, searches for a corresponding target image according to each initial image in the initial image sequence, sequentially splices the target images according to the sequence order to obtain a target moving object image including a complete moving object, and then can perform subsequent processing such as object recognition on the target moving object image.
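A sketch of the stitching step, assuming OpenCV and corrected target images of equal height; horizontal concatenation in sequence order is an assumption, since the patent does not fix the stitching direction:

```python
# Sketch of splicing the corrected target images into one target moving object image.
import cv2

def stitch_target_images(target_images):
    return cv2.hconcat(list(target_images))   # images must share height and channel count

# target_moving_object_image = stitch_target_images(target_image_sequence)
# ...followed by object recognition on the stitched image.
```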
In this embodiment, the corrected target images are spliced to obtain a horizontal target moving object image, and the target moving object image is used for identification, so that the identification accuracy of the moving object can be improved.
In one embodiment, as shown in FIG. 8, a flow diagram of horizontal rectification of an initial image is provided; the terminal acquires an initial image sequence through an image acquisition unit at a frame rate of 50 fps, then acquires the initial image currently to be processed from the initial image sequence, and acquires a reference image adjacent to the initial image. The terminal inputs the initial image and the reference image into a difference analysis unit to perform the inter-frame difference operation and binarization processing, obtaining a difference binary image. The initial image is input into a vertical edge analysis unit, a convolution operation is performed on the initial image through a Sobel operator with a transverse template, and binarization processing is performed to obtain a vertical edge binary image. The terminal then performs an AND operation on the difference binary image and the vertical edge binary image to obtain an initial vertical edge map corresponding to the moving object. The terminal uses the Canny edge detection algorithm to perform line edge extraction on the initial vertical edge map to obtain a more accurate line vertical edge map corresponding to the moving object.
The terminal performs straight-line conversion on the line vertical edge map by using the Hough transform line detection algorithm and performs line screening with the preset straight line threshold to obtain a straight line set corresponding to the target line vertical edge map. The terminal calculates the horizontal angle corresponding to each straight line in the straight line set, and then obtains the historical horizontal angles of the historical straight lines corresponding to historical target object edge information within the preset historical time period. The terminal performs a horizontal angle average calculation on the horizontal angles of the straight lines and the historical horizontal angles of the historical straight lines to obtain the average horizontal angle corresponding to the initial image, and then calculates the difference between the average horizontal angle and the preset standard vertical angle to obtain the target image deviation information corresponding to the initial image.
And the terminal searches the optimal correction parameter in the correction parameter library according to the target image deviation information, and configures the optimal correction parameter to the image processing module, so that the image processing module corrects the initial image according to the configured optimal correction parameter to obtain the target image.
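Chaining the illustrative helpers sketched above gives an end-to-end outline of the flow in FIG. 8; all names are assumptions, and the correction parameter is assumed to be a rotation angle:

```python
# Sketch of the per-frame flow: error info -> initial object edges -> target object edges
# -> image deviation -> correction parameter lookup -> horizontal correction.
import cv2

def process_frame(initial_bgr, reference_bgr, sorted_devs, params):
    init_gray = cv2.cvtColor(initial_bgr, cv2.COLOR_BGR2GRAY)
    ref_gray = cv2.cvtColor(reference_bgr, cv2.COLOR_BGR2GRAY)
    error_info = compute_error_info(init_gray, ref_gray)
    initial_edges = extract_initial_object_edges(init_gray, error_info)
    target_edges = extract_target_object_edges(init_gray, initial_edges)
    deviation = image_deviation(target_edges)
    if deviation is None:
        return initial_bgr                            # no line detected: skip correction
    angle = find_correction_parameter(deviation, sorted_devs, params)
    return correct_tilt(initial_bgr, angle)
```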
It should be understood that, although the steps in the flowcharts of the above embodiments are displayed in sequence as indicated by the arrows, these steps are not necessarily performed in that order. Unless explicitly stated otherwise herein, the order of these steps is not strictly limited, and they may be performed in other orders. Moreover, at least some of the steps in the flowcharts of the above embodiments may include multiple sub-steps or stages, which are not necessarily performed at the same moment but may be performed at different moments, and whose execution order is not necessarily sequential; they may be performed in turn or alternately with other steps or with at least some of the sub-steps or stages of other steps.
Based on the same inventive concept, the embodiment of the present application further provides an image processing apparatus for implementing the image processing method. The implementation scheme for solving the problem provided by the apparatus is similar to the implementation scheme described in the above method, so specific limitations in one or more embodiments of the image processing apparatus provided below can be referred to the limitations of the image processing method in the foregoing, and details are not described here.
In one embodiment, as shown in fig. 9, there is provided an image processing apparatus 900 including: an error module 902, an initial edge extraction module 904, a target edge extraction module 906, and a remediation module 908, wherein:
an error module 902, configured to obtain an initial image of a moving object and a reference image adjacent to the initial image, and calculate an image error between the reference image and the initial image to obtain error information;
an initial edge extraction module 904, configured to perform initial image edge extraction on the initial image to obtain initial image edge information, and perform initial object edge extraction based on the error information and the initial image edge information to obtain initial object edge information corresponding to the moving object;
a target edge extraction module 906, configured to perform target object edge extraction based on the initial object edge information to obtain target object edge information corresponding to the moving object;
a correcting module 908, configured to perform image deviation calculation based on the edge information of the target object to obtain image deviation information corresponding to the initial image, and perform horizontal correction on the initial image by using the image deviation information to obtain a target image corresponding to the initial image.
In one embodiment, the target edge extraction module 906 includes:
the object edge extraction unit is used for extracting the edge of the target image from the initial image to obtain the edge information of the target image;
and performing target object edge extraction based on the initial object edge information and the target image edge information to obtain target object edge information corresponding to the moving object.
In one embodiment, the initial edge extraction module 904 includes:
the AND operation unit is used for carrying out AND operation on the error information and the initial image edge information to obtain initial object edge information corresponding to the moving object;
and performing AND operation on the initial object edge information and the target image edge information to obtain target object edge information corresponding to the moving object.
In one embodiment, the orthotic module 908, comprises:
the conversion unit is used for carrying out linear conversion on the edge information of the target object based on a preset linear threshold value to obtain a linear set corresponding to the edge information of the target object;
calculating the horizontal angle corresponding to each straight line in the straight line set, and performing horizontal angle average calculation by using the horizontal angle corresponding to each straight line to obtain the current horizontal angle corresponding to the initial image;
and obtaining image deviation information corresponding to the initial image based on the difference value between the current horizontal angle and the preset standard vertical angle.
In one embodiment, the image processing apparatus 900 further includes:
the historical information unit is used for acquiring historical horizontal angles of historical straight lines corresponding to the historical target object edge information in a preset historical time period;
carrying out horizontal angle average calculation based on the horizontal angle of each straight line and the historical horizontal angle of each historical straight line to obtain an average horizontal angle corresponding to the initial image;
and obtaining target image deviation information corresponding to the initial image based on the difference value of the average horizontal angle and the preset standard vertical angle.
In one embodiment, the correction module 908 includes:
the threshold judgment unit is used for acquiring a corresponding preset correction parameter based on the image deviation information when the image deviation information does not reach a preset image deviation threshold value, and horizontally correcting the initial image by using the preset correction parameter to obtain the target image.
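The threshold check and horizontal correction might be sketched as below, where treating the image deviation as a rotation angle about the image centre, and the threshold value itself, are assumptions:

```python
import cv2

def correct_horizontal(initial_image, deviation_deg, deviation_thresh=15.0):
    # Only correct when the deviation has not reached the preset threshold;
    # larger deviations are left uncorrected in this sketch
    if abs(deviation_deg) >= deviation_thresh:
        return initial_image
    h, w = initial_image.shape[:2]
    # Rotate about the image centre by the correction angle
    rot = cv2.getRotationMatrix2D((w / 2, h / 2), deviation_deg, 1.0)
    return cv2.warpAffine(initial_image, rot, (w, h))
```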
In one embodiment, the image processing apparatus 900 further includes:
the splicing unit is used for acquiring an initial image sequence; traversing each initial image in the initial image sequence to obtain a target image sequence corresponding to the initial image sequence;
and sequentially splicing all target images in the target image sequence to obtain a target moving object image, and identifying the target moving object based on the target moving object image to obtain a target moving object identification result.
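A hedged sketch of the splicing unit, assuming the corrected target images are strips of equal height captured as the object moves past the camera and that the downstream recognizer is supplied by the caller; both assumptions go beyond what the application specifies:

```python
import cv2

def stitch_and_recognize(target_images, recognizer):
    # Splice the corrected frames in acquisition order into one object image
    target_object_image = cv2.hconcat(list(target_images))
    # `recognizer` is a placeholder for whatever recognition model is used
    return recognizer(target_object_image)
```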
The respective modules in the image processing apparatus described above may be wholly or partially implemented by software, hardware, and a combination thereof. The modules can be embedded in a hardware form or independent from a processor in the computer device, and can also be stored in a memory in the computer device in a software form, so that the processor can call and execute operations corresponding to the modules.
In one embodiment, a computer device is provided, which may be a server, and its internal structure diagram may be as shown in fig. 10. The computer device includes a processor, a memory, an Input/Output interface (I/O for short), and a communication interface. The processor, the memory and the input/output interface are connected through a system bus, and the communication interface is connected to the system bus through the input/output interface. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, a computer program, and a database. The internal memory provides an environment for the operation of an operating system and computer programs in the non-volatile storage medium. The database of the computer device is used for storing the initial image sequence. The input/output interface of the computer device is used for exchanging information between the processor and an external device. The communication interface of the computer device is used for connecting and communicating with an external terminal through a network. The computer program is executed by a processor to implement an image processing method.
In one embodiment, a computer device is provided, which may be a terminal, and its internal structure diagram may be as shown in fig. 11. The computer device includes a processor, a memory, an input/output interface, a communication interface, a display unit, and an input device. The processor, the memory and the input/output interface are connected through a system bus, and the communication interface, the display unit and the input device are connected to the system bus through the input/output interface. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of the operating system and the computer program in the non-volatile storage medium. The input/output interface of the computer device is used for exchanging information between the processor and an external device. The communication interface of the computer device is used for wired or wireless communication with an external terminal, and the wireless communication can be realized through WIFI, a mobile cellular network, NFC (near field communication) or other technologies. The computer program is executed by a processor to implement an image processing method. The display unit of the computer device is used for forming a visually perceptible picture and may be a display screen, a projection device or a virtual reality imaging device; the display screen may be a liquid crystal display screen or an electronic ink display screen. The input device of the computer device may be a touch layer covering the display screen, a key, a trackball or a touch pad arranged on the housing of the computer device, or an external keyboard, touch pad or mouse, among others.
It will be appreciated by those skilled in the art that the structures shown in fig. 10 and fig. 11 are merely block diagrams of partial structures related to the solution of the present application and do not constitute a limitation on the computer devices to which the solution of the present application is applied; a particular computer device may include more or fewer components than shown in the figures, combine certain components, or have a different arrangement of components.
In one embodiment, a computer device is provided, comprising a memory having a computer program stored therein and a processor that when executing the computer program performs the steps of:
acquiring an initial image of a moving object and a reference image adjacent to the initial image, and calculating an image error between the reference image and the initial image to obtain error information; performing initial image edge extraction on the initial image to obtain initial image edge information, and performing initial object edge extraction on the basis of the error information and the initial image edge information to obtain initial object edge information corresponding to the moving object; extracting the edge of the target object based on the initial object edge information to obtain the edge information of the target object corresponding to the moving object; and performing image deviation calculation based on the edge information of the target object to obtain image deviation information corresponding to the initial image, and performing horizontal correction on the initial image by using the image deviation information to obtain a target image corresponding to the initial image.
In one embodiment, the processor, when executing the computer program, further performs the steps of:
performing target object edge extraction based on the initial object edge information to obtain target object edge information corresponding to the moving object, including: performing target image edge extraction on the initial image to obtain target image edge information; and extracting the edge of the target object based on the initial object edge information and the target image edge information to obtain the edge information of the target object corresponding to the moving object.
In one embodiment, the processor, when executing the computer program, further performs the steps of:
performing initial object edge extraction based on the error information and the initial image edge information to obtain initial object edge information corresponding to the moving object, including: performing AND operation on the error information and the initial image edge information to obtain initial object edge information corresponding to the moving object; performing target object edge extraction based on the initial object edge information and the target image edge information to obtain target object edge information corresponding to the moving object, wherein the method comprises the following steps: and performing AND operation on the initial object edge information and the target image edge information to obtain target object edge information corresponding to the moving object.
In one embodiment, the processor, when executing the computer program, further performs the steps of:
performing image deviation calculation based on the edge information of the target object to obtain image deviation information corresponding to the initial image, wherein the image deviation calculation comprises the following steps: performing linear transformation on the edge information of the target object based on a preset linear threshold value to obtain a linear set corresponding to the edge information of the target object; calculating the horizontal angle corresponding to each straight line in the straight line set, and performing horizontal angle average calculation by using the horizontal angle corresponding to each straight line to obtain the current horizontal angle corresponding to the initial image; and obtaining image deviation information corresponding to the initial image based on the difference value between the current horizontal angle and the preset standard vertical angle.
In one embodiment, the processor, when executing the computer program, further performs the steps of:
the method further comprises the following steps: acquiring historical horizontal angles of historical straight lines corresponding to historical target object edge information in a preset historical time period; carrying out horizontal angle average calculation based on the horizontal angle of each straight line and the historical horizontal angle of each historical straight line to obtain an average horizontal angle corresponding to the initial image; and obtaining target image deviation information corresponding to the initial image based on the difference value of the average horizontal angle and the preset standard vertical angle.
In one embodiment, the processor, when executing the computer program, further performs the steps of:
using the image deviation information to horizontally correct the initial image to obtain a target image corresponding to the initial image, comprising: when the image deviation information does not reach a preset image deviation threshold value, acquiring a corresponding preset correction parameter based on the image deviation information; and horizontally correcting the initial image by using preset correction parameters to obtain a target image.
In one embodiment, the processor, when executing the computer program, further performs the steps of:
after the initial image is horizontally corrected by using the image deviation information to obtain the target image corresponding to the initial image, the method further includes: acquiring an initial image sequence; traversing each initial image in the initial image sequence to obtain a target image sequence corresponding to the initial image sequence; and sequentially splicing all target images in the target image sequence to obtain a target moving object image, and identifying the target moving object based on the target moving object image to obtain a target moving object identification result.
In one embodiment, a computer-readable storage medium is provided, having a computer program stored thereon, which when executed by a processor, performs the steps of:
acquiring an initial image of a moving object and a reference image adjacent to the initial image, and calculating an image error between the reference image and the initial image to obtain error information; performing initial image edge extraction on the initial image to obtain initial image edge information, and performing initial object edge extraction on the basis of the error information and the initial image edge information to obtain initial object edge information corresponding to the moving object; extracting the edge of the target object based on the initial object edge information to obtain the edge information of the target object corresponding to the moving object; and performing image deviation calculation based on the edge information of the target object to obtain image deviation information corresponding to the initial image, and performing horizontal correction on the initial image by using the image deviation information to obtain a target image corresponding to the initial image.
In one embodiment, the computer program when executed by the processor further performs the steps of:
performing target object edge extraction based on the initial object edge information to obtain target object edge information corresponding to the moving object, including: performing target image edge extraction on the initial image to obtain target image edge information; and extracting the edge of the target object based on the initial object edge information and the target image edge information to obtain the edge information of the target object corresponding to the moving object.
In one embodiment, the computer program when executed by the processor further performs the steps of:
performing initial object edge extraction based on the error information and the initial image edge information to obtain initial object edge information corresponding to the moving object, including: performing AND operation on the error information and the initial image edge information to obtain initial object edge information corresponding to the moving object; performing target object edge extraction based on the initial object edge information and the target image edge information to obtain target object edge information corresponding to the moving object, wherein the method comprises the following steps: and performing AND operation on the initial object edge information and the target image edge information to obtain target object edge information corresponding to the moving object.
In one embodiment, the computer program when executed by the processor further performs the steps of:
performing image deviation calculation based on the edge information of the target object to obtain image deviation information corresponding to the initial image, wherein the image deviation calculation comprises the following steps: performing linear transformation on the edge information of the target object based on a preset linear threshold value to obtain a linear set corresponding to the edge information of the target object; calculating the horizontal angle corresponding to each straight line in the straight line set, and performing average calculation on the horizontal angles by using the horizontal angles corresponding to the straight lines to obtain the current horizontal angle corresponding to the initial image; and obtaining image deviation information corresponding to the initial image based on the difference value between the current horizontal angle and the preset standard vertical angle.
In one embodiment, the computer program when executed by the processor further performs the steps of:
the method further comprises the following steps: acquiring historical horizontal angles of historical straight lines corresponding to historical target object edge information in a preset historical time period; carrying out horizontal angle average calculation based on the horizontal angle of each straight line and the historical horizontal angle of each historical straight line to obtain an average horizontal angle corresponding to the initial image; and obtaining target image deviation information corresponding to the initial image based on the difference value of the average horizontal angle and the preset standard vertical angle.
In one embodiment, the computer program when executed by the processor further performs the steps of:
using the image deviation information to horizontally correct the initial image to obtain a target image corresponding to the initial image, comprising: when the image deviation information does not reach a preset image deviation threshold value, acquiring a corresponding preset correction parameter based on the image deviation information; and horizontally correcting the initial image by using preset correction parameters to obtain a target image.
In one embodiment, the computer program when executed by the processor further performs the steps of:
after the initial image is horizontally corrected by using the image deviation information to obtain the target image corresponding to the initial image, the method further includes: acquiring an initial image sequence; traversing each initial image in the initial image sequence to obtain a target image sequence corresponding to the initial image sequence; and sequentially splicing all target images in the target image sequence to obtain a target moving object image, and identifying the target moving object based on the target moving object image to obtain a target moving object identification result.
In one embodiment, a computer program product is provided, comprising a computer program which, when executed by a processor, implements the steps of the above method embodiments.
It should be noted that, the user information (including but not limited to user equipment information, user personal information, etc.) and data (including but not limited to data for analysis, stored data, displayed data, etc.) referred to in the present application are information and data authorized by the user or sufficiently authorized by each party, and the collection, use and processing of the related data need to comply with the relevant laws and regulations and standards of the relevant country and region.
It will be understood by those skilled in the art that all or part of the processes of the methods of the above embodiments can be implemented by a computer program instructing relevant hardware; the computer program can be stored in a non-volatile computer-readable storage medium and, when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, database, or other medium used in the embodiments provided herein may include at least one of non-volatile and volatile memory. The non-volatile memory may include Read-Only Memory (ROM), magnetic tape, floppy disk, flash memory, optical memory, high-density embedded non-volatile memory, Resistive Random Access Memory (ReRAM), Magnetic Random Access Memory (MRAM), Ferroelectric Random Access Memory (FRAM), Phase Change Memory (PCM), graphene memory, and the like. Volatile memory may include Random Access Memory (RAM), external cache memory, and the like. By way of illustration and not limitation, RAM can take many forms, such as Static Random Access Memory (SRAM) or Dynamic Random Access Memory (DRAM). The databases referred to in the various embodiments provided herein may include at least one of relational and non-relational databases. The non-relational database may include, but is not limited to, a blockchain-based distributed database, and the like. The processors referred to in the embodiments provided herein may be general-purpose processors, central processing units, graphics processors, digital signal processors, programmable logic devices, data processing logic devices based on quantum computing, and the like, without limitation.
The technical features of the above embodiments may be combined arbitrarily. For brevity, not all possible combinations of the technical features in the above embodiments are described; however, as long as there is no contradiction between the combinations of these technical features, they should be considered to fall within the scope of this specification.
The above embodiments express only several implementations of the present application, and their descriptions are specific and detailed, but they shall not therefore be construed as limiting the scope of the present application. It should be noted that a person skilled in the art can make several variations and improvements without departing from the concept of the present application, all of which fall within the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the appended claims.

Claims (10)

1. An image processing method, characterized in that the method comprises:
acquiring an initial image of a moving object and a reference image adjacent to the initial image, and calculating an image error between the reference image and the initial image to obtain error information;
performing initial image edge extraction on the initial image to obtain initial image edge information, and performing initial object edge extraction based on the error information and the initial image edge information to obtain initial object edge information corresponding to the moving object;
performing target object edge extraction based on the initial object edge information to obtain target object edge information corresponding to the moving object;
and performing image deviation calculation based on the edge information of the target object to obtain image deviation information corresponding to the initial image, and performing horizontal correction on the initial image by using the image deviation information to obtain a target image corresponding to the initial image.
2. The method according to claim 1, wherein the performing target object edge extraction based on the initial object edge information to obtain target object edge information corresponding to the moving object includes:
performing target image edge extraction on the initial image to obtain target image edge information;
and performing target object edge extraction based on the initial object edge information and the target image edge information to obtain target object edge information corresponding to the moving object.
3. The method according to claim 2, wherein the performing initial object edge extraction based on the error information and the initial image edge information to obtain initial object edge information corresponding to the moving object comprises:
performing an AND operation on the error information and the initial image edge information to obtain the initial object edge information corresponding to the moving object;
the extracting the edge of the target object based on the initial object edge information and the target image edge information to obtain the edge information of the target object corresponding to the moving object includes:
and performing an AND operation on the initial object edge information and the target image edge information to obtain the target object edge information corresponding to the moving object.
4. The method according to claim 1, wherein the performing an image deviation calculation based on the edge information of the target object to obtain image deviation information corresponding to the initial image comprises:
performing linear transformation on the edge information of the target object based on a preset linear threshold value to obtain a straight line set corresponding to the edge information of the target object;
calculating the horizontal angle corresponding to each straight line in the straight line set, and performing horizontal angle average calculation by using the horizontal angle corresponding to each straight line to obtain the current horizontal angle corresponding to the initial image;
and obtaining image deviation information corresponding to the initial image based on the difference value between the current horizontal angle and a preset standard vertical angle.
5. The method of claim 4, further comprising:
acquiring historical horizontal angles of historical straight lines corresponding to historical target object edge information in a preset historical time period;
performing horizontal angle average calculation based on the horizontal angle of each straight line and the historical horizontal angle of each historical straight line to obtain an average horizontal angle corresponding to the initial image;
and obtaining target image deviation information corresponding to the initial image based on the difference value between the average horizontal angle and a preset standard vertical angle.
6. The method according to claim 1, wherein the horizontally rectifying the initial image by using the image deviation information to obtain a target image corresponding to the initial image comprises:
when the image deviation information does not reach a preset image deviation threshold value, acquiring a corresponding preset correction parameter based on the image deviation information;
and horizontally correcting the initial image by using the preset correction parameters to obtain the target image.
7. The method according to claim 1, further comprising, after the performing horizontal correction on the initial image by using the image deviation information to obtain the target image corresponding to the initial image:
acquiring an initial image sequence;
traversing each initial image in the initial image sequence to obtain a target image sequence corresponding to the initial image sequence;
and sequentially splicing all target images in the target image sequence to obtain a target moving object image, and identifying a target moving object based on the target moving object image to obtain a target moving object identification result.
8. An image processing apparatus, characterized in that the apparatus comprises:
the error module is used for acquiring an initial image of a moving object and a reference image adjacent to the initial image, and calculating an image error between the reference image and the initial image to obtain error information;
an initial edge extraction module, configured to perform initial image edge extraction on the initial image to obtain initial image edge information, and perform initial object edge extraction based on the error information and the initial image edge information to obtain initial object edge information corresponding to the moving object;
a target edge extraction module, configured to perform target object edge extraction based on the initial object edge information to obtain target object edge information corresponding to the moving object;
and the correction module is used for performing image deviation calculation based on the edge information of the target object to obtain image deviation information corresponding to the initial image, and performing horizontal correction on the initial image by using the image deviation information to obtain a target image corresponding to the initial image.
9. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor, when executing the computer program, implements the steps of the method of any of claims 1 to 7.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method of any one of claims 1 to 7.
CN202210378886.8A 2022-04-12 2022-04-12 Image processing method, image processing device, computer equipment and storage medium Pending CN114820672A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210378886.8A CN114820672A (en) 2022-04-12 2022-04-12 Image processing method, image processing device, computer equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210378886.8A CN114820672A (en) 2022-04-12 2022-04-12 Image processing method, image processing device, computer equipment and storage medium

Publications (1)

Publication Number Publication Date
CN114820672A 2022-07-29

Family

ID=82534473

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210378886.8A Pending CN114820672A (en) 2022-04-12 2022-04-12 Image processing method, image processing device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114820672A (en)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination