CN113034605A - Target object position determining method and device, electronic equipment and storage medium - Google Patents

Target object position determining method and device, electronic equipment and storage medium

Info

Publication number
CN113034605A
CN113034605A CN201911360878.5A
Authority
CN
China
Prior art keywords
target object
determining
imaging plane
camera
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201911360878.5A
Other languages
Chinese (zh)
Other versions
CN113034605B (en)
Inventor
王洪伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Geely Holding Group Co Ltd
Ningbo Geely Automobile Research and Development Co Ltd
Original Assignee
Zhejiang Geely Holding Group Co Ltd
Ningbo Geely Automobile Research and Development Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Geely Holding Group Co Ltd, Ningbo Geely Automobile Research and Development Co Ltd filed Critical Zhejiang Geely Holding Group Co Ltd
Priority to CN201911360878.5A priority Critical patent/CN113034605B/en
Publication of CN113034605A publication Critical patent/CN113034605A/en
Application granted granted Critical
Publication of CN113034605B publication Critical patent/CN113034605B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/80Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Length Measuring Devices By Optical Means (AREA)
  • Studio Devices (AREA)

Abstract

The application relates to a method and a device for determining the position of a target object, an electronic device and a storage medium, wherein an image shot by a camera is acquired; parameters of the camera are determined; the type of the target object and feature point information of the target object are determined based on the image and the parameters of the camera, the feature point information comprising the incident light angle of the feature point and the undistorted focal length; coordinate information of the feature points on a theoretical imaging plane is determined according to the incident light angle and the undistorted focal length, the theoretical imaging plane being the image after distortion removal; the height of the target object in the theoretical imaging plane is determined according to the coordinate information of the feature points on the theoretical imaging plane; the actual height of the target object is determined according to the type of the target object; and the position of the target object is determined according to the height of the target object in the theoretical imaging plane, the actual height and the undistorted focal length. In this way, the accuracy of detecting the target object with a monocular camera can be improved, while the method has low complexity and good real-time performance.

Description

Target object position determining method and device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of vehicle technologies, and in particular, to a method and an apparatus for determining a position of a target object, an electronic device, and a storage medium.
Background
Vision-based obstacle detection is a very important technology in autonomous driving; detecting the position of an obstacle in three-dimensional space by vision can improve the safety of autonomous driving. In order to achieve 360-degree field-of-view coverage around the vehicle body, multiple cameras are arranged, including various types such as long-focus, short-focus, wide-angle and fisheye cameras. The real world is a three-dimensional space, while camera imaging produces two-dimensional images, so depth information is lost. Therefore, in order to estimate obstacle position information, the common methods are binocular ranging and monocular ranging. Binocular ranging is based on the parallax principle: two images of the object to be measured are acquired from different positions by imaging equipment, and the three-dimensional geometric information of the object is obtained by calculating the position deviation between corresponding points of the two images. Monocular ranging mainly estimates the distance of an obstacle by using the geometric relation of pinhole imaging and the intrinsic parameters of the camera. As shown in fig. 1, the pinhole imaging model is the most commonly used model, where O is the center of the camera coordinate system, the Z axis is the principal axis of the camera, and O1 is the point where the principal axis intersects the imaging plane. For a point Q(X, Y, Z) in the world coordinate system, the imaging point on the imaging plane is q(x, y, f). From the real height of the obstacle, its height in the image and the camera parameters, the distance of the obstacle can be estimated by the similar-triangle theorem.
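For illustration only, a minimal Python sketch of this similar-triangle distance estimate follows; the function and variable names are assumptions introduced here and are not part of the patent text.

```python
# Minimal sketch of ideal pinhole-model monocular ranging via similar triangles.
# estimate_distance_pinhole and its parameter names are illustrative assumptions.

def estimate_distance_pinhole(h_real_m: float, h_image_px: float, f_px: float) -> float:
    """Distance Z such that h_image_px / f_px = h_real_m / Z."""
    if h_image_px <= 0:
        raise ValueError("object height in the image must be positive")
    return h_real_m * f_px / h_image_px

# Example: a 1.5 m tall obstacle spanning 60 px with a 900 px focal length
# is roughly 22.5 m away under the ideal (distortion-free) pinhole model.
print(estimate_distance_pinhole(1.5, 60.0, 900.0))
```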
The pinhole-model ranging scheme is an ideal model that establishes a linear mapping between the imaging point and the target point, so it can estimate distance well for cameras with little lens distortion, such as long-focus cameras. However, the pinhole-model ranging scheme does not take the influence of lens distortion into account, so no linear model can be obtained for wide-angle, fisheye and similar cameras, and the obstacle distance error estimated with the pinhole-model scheme is therefore relatively large.
Disclosure of Invention
The embodiment of the application provides a method and a device for determining the position of a target object, an electronic device and a storage medium, which can improve the accuracy of detecting the target object by a monocular camera, and have the advantages of low complexity and high real-time property.
In one aspect, an embodiment of the present application provides a method for determining a position of a target object, including:
acquiring an image shot by a camera;
determining parameters of a camera;
determining the type of the target object and feature point information of the target object based on the image and the parameters of the camera; the characteristic point information comprises an incident light angle of the characteristic point and a focal length after distortion removal;
determining coordinate information of the characteristic points on a theoretical imaging plane according to the incident light angle and the undistorted focal length; the theoretical imaging plane is an image subjected to image distortion removal;
determining the height of the target object in the theoretical imaging plane according to the coordinate information of the characteristic points on the theoretical imaging plane;
determining the actual height of the target object according to the type of the target object;
and determining the position of the target object according to the height of the target object in the theoretical imaging plane, the actual height and the undistorted focal length.
In another aspect, an embodiment of the present application provides a device for determining a position of a target object, including:
the acquisition module is used for acquiring an image shot by the camera;
a first determining module for determining parameters of the camera;
a second determination module for determining a type of the target object and feature point information of the target object based on the image and the parameters of the camera; the characteristic point information comprises an incident light angle of the characteristic point and a focal length after distortion removal;
the third determining module is used for determining the coordinate information of the characteristic points on the theoretical imaging plane according to the incident light angle and the undistorted focal length; the theoretical imaging plane is an image subjected to image distortion removal;
the fourth determining module is used for determining the height of the target object in the theoretical imaging plane according to the coordinate information of the characteristic points on the theoretical imaging plane;
the fifth determining module is used for determining the actual height of the target object according to the type of the target object;
and the sixth determining module is used for determining the position of the target object according to the height of the target object in the theoretical imaging plane, the actual height and the undistorted focal length.
In another aspect, an embodiment of the present application provides an electronic device, where the device includes a processor and a memory, where the memory stores at least one instruction or at least one program, and the at least one instruction or the at least one program is loaded by the processor and executes the method for determining the position of the target object.
In another aspect, an embodiment of the present application provides a computer storage medium, where at least one instruction or at least one program is stored in the storage medium, and the at least one instruction or the at least one program is loaded and executed by a processor to implement the above-mentioned method for determining a position of a target object.
The method, the device, the electronic equipment and the storage medium for determining the position of the target object have the following beneficial effects:
An image photographed by a camera is acquired; parameters of the camera are determined; the type of the target object and feature point information of the target object are determined based on the image and the parameters of the camera, the feature point information comprising the incident light angle of the feature point and the undistorted focal length; coordinate information of the feature points on a theoretical imaging plane is determined according to the incident light angle and the undistorted focal length, the theoretical imaging plane being the image after distortion removal; the height of the target object in the theoretical imaging plane is determined according to the coordinate information of the feature points on the theoretical imaging plane; the actual height of the target object is determined according to the type of the target object; and the position of the target object is determined according to the height of the target object in the theoretical imaging plane, the actual height and the undistorted focal length. In this way, the accuracy of detecting the target object with a monocular camera can be improved, while the method has low complexity and good real-time performance.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the description of the embodiments are briefly introduced below. It is apparent that the drawings in the following description are only some embodiments of the present application, and other drawings can be obtained by those skilled in the art from these drawings without creative effort.
FIG. 1 is a schematic diagram of a pinhole imaging model provided in an embodiment of the present application;
fig. 2 is a schematic diagram of an application scenario provided in an embodiment of the present application;
fig. 3 is a schematic flowchart of a method for determining a position of a target object according to an embodiment of the present application;
FIG. 4 is a schematic diagram of a bounding box of a target object in an image according to an embodiment of the present disclosure;
fig. 5 is a schematic diagram of a fisheye camera model provided in an embodiment of the present application;
FIG. 6 is a schematic diagram of a target object projected onto a theoretical imaging plane according to an embodiment of the present application;
fig. 7 is a schematic structural diagram of a target object position determining apparatus according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
It should be noted that the terms "first," "second," and the like in the description and claims of this application and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the application described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or server that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
Referring to fig. 2, fig. 2 is a schematic view of an application scenario provided in the embodiment of the present application, and the application scenario includes a vehicle 201 and a target object 202, where the vehicle 201 is configured with a plurality of cameras 2011. The vehicle 201 computationally determines the position of the target object 202 via the in-vehicle system based on the image captured by the camera 2011.
The onboard system of the vehicle 201 acquires an image captured by the camera 2011 and determines parameters of the camera 2011. The onboard system of the vehicle 201 determines the type of the target object 202 and feature point information of the target object 202 including the incident light angle and the undistorted focal length of the feature point based on the image and the parameters of the camera 2011. The onboard system of the vehicle 201 determines the coordinate information of the characteristic points on a theoretical imaging plane according to the incident light angle and the undistorted focal length, wherein the theoretical imaging plane is an image after image undistortion. The on-board system of the vehicle 201 determines the height of the target object 202 in the theoretical imaging plane according to the coordinate information of the feature points on the theoretical imaging plane, and determines the actual height of the target object 202 according to the type of the target object 202. The on-board system of the vehicle 201 determines the position of the target object 202 based on the height of the target object 202 in the theoretical imaging plane, the actual height, and the undistorted focal length.
Alternatively, the camera 2011 may be any one of a fisheye camera, a wide-angle camera, a short-focus camera, and a long-focus camera.
A specific embodiment of a method for determining a position of a target object according to the present application is described below. Fig. 3 is a schematic flowchart of a method for determining a position of a target object according to an embodiment of the present application; the present specification provides the method operation steps according to the embodiment or the flowchart, but more or fewer operation steps may be included based on conventional or non-inventive labor. The order of steps recited in the embodiments is merely one of many orders in which the steps may be performed and does not represent the only order of execution. In practice, the system or server product may execute the steps sequentially or in parallel (for example, in a parallel-processor or multi-threaded environment) according to the embodiments or the methods shown in the figures. Specifically, as shown in fig. 3, the method may include:
s301: an image captured by a camera is acquired.
S303: parameters of the camera are determined.
The execution subject of the embodiment of the present application may be the on-board system of a vehicle. The vehicle is provided with a camera, and the on-board system acquires an image taken by the camera and determines the parameters of the camera. The parameters of the camera include the normalized focal length f, the image center position (u0, v0), and the radial distortion parameters k1 and k2.
In an alternative embodiment of determining the normalized focal length f of the camera, the normalized focal length f may be determined according to equation (1):
f=(fx+fy)/2......(1)
where fx denotes the normalized focal length along the x-axis of the camera coordinate system, and fy denotes the normalized focal length along the y-axis of the camera coordinate system.
Optionally, the camera parameters fx and fy, the image center position (u0, v0), and the radial distortion parameters k1 and k2 can be obtained by calibration.
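As a non-authoritative illustration of how the calibrated parameters and formula (1) might be held together in code, a minimal Python sketch follows; the CameraParams class and its field names are assumptions introduced here.

```python
# Sketch of a container for the calibrated camera parameters used below.
# The dataclass and field names are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class CameraParams:
    fx: float   # normalized focal length along the x-axis (from calibration)
    fy: float   # normalized focal length along the y-axis (from calibration)
    u0: float   # image center, x
    v0: float   # image center, y
    k1: float   # radial distortion parameter
    k2: float   # radial distortion parameter

    @property
    def f(self) -> float:
        # formula (1): f = (fx + fy) / 2
        return 0.5 * (self.fx + self.fy)
```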
S305: determining the type of the target object and feature point information of the target object based on the image and the parameters of the camera; the characteristic point information includes an incident light angle of the characteristic point and a focal length after distortion removal.
In the embodiment of the application, the on-board system detects the image with a target detection algorithm to determine the type and bounding box of the target object, determines the feature points of the target object based on the bounding box, and then calculates the incident light angle and the undistorted focal length of each feature point from the camera's normalized focal length f, image center position (u0, v0) and radial distortion parameters k1, k2.
In an optional embodiment of determining the type of the target object and the feature point information of the target object based on the image and the parameters of the camera, the on-board system may use a YOLO target detection algorithm to determine the type and bounding box of the target object. Referring to fig. 4, fig. 4 is a schematic diagram of a bounding box of a target object in an image according to an embodiment of the present application, where the image plane coordinate system is xoy. The on-board system first determines the top-left corner of the bounding box as P1(x1, y1) and the bottom-right corner as P2(x2, y2), then takes the two endpoints Pt and Pb of the center line of the bounding box as feature points, and calculates the coordinate information of Pt and Pb according to formula (2):
xt=(x1+x2)/2
yt=y1
xb=(x1+x2)/2
yb=y2.......(2)
where xt and yt respectively denote the abscissa and ordinate of the feature point Pt, and xb and yb respectively denote the abscissa and ordinate of the feature point Pb.
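A minimal Python sketch of formula (2) follows, purely for illustration; the function name center_line_endpoints is an assumption.

```python
# Sketch of formula (2): the two endpoints Pt, Pb of the bounding box's
# vertical center line. Function and variable names are illustrative.
def center_line_endpoints(x1: float, y1: float, x2: float, y2: float):
    xt = (x1 + x2) / 2.0
    yt = y1              # top endpoint Pt
    xb = (x1 + x2) / 2.0
    yb = y2              # bottom endpoint Pb
    return (xt, yt), (xb, yb)
```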
Next, please refer to fig. 5, which is a schematic diagram of a fisheye camera model according to an embodiment of the present disclosure. Based on the fisheye camera model, the on-board system determines the distance from the feature points Pt and Pb to the image center according to their coordinate information, the image center position (u0, v0), and the normalized focal length f. Specifically, the on-board system may determine the distance from a feature point to the image center according to formula (3):

dxi = (xi - u0)/f
dyi = (yi - v0)/f ......(3)

where (xi, yi) denotes the coordinate information of the feature point, i = t, b.
Next, the on-board system determines the incident light angle of each feature point according to its distances dxi, dyi to the image center and the radial distortion parameters k1, k2. Specifically, the on-board system may calculate the incident light angle of the feature point according to formula (4):

θd = sqrt(dxi^2 + dyi^2)
θi = θd / (1 + k1*θd^2 + k2*θd^4) ......(4)

where θi denotes the incident light angle of the feature point, i = t, b.
Next, the on-board system determines the undistorted focal length of each feature point according to its distance to the image center and the normalized focal length f. Specifically, the on-board system may calculate the undistorted focal length according to formula (5):

fi = sqrt(dxi^2 + dyi^2 + f^2) ......(5)

where fi denotes the undistorted focal length of the feature point, i = t, b.
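A minimal Python sketch of formulas (3)-(5) for a single feature point follows; it assumes a CameraParams-like object as sketched above, and all function and variable names are illustrative.

```python
import math

# Sketch of formulas (3)-(5): normalized offset from the image center,
# incident light angle, and undistorted focal length for one feature point.
# cam is assumed to expose f, u0, v0, k1, k2 (see the CameraParams sketch).
def feature_point_info(xi: float, yi: float, cam):
    # formula (3): normalized distance to the image center
    dx = (xi - cam.u0) / cam.f
    dy = (yi - cam.v0) / cam.f
    # formula (4): incident light angle from the radial distortion model
    theta_d = math.sqrt(dx * dx + dy * dy)
    theta_i = theta_d / (1.0 + cam.k1 * theta_d**2 + cam.k2 * theta_d**4)
    # formula (5): undistorted focal length for this feature point
    f_i = math.sqrt(dx * dx + dy * dy + cam.f**2)
    return dx, dy, theta_d, theta_i, f_i
```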
S307: determining coordinate information of the characteristic points on a theoretical imaging plane according to the incident light angle and the undistorted focal length; the theoretical imaging plane is the image after the image is undistorted.
S309: and determining the height of the target object in the theoretical imaging plane according to the coordinate information of the characteristic points on the theoretical imaging plane.
In the embodiment of the present application, the number of feature points is 2. The on-board system determines the coordinate information of the feature points on the theoretical imaging plane, i.e., the undistorted coordinate information of the feature points, according to their incident light angles and undistorted focal lengths. The distance between the two feature points is then calculated from the two-point distance formula and determined as the height of the target object in the theoretical imaging plane.
The description is continued on the basis of the above-described alternative embodiment. Referring to fig. 6, fig. 6 is a schematic diagram illustrating a target object projected onto a theoretical imaging plane according to an embodiment of the present application. In an optional embodiment of determining the coordinate information of the feature points on the theoretical imaging plane according to the incident light angle and the undistorted focal length, the on-board system may calculate the coordinate information of a feature point on the theoretical imaging plane according to formula (6):

x'i = dxi * scale * fi
y'i = dyi * scale * fi ......(6)

where scale = tan(θi)/θd, and (x'i, y'i) denotes the coordinates of the feature point on the theoretical imaging plane, i = t, b.

In this way, the coordinates P't and P'b of the feature points on the theoretical imaging plane are obtained; the distance between P't and P'b is then calculated based on the two-point distance formula and determined as the height H1 of the target object in the theoretical imaging plane.
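A minimal Python sketch of formula (6) and of the height H1 follows; it consumes the per-feature-point values produced by the previous sketch, and all names are assumptions.

```python
import math

# Sketch of formula (6) and of the height H1 on the theoretical imaging plane.
# Inputs are the per-feature-point values from formulas (3)-(5); names illustrative.
def project_to_theoretical_plane(dx, dy, theta_d, theta_i, f_i):
    # scale = tan(theta_i)/theta_d; the limit for theta_d -> 0 is 1
    scale = math.tan(theta_i) / theta_d if theta_d != 0.0 else 1.0
    return dx * scale * f_i, dy * scale * f_i   # (x'_i, y'_i)

def height_on_plane(pt_top, pt_bottom) -> float:
    # two-point distance formula between P't and P'b, i.e. H1
    return math.hypot(pt_top[0] - pt_bottom[0], pt_top[1] - pt_bottom[1])
```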
S311: and determining the actual height of the target object according to the type of the target object.
S313: and determining the position of the target object according to the height of the target object in the theoretical imaging plane, the actual height and the undistorted focal length.
In the embodiment of the application, the on-board system estimates the actual height of the target object according to its type, and then determines the distance from the target object to the camera by the similar-triangle theorem according to the height of the target object in the theoretical imaging plane, the actual height and the undistorted focal length. Then, the position of the target object relative to the camera, i.e., the coordinate information of the target object in the camera coordinate system, is determined based on the distance of the target object from the camera and the incident light angle.
The description is continued on the basis of the above-described alternative embodiment. In an alternative embodiment of determining the position of the target object based on the height of the target object in the theoretical imaging plane, the actual height and the undistorted focal length, as shown in fig. 6, assume that the on-board system determines that the type of the target object is a car and estimates its actual height as H2. The distance of the car from the camera is then determined according to formula (7):

H1/H2 = ft/d1 = fb/d2 ......(7)

where d1 and d2 denote the distance of the target object from the camera.

Optionally, the smaller of d1 and d2 is determined as the distance from the target object to the camera, i.e., the distance of the target object along the Z axis of the camera coordinate system.
Next, the coordinate information of the target object in the camera coordinate system is calculated according to formula (8):

X = sin(θi) * dn
Y = cos(θi) * dn ......(8)

where dn denotes the distance of the target object along the Z axis of the camera coordinate system, n = 1 or 2; θi denotes the incident light angle of the feature point; and X and Y respectively denote the distances of the target object along the X and Y axes of the camera coordinate system.
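A minimal Python sketch of formulas (7) and (8) follows; here ft and fb are assumed to be the undistorted focal lengths of the two feature points and H2 the estimated real-world height for the detected type, and all names are assumptions.

```python
import math

# Sketch of formulas (7) and (8): range from the height ratio, then the
# position in the camera coordinate system. H1 is the height on the
# theoretical imaging plane, H2 the estimated real height for the target type.
def target_position(H1, H2, f_t, f_b, theta_i):
    d1 = f_t * H2 / H1          # formula (7): H1/H2 = ft/d1
    d2 = f_b * H2 / H1          # formula (7): H1/H2 = fb/d2
    dn = min(d1, d2)            # take the smaller value as the Z-axis distance
    X = math.sin(theta_i) * dn  # formula (8)
    Y = math.cos(theta_i) * dn
    return X, Y, dn
```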
In summary, the method provided by the embodiment of the present application may be applied to a vehicle-mounted system of a vehicle, and determine the position of a target object based on a vehicle-mounted camera. The vehicle-mounted camera can be any one of a fisheye camera, a wide-angle camera, a short-focus camera and a long-focus camera, the application range is wide, the algorithm is simple, and the real-time performance is good.
An embodiment of the present application further provides a device for determining a position of a target object, and fig. 7 is a schematic structural diagram of the device for determining a position of a target object provided in the embodiment of the present application, and as shown in fig. 7, the device includes:
an obtaining module 701, configured to obtain an image captured by a camera;
a first determining module 702 for determining parameters of the camera;
a second determining module 703 for determining the type of the target object and feature point information of the target object based on the image and the parameters of the camera; the characteristic point information comprises an incident light angle of the characteristic point and a focal length after distortion removal;
a third determining module 704, configured to determine coordinate information of the feature point on the theoretical imaging plane according to the incident light angle and the undistorted focal length; the theoretical imaging plane is an image subjected to image distortion removal;
the fourth determining module 705 is configured to determine the height of the target object in the theoretical imaging plane according to the coordinate information of the feature point on the theoretical imaging plane;
a fifth determining module 706, configured to determine an actual height of the target object according to the type of the target object;
a sixth determining module 707, configured to determine the position of the target object according to the height of the target object in the theoretical imaging plane, the actual height, and the undistorted focal length.
The device and method embodiments in the embodiments of the present application are based on the same application concept.
The embodiment of the application provides an electronic device, which comprises a processor and a memory, wherein at least one instruction or at least one program is stored in the memory, and the at least one instruction or the at least one program is loaded by the processor and executes the position determination method of the target object.
Embodiments of the present application provide a computer storage medium, in which at least one instruction or at least one program is stored, and the at least one instruction or the at least one program is loaded and executed by a processor to implement the position determining method for the target object.
Optionally, in this embodiment, the storage medium may be located in at least one network server of a plurality of network servers of a computer network. Optionally, in this embodiment, the storage medium may include, but is not limited to: a USB flash drive, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, an optical disk, and other media capable of storing program code.
As can be seen from the above embodiments of the method, apparatus, electronic device, or storage medium for determining the position of a target object provided in the present application, an image captured by a camera is acquired; parameters of the camera are determined; the type of the target object and feature point information of the target object are determined based on the image and the parameters of the camera, the feature point information comprising the incident light angle of the feature point and the undistorted focal length; coordinate information of the feature points on a theoretical imaging plane is determined according to the incident light angle and the undistorted focal length, the theoretical imaging plane being the image after distortion removal; the height of the target object in the theoretical imaging plane is determined according to the coordinate information of the feature points on the theoretical imaging plane; the actual height of the target object is determined according to the type of the target object; and the position of the target object is determined according to the height of the target object in the theoretical imaging plane, the actual height and the undistorted focal length. In this way, the accuracy of detecting the target object with a monocular camera can be improved, while the method has low complexity and good real-time performance.
It should be noted that: the sequence of the embodiments of the present application is only for description, and does not represent the advantages and disadvantages of the embodiments. And specific embodiments thereof have been described above. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims may be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may also be possible or may be advantageous.
The embodiments in the present specification are described in a progressive manner, and the same and similar parts among the embodiments are referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, for the apparatus embodiment, since it is substantially similar to the method embodiment, the description is relatively simple, and for the relevant points, reference may be made to the partial description of the method embodiment.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program instructing relevant hardware, where the program may be stored in a computer-readable storage medium, and the above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
The above description is only exemplary of the present application and should not be taken as limiting the present application, as any modification, equivalent replacement, or improvement made within the spirit and principle of the present application should be included in the protection scope of the present application.

Claims (10)

1. A method for determining a position of a target object, comprising:
acquiring an image shot by a camera;
determining parameters of the camera;
determining a type of a target object and feature point information of the target object based on the image and parameters of the camera; the characteristic point information comprises an incident light angle and a focal length after distortion removal of the characteristic point;
determining coordinate information of the characteristic points on a theoretical imaging plane according to the incident light angle and the undistorted focal length; the theoretical imaging plane is an image subjected to image distortion removal;
determining the height of the target object in a theoretical imaging plane according to the coordinate information of the characteristic points on the theoretical imaging plane;
determining the actual height of the target object according to the type of the target object;
and determining the position of the target object according to the height of the target object in the theoretical imaging plane, the actual height and the undistorted focal length.
2. The method of claim 1, wherein the parameters of the camera include a normalized focal length, image center position information, and radial distortion parameters;
the determining a type of a target object and feature point information of the target object based on the image and the parameters of the camera includes:
determining a type and bounding box of the target object based on the image;
determining coordinate information of the feature points based on the bounding box;
determining the distance from the characteristic point to the center of the image according to the coordinate information of the characteristic point, the position information of the center of the image and the normalized focal length;
determining the incident light angle of the characteristic point according to the distance from the characteristic point to the image center and the radial distortion parameter;
and determining the focal length of the characteristic point after distortion removal according to the distance from the characteristic point to the center of the image and the normalized focal length.
3. The method of claim 1, wherein the number of feature points is 2;
the determining the height of the target object in a theoretical imaging plane according to the coordinate information of the feature points on the theoretical imaging plane comprises:
determining the distance between the two characteristic points according to the coordinate information of the two characteristic points on the theoretical imaging plane;
determining a distance between the two feature points as a height of the target object within the theoretical imaging plane.
4. The method of claim 1, wherein determining the position of the target object based on the height of the target object in the theoretical imaging plane, the actual height, and the undistorted focal length comprises:
determining the distance from the target object to the camera based on a similar triangle theorem according to the height of the target object in the theoretical imaging plane, the actual height and the undistorted focal length;
determining a position of the target object based on the camera based on the distance of the target object to the camera and the incident light angle.
5. A target object position determining apparatus, comprising:
the acquisition module is used for acquiring an image shot by the camera;
a first determining module for determining parameters of the camera;
a second determination module for determining a type of a target object and feature point information of the target object based on the image and the parameters of the camera; the characteristic point information comprises an incident light angle and a focal length after distortion removal of the characteristic point;
the third determining module is used for determining the coordinate information of the characteristic point on a theoretical imaging plane according to the incident light angle and the undistorted focal length; the theoretical imaging plane is an image subjected to image distortion removal;
the fourth determining module is used for determining the height of the target object in a theoretical imaging plane according to the coordinate information of the characteristic points on the theoretical imaging plane;
a fifth determining module, configured to determine an actual height of the target object according to the type of the target object;
and the sixth determining module is used for determining the position of the target object according to the height of the target object in the theoretical imaging plane, the actual height and the undistorted focal length.
6. The apparatus of claim 5, wherein the parameters of the camera include a normalized focal length, image center position information, and radial distortion parameters;
the second determination module is further configured to determine a type and a bounding box of the target object based on the image; determining coordinate information of the feature points based on the bounding box; determining the distance from the characteristic point to the center of the image according to the coordinate information of the characteristic point, the position information of the center of the image and the normalized focal length; determining the incident light angle of the characteristic point according to the distance from the characteristic point to the image center and the radial distortion parameter; and determining the focal length of the characteristic point after distortion removal according to the distance from the characteristic point to the center of the image and the normalized focal length.
7. The apparatus of claim 5, wherein the number of feature points is 2;
the fourth determining module is further configured to determine a distance between the two feature points according to coordinate information of the two feature points on the theoretical imaging plane; determining a distance between the two feature points as a height of the target object within the theoretical imaging plane.
8. The apparatus of claim 5,
the sixth determining module is further configured to determine a distance from the target object to the camera based on a similar triangle theorem according to the height of the target object in the theoretical imaging plane, the actual height, and the undistorted focal length; determining a position of the target object based on the camera based on the distance of the target object to the camera and the incident light angle.
9. An electronic device, characterized in that the electronic device comprises a processor and a memory, wherein at least one instruction or at least one program is stored in the memory, and the at least one instruction or the at least one program is loaded by the processor and executes the method for determining the position of the target object according to any one of claims 1-4.
10. A computer storage medium having at least one instruction or at least one program stored therein, the at least one instruction or the at least one program being loaded and executed by a processor to implement the method of position determination of a target object according to any one of claims 1 to 4.
CN201911360878.5A 2019-12-25 2019-12-25 Target object position determining method and device, electronic equipment and storage medium Active CN113034605B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911360878.5A CN113034605B (en) 2019-12-25 2019-12-25 Target object position determining method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911360878.5A CN113034605B (en) 2019-12-25 2019-12-25 Target object position determining method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN113034605A true CN113034605A (en) 2021-06-25
CN113034605B CN113034605B (en) 2024-04-16

Family

ID=76458480

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911360878.5A Active CN113034605B (en) 2019-12-25 2019-12-25 Target object position determining method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113034605B (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115802159A (en) * 2023-02-01 2023-03-14 北京蓝色星际科技股份有限公司 Information display method and device, electronic equipment and storage medium
WO2023108385A1 (en) * 2021-12-14 2023-06-22 合肥英睿***技术有限公司 Target object positioning method and apparatus, and device and computer-readable storage medium
CN117111046A (en) * 2023-10-25 2023-11-24 深圳市安思疆科技有限公司 Distortion correction method, system, device and computer readable storage medium

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101577002A (en) * 2009-06-16 2009-11-11 天津理工大学 Calibration method of fish-eye lens imaging system applied to target detection
WO2019000945A1 (en) * 2017-06-28 2019-01-03 京东方科技集团股份有限公司 On-board camera-based distance measurement method and apparatus, storage medium, and electronic device

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101577002A (en) * 2009-06-16 2009-11-11 天津理工大学 Calibration method of fish-eye lens imaging system applied to target detection
WO2019000945A1 (en) * 2017-06-28 2019-01-03 京东方科技集团股份有限公司 On-board camera-based distance measurement method and apparatus, storage medium, and electronic device

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
孙少杰; 杨晓东: "Target azimuth measurement method based on computer vision", 火力与指挥控制 (Fire Control & Command Control), no. 03 *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023108385A1 (en) * 2021-12-14 2023-06-22 合肥英睿***技术有限公司 Target object positioning method and apparatus, and device and computer-readable storage medium
CN115802159A (en) * 2023-02-01 2023-03-14 北京蓝色星际科技股份有限公司 Information display method and device, electronic equipment and storage medium
CN115802159B (en) * 2023-02-01 2023-04-28 北京蓝色星际科技股份有限公司 Information display method and device, electronic equipment and storage medium
CN117111046A (en) * 2023-10-25 2023-11-24 深圳市安思疆科技有限公司 Distortion correction method, system, device and computer readable storage medium
CN117111046B (en) * 2023-10-25 2024-01-12 深圳市安思疆科技有限公司 Distortion correction method, system, device and computer readable storage medium

Also Published As

Publication number Publication date
CN113034605B (en) 2024-04-16

Similar Documents

Publication Publication Date Title
US20180300901A1 (en) Camera calibration method, recording medium, and camera calibration apparatus
JP3859574B2 (en) 3D visual sensor
JP2874710B2 (en) 3D position measuring device
JP4825980B2 (en) Calibration method for fisheye camera.
JP6767998B2 (en) Estimating external parameters of the camera from the lines of the image
CN112444242B (en) Pose optimization method and device
CN113034605B (en) Target object position determining method and device, electronic equipment and storage medium
JP4803449B2 (en) On-vehicle camera calibration device, calibration method, and vehicle production method using this calibration method
JP4825971B2 (en) Distance calculation device, distance calculation method, structure analysis device, and structure analysis method.
Catalucci et al. Measurement of complex freeform additively manufactured parts by structured light and photogrammetry
JP6328327B2 (en) Image processing apparatus and image processing method
JP5070435B1 (en) Three-dimensional relative coordinate measuring apparatus and method
CN111879235A (en) Three-dimensional scanning detection method and system for bent pipe and computer equipment
EP2551633B1 (en) Three dimensional distance measuring device and method
JP2008096162A (en) Three-dimensional distance measuring sensor and three-dimensional distance measuring method
CN109741241B (en) Fisheye image processing method, device, equipment and storage medium
CN112837207B (en) Panoramic depth measurement method, four-eye fisheye camera and binocular fisheye camera
JP2010181919A (en) Three-dimensional shape specifying device, three-dimensional shape specifying method, three-dimensional shape specifying program
CN114862973B (en) Space positioning method, device and equipment based on fixed point location and storage medium
CN112348890B (en) Space positioning method, device and computer readable storage medium
JP2013205175A (en) Device, method and program for recognizing three-dimensional target surface
CN110807431A (en) Object positioning method and device, electronic equipment and storage medium
CN110825079A (en) Map construction method and device
CN117848234A (en) Object scanning mechanism, method and related equipment
JP5648159B2 (en) Three-dimensional relative coordinate measuring apparatus and method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant