CN114255272A - Positioning method and device based on target image - Google Patents

Positioning method and device based on target image

Info

Publication number
CN114255272A
Authority
CN
China
Prior art keywords
energy
curve
target image
target
coordinate data
Prior art date
Legal status
Pending
Application number
CN202011024413.5A
Other languages
Chinese (zh)
Inventor
彭登
陶永康
韩定
梁炜岳
卢佳
江敏瑶
黄焯豪
Current Assignee
Guangdong Bozhilin Robot Co Ltd
Original Assignee
Guangdong Bozhilin Robot Co Ltd
Priority date
Filing date
Publication date
Application filed by Guangdong Bozhilin Robot Co Ltd filed Critical Guangdong Bozhilin Robot Co Ltd
Priority to CN202011024413.5A
Publication of CN114255272A
Legal status: Pending

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/70 - Determining position or orientation of objects or cameras
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01B - MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00 - Measuring arrangements characterised by the use of optical techniques
    • G01B11/002 - Measuring arrangements characterised by the use of optical techniques for measuring two or more coordinates

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

The application discloses a positioning method and device based on a target image, an electronic device, and a storage medium. The method comprises the following steps: acquiring a target image and initial coordinate data of a target; generating section lines on the target image according to the initial coordinate data and extracting a gray value curve for each section line; calculating the energy of each section line with an energy function based on the gray value curve; and generating a total energy distribution curve from the section-line energies and identifying the final coordinate data of the target from that curve. This resolves the technical problems of the related art, in which target image positioning is time-consuming, costly, and demanding of image quality; it improves target positioning accuracy and greatly increases data processing efficiency and positioning speed.

Description

Positioning method and device based on target image
Technical Field
The present disclosure relates to the field of target positioning technologies, and in particular, to a positioning method and apparatus based on a target image.
Background
Target positioning is widely used in the fields of ranging and calibration, where accurate localization is the key step. However, vision-based positioning is limited by resolution: as the distance increases, image quality gradually degrades, the image becomes increasingly blurred, and aliasing becomes severe. Insufficient resolution directly leads to positioning errors too large to be usable, or limits the measurable distance.
In the related art, super-resolution techniques are commonly used to restore a low-resolution image to an ultra-high-resolution one so as to improve positioning accuracy, or sub-pixel corner detection is used to achieve high-precision sub-pixel positioning.
However, super-resolution is both time-consuming and costly, while sub-pixel corner detection places severe demands on image quality: corners may go undetected in strong outdoor light, multiple false corners may be detected, or corners may be eroded. No good solution currently exists in the industry, and one is urgently needed.
Summary of the Application
The application provides a positioning method and device based on a target image, and aims to solve the technical problems of more time consumption, higher cost, high requirement on image quality and the like in the positioning of the target image in the related technology.
The embodiment of the first aspect of the present application provides a target image-based positioning method, including the following steps:
acquiring a target image and initial coordinate data of a target;
generating a section line on the target image according to the initial coordinate data, and extracting a gray value curve of the section line;
calculating the energy of the transversal line by adopting an energy function based on the gray value curve;
generating a total energy distribution curve according to the energy of the sectional line, and identifying final coordinate data of the target according to the total energy distribution curve.
Optionally, the generating a section line on the target image according to the initial coordinate data comprises: adding or subtracting a preset pixel value to or from the initial coordinate data to generate two section lines in the X-axis direction and two section lines in the Y-axis direction on the target image.
Optionally, the calculating the energy of the section line by using an energy function based on the gray value curve comprises:
generating an elastic force distribution curve through a preset mechanical model based on the gray value curve, and calculating curve energy of the elastic force distribution curve;
calculating an energy of the target image;
and summing the curve energy and the energy of the target image to obtain the energy of the transversal line.
Optionally, the generating an elastic force distribution curve through a preset mechanical model based on the gray value curve includes:
and calculating the first derivative of the gray value curve to obtain the elastic force distribution curve.
Optionally, the formula for calculating the curve energy is:
$E_{curve} = \int f(x)\,dx$
wherein, F represents the elastic force corresponding to the elastic force distribution curve, F represents the curve function of the elastic force distribution curve, and x represents the coordinate value in the curve function.
Optionally, the calculation formula of the image energy is:
$E_{image} = -\left|\nabla(G_\sigma * I)\right|^2$
wherein $E_{image}$ is the image energy, $G_\sigma$ is a Gaussian with standard deviation $\sigma$, and $I$ is the gray value of the image.
Optionally, the calculating the energy of the section line by using an energy function based on the gray value curve comprises:
and smoothly fitting the gray value curve, and calculating the energy of the transversal line by adopting an energy function based on the curve after smooth fitting.
Optionally, the smoothly fitting the gray value curve includes:
taking an original positioning point obtained by the initial coordinate data as a center, and intercepting any section of interval to obtain the fitting limit interval;
and performing smooth fitting on the gray value curve according to the fitting limit interval.
Optionally, the generating a total energy profile from the energy of the transversal and identifying final coordinate data of the target from the total energy profile comprises:
adding the energies of the transversal lines in the X-axis direction and generating a total energy distribution curve in the X-axis direction, adding the energies of the transversal lines in the Y-axis direction and generating a total energy distribution curve in the Y-axis direction, and determining final coordinate data of the target based on the minimum value of the total energy distribution curve in the X-axis direction and the minimum value of the total energy distribution curve in the Y-axis direction.
The second aspect of the present application provides a target image-based positioning device, including:
the acquisition module acquires a target image and initial coordinate data of a target;
the extraction module generates a section line on the target image according to the initial coordinate data and extracts a gray value curve of the section line;
the calculating module is used for calculating the energy of the section line by adopting an energy function based on the gray value curve;
and the identification module is used for generating a total energy distribution curve according to the energy of the section line and identifying the final coordinate data of the target according to the total energy distribution curve.
An embodiment of a third aspect of the present application provides an electronic device, including: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to perform the target image-based positioning method described in the above embodiments.
A fourth aspect of the present application provides a computer-readable storage medium storing computer instructions for causing the computer to execute the target image-based localization method according to the above embodiments.
By extracting the elastic force distribution curve from the target image and the initial coordinate data, smoothly fitting it to generate an energy distribution curve, and identifying the final coordinate data of the target from that curve, there is no need to change the quality of the input target image with related-art techniques. This resolves the technical problems of the related art, namely long processing time, high cost, and high image-quality requirements, improves target positioning accuracy, and greatly increases data processing efficiency and positioning speed.
Additional aspects and advantages of the present application will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the present application.
Drawings
The foregoing and/or additional aspects and advantages of the present application will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
fig. 1 is a flowchart of a target image-based localization method according to an embodiment of the present application;
FIG. 2 is a schematic view of a predetermined mechanical model according to one embodiment of the present application;
FIG. 3 is a schematic view of a mechanical model analysis of a target image-based localization method according to an embodiment of the present application;
FIG. 4 is a schematic diagram of a fitted curve according to one embodiment of the present application;
FIG. 5 is a schematic illustration of a curve analysis performed according to one embodiment of the present application;
FIG. 6 is a schematic comparison of visual effects of target localization according to one embodiment of the present application;
FIG. 7 is a statistical comparison of errors generated by target localization according to one embodiment of the present application;
FIG. 8 is a detailed flow chart of a method for target image based localization according to an embodiment of the present application;
FIG. 9 is a schematic diagram illustrating a method for target image-based localization according to an embodiment of the present application;
FIG. 10 is an exemplary diagram of a target image based positioning device according to an embodiment of the present application;
fig. 11 is an exemplary diagram of an electronic device according to an embodiment of the application.
Detailed Description
Reference will now be made in detail to embodiments of the present application, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the drawings are exemplary and intended to be used for explaining the present application and should not be construed as limiting the present application.
The following describes a target image-based positioning method and apparatus according to embodiments of the present application with reference to the drawings. In order to solve the technical problems mentioned in the Background section, namely the long processing time, high cost, and high image-quality requirements of the related art, the present application provides a positioning method based on a target image.
Specifically, fig. 1 is a schematic flowchart of a target image-based positioning method according to an embodiment of the present disclosure.
As shown in fig. 1, the target image-based localization method includes the following steps:
in step S101, a target image and initial coordinate data of a target are acquired.
It is understood that the target image may be a region of interest (ROI) extracted from the raw target image. In machine vision and image processing, an ROI is a region to be processed, delineated from the image by a box, circle, ellipse, irregular polygon, or the like; operators and functions in machine vision software such as Halcon, OpenCV, and Matlab are commonly used to obtain the ROI before further processing. The initial coordinate data of the target can be obtained from the target image by the original positioning algorithm, in the same way as in the related art; a detailed description is omitted here to avoid redundancy. The initial coordinates deviate somewhat from the ideal positioning midpoint of the ROI, and the present scheme processes the initial coordinates to obtain final coordinates closer to that ideal midpoint.
In step S102, a section line is generated on the target image from the initial coordinate data, and a gray scale value curve of the section line is extracted.
Optionally, in some embodiments, generating the section lines on the target image according to the initial coordinate data comprises: adding or subtracting a preset pixel value to or from the initial coordinate data to generate two section lines in the X-axis direction and two in the Y-axis direction on the target image.
Specifically, assuming the initial coordinate data is (x, y) and the preset pixel value is 25, the two section lines in the X-axis direction may be the (x+25)-th and (x-25)-th lines, and the two in the Y-axis direction the (y+25)-th and (y-25)-th lines; the gray value curve of each section line can then be extracted.
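As a rough sketch of this step (the function and variable names are illustrative, not from the patent), the four section lines can be read out as row and column profiles of the ROI image:

```python
import numpy as np

def section_lines(image, x, y, offset=25):
    """Extract gray-value curves along two horizontal and two vertical
    section lines offset by +/- `offset` pixels from the initial point
    (x, y).  Illustrative sketch only."""
    x_dir = (image[y - offset, :], image[y + offset, :])  # lines in the X direction
    y_dir = (image[:, x - offset], image[:, x + offset])  # lines in the Y direction
    return x_dir, y_dir

# toy 100x100 gray image
img = np.arange(100 * 100, dtype=float).reshape(100, 100)
x_lines, y_lines = section_lines(img, 50, 50)
```

Each returned profile is a one-dimensional gray value curve from which the energies below can be computed.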
It should be noted that the preset pixel value can be determined from the total pixel extent of the target image; for example, if the target image is 100 pixels across, one quarter of that, 25 pixels, may be taken as the preset value. It can be preset by the user or obtained through a limited number of computer simulations, and is not specifically limited here.
In step S103, the energy of the sectional line is calculated using an energy function based on the gradation value curve.
Optionally, in some embodiments, calculating the energy of the section line using the energy function based on the gray value curve comprises: generating an elastic force distribution curve through a preset mechanical model based on the gray value curve, and calculating curve energy of the elastic force distribution curve; calculating the energy of the target image; and summing the curve energy and the energy of the target image to obtain the energy of the transversal line.
The preset mechanical model may be an infinitely long vertical metal rod passing through the point to be adjusted, able to move left and right (along the x-axis) as a whole under the action of two elastic forces. Because the rod is infinitely long, it can be simplified to a mass point, i.e. the forces can only translate it, not deflect it. Referring to fig. 2 and fig. 3: in fig. 2 the dotted lines are the image section lines and the central dot O is the original positioning point; in fig. 3, d represents a length, and F1 and F2 are elastic forces, F1 coming from the section line in the upper part of the image (the upper dotted line in fig. 3) and F2 from the lower dotted line. Under the two forces the rod moves left and right (the point to be adjusted moves with it), but the magnitudes of the forces change as it moves; equilibrium is reached only when the two forces are exactly equal.
Further, in some embodiments, generating the elastic force distribution curve through the preset mechanical model based on the gray value curve includes: intercepting the gray value curve along a section line of the gray image and calculating its first derivative to obtain the elastic force distribution curve.
It will be appreciated that, by this definition, the elastic force is the first derivative (gradient) of the gray value curve:
$F = \frac{dI}{dx}$
Therefore, the gray value curve along the image section line is extracted, as shown in curve I in fig. 3, and its first-order derivative is taken to obtain the elastic force distribution curve, as shown in curve II in fig. 3, where upward indicates positive, downward indicates negative, and the sign gives the direction of the force.
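A minimal sketch of this derivative step (np.gradient's central differences are one reasonable discretisation; the patent does not specify one):

```python
import numpy as np

# Toy gray value curve along a section line: flat, a rising edge, a peak,
# a falling edge, flat again.
gray = np.array([10., 10., 12., 30., 60., 30., 12., 10., 10.])

# Elastic force distribution curve = first derivative of the gray curve.
force = np.gradient(gray)  # positive on the rising edge, negative on the falling edge
```

The sign of `force` is the direction of the force, matching curve II in fig. 3, and it vanishes at the peak of the gray curve.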
Optionally, in some embodiments, calculating the energy of the section line using the energy function based on the gray value curve comprises: and smoothly fitting the gray value curve, and calculating the energy of the transversal line by adopting an energy function based on the curve subjected to smooth fitting.
Wherein, in some embodiments, smoothly fitting the gray value curve comprises: taking an original positioning point obtained by the initial coordinate data as a center, and intercepting any section of interval to obtain a fitting limit interval; and smoothly fitting the gray value curve according to the fitting limit interval.
It is understood that fitting the gray value curve can effectively circumvent the resolution limit. Since the resolution of images acquired by the measuring device is usually not high (the image shown in fig. 2 has only about 300,000 pixels), smoothly fitting the low-resolution pixels input to the algorithm is necessary to improve positioning accuracy.
In order to reduce computation and improve efficiency, only the pixels on the image section lines are considered (upper, lower, left and right, four in total; horizontal fine adjustment considers only the upper and lower section lines, and vertical fine adjustment only the left and right ones). Another benefit of fitting is that image pixels are discrete (pixel coordinates are integers), whereas the fitted curve is continuous and therefore supports sub-pixel accuracy. The algorithm of the embodiment of the present application may adopt polynomial fitting; considering computation cost and fitting quality, the order n is chosen as 9, as in the following formula:
$y = a_1 x + a_2 x^2 + a_3 x^3 + \cdots + a_9 x^9$
where the $a_i$ are the polynomial coefficients.
It should be noted that the fitting is limited to an interval: when fitting the gray value curve on a section line, only a certain interval centred on the original positioning point is extracted, rather than the complete section line. Because of noise, the fitting result deteriorates if the complete image line, or a large interval of it, is used; as shown in fig. 4, the larger the fitting interval, the larger the deviation at the two ends of the fitted curve. In fig. 4, the white line is the fitted curve, the gray line is the gray value curve, and roi_width is the width of the image input to the algorithm; the upper left shows a fit of the entire gray curve, and the lower right a fit over 1/10 of its length (10 pixels extending to each side of the original positioning point). The more concentrated the fitting interval is around the original positioning point, the closer the fitted curve is to the gray curve; a small fitting interval also avoids the influence of imperfections at the edge of the target ROI.
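The interval-limited degree-9 fit can be sketched as follows (hedged: the windowing and the centring of the abscissa, added here to keep the fit well conditioned, are implementation choices, not details from the patent):

```python
import numpy as np

def fit_gray_curve(gray, center, half_width=10, order=9):
    """Fit a degree-`order` polynomial to the gray value curve over a
    small window centred on the original positioning point."""
    lo = max(0, center - half_width)
    hi = min(len(gray), center + half_width + 1)
    xs = np.arange(lo, hi)
    # centre the abscissa on the anchor point before fitting
    coeffs = np.polyfit(xs - center, gray[lo:hi], order)
    return np.poly1d(coeffs), xs

# synthetic gray curve: a parabola with its vertex at pixel 50
gray = ((np.arange(100) - 50.0) ** 2) / 100.0
poly, xs = fit_gray_curve(gray, center=50)
```

Evaluating `poly` at fractional offsets from the anchor point gives the continuous, sub-pixel gray values that the discrete pixels lack.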
After selecting a suitable limiting interval (the embodiment of the present application takes 1/10 of roi_width as an example), the result can be as shown in fig. 5. In fig. 5(b), line A is the energy distribution curve, and the horizontal coordinate corresponding to its trough (minimum) is the fine-tuned final x coordinate. The fitting interval is shown in fig. 5(a), where the black marked interval and the broken line are the section lines; in fig. 5(b) the white and gray curves substantially overlap. In fig. 5(c), the adjustment from point B to point C by the fine-tuning algorithm shows a clear improvement; the coordinate of point C corresponds to the minimum of the energy distribution curve in fig. 5(b).
Wherein, in some embodiments, the curve energy is calculated by the formula:
$E_{curve} = \int f(x)\,dx$
wherein, F represents the elastic force corresponding to the elastic force distribution curve, F represents the curve function of the elastic force distribution curve, and x represents the coordinate value in the curve function.
Optionally, in some embodiments, the calculation formula of the image energy is:
$E_{image} = -\left|\nabla(G_\sigma * I)\right|^2$
wherein $E_{image}$ is the image energy, $G_\sigma$ is a Gaussian distribution with standard deviation $\sigma$, and $I$ is the gray value of the image.
Thus, summing the curve energy and the energy of the target image yields the energy of the section line, i.e.:
$E = \alpha E_{curve} + \beta E_{image}$
wherein $\alpha$ and $\beta$ are weight coefficients summing to 1; their values are usually tuned to the situation, and unless otherwise stated they are split evenly, i.e. both equal 0.5.
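The weighted combination of the two energy terms might look like the following (a sketch under assumptions: the running integral of the force, and the small Gaussian kernel used for smoothing, are illustrative choices rather than the patent's exact computation):

```python
import numpy as np

def line_energy(gray, alpha=0.5, beta=0.5, sigma=1.0):
    """Per-position energy of one section line: a curve term from the
    elastic force (first derivative) plus an image term of the form
    -|grad(G_sigma * I)|^2.  Hedged sketch, not the patent's code."""
    force = np.gradient(gray)
    e_curve = np.abs(np.cumsum(force))          # running integral of the force
    # small Gaussian kernel for G_sigma * I (assumed discretisation)
    half = np.arange(-3, 4)
    kernel = np.exp(-0.5 * (half / sigma) ** 2)
    kernel /= kernel.sum()
    smoothed = np.convolve(gray, kernel, mode="same")
    e_image = -np.gradient(smoothed) ** 2
    return alpha * e_curve + beta * e_image

gray = np.array([10., 10., 12., 30., 60., 30., 12., 10., 10.])
energy = line_energy(gray)
```

With alpha = beta = 0.5 the two terms contribute equally, matching the default weighting described above.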
In step S104, a total energy distribution curve is generated according to the energy of the sectional line, and final coordinate data of the target is identified according to the total energy distribution curve.
It can be understood that the energy of each section line can be calculated through step S103 above. The energies of the section lines in the X-axis direction are then added to generate the total energy distribution curve in the X-axis direction:
$E_{horizon} = E_{up} + E_{down}$
Similarly, the energies of the section lines in the Y-axis direction are added to generate the total energy distribution curve in that direction. The final coordinate data of the target is determined from the minima of the two total energy distribution curves: the embodiment of the present application obtains the point $(x', y) = \arg\min(E_{horizon})$ at which the horizontal energy is minimal, obtains the centre point in the vertical direction in the same way, and finally determines the final coordinate data $(x', y')$ of the target.
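The final selection step reduces to an argmin over the summed curves, as in this toy sketch (the energy values are made up for illustration):

```python
import numpy as np

# Energies of the two X-direction section lines (toy values).
e_up = np.array([9., 4., 1., 3., 8.])
e_down = np.array([7., 3., 1., 4., 9.])

# Total energy distribution curve in the X direction, and the fine-tuned
# x coordinate at its minimum.  The y coordinate is obtained the same way
# from the two Y-direction curves.
e_horizon = e_up + e_down
x_final = int(np.argmin(e_horizon))
```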
In order to further understand the target image-based localization method according to the embodiments of the present application, the following description is further provided with reference to fig. 6 to 9.
For example, as shown in fig. 6, fig. 6(a) is the original positioning diagram and fig. 6(b) is an enlarged view of the target ROI, where point B is the original positioning point obtained by the original positioning algorithm using corner detection and ellipse fitting, and point C is the point after fine adjustment according to the embodiment of the present application; fig. 6(c) is the ideal target pattern.
Specifically, 1K samples collected by the measuring equipment were verified, comparing the original positioning accuracy with the accuracy after fine-tuning-model processing. As can be seen from fig. 6(b), point C is significantly better than point B. The deviation of the original positioning point, i.e. the distance between point B and the centre of the ideal target, is mainly due to degraded image quality: in fig. 6(b), for example, the original centre point is eroded by white into two blurred corners, and the originally crossed lines become two parallel lines. Under different illumination and at different distances the degree of blurring differs, which makes the positioning accuracy uncertain.
As shown in fig. 7, fig. 7 is a statistical comparison. Fig. 7(a) plots the original positioning accuracy against measured distance; bar D represents 4x magnification (digital zoom) and bar E represents 8x magnification. The positioning error increases with distance; within 15 meters the physical error is within 2 mm (i.e. the pixel error times the physical length in mm represented by a single pixel). Fig. 7(b) shows the error distribution after fine adjustment: the error is stable within 0.6 mm over the range 1-15 m, averaging 0.46 mm, a 4x accuracy improvement over the original positioning. The dashed line in fig. 7(b) is the error trend line; the error does not grow with distance and is quite stable.
It can be seen that the accuracy improvement after the fine adjustment is very obvious.
As shown in fig. 8 and 9, fig. 8 is a detailed flowchart of the trimming algorithm, and fig. 9 is a simplified diagram of the principle of the trimming algorithm. Wherein the sources of the ROI and the original location coordinates are not considered by the algorithm.
Specifically, the embodiment of the present application may first obtain the target image and the initial coordinate data (x, y) of the target: an ROI is extracted from the raw target image and its gray image obtained. Taking the gray image and the initial coordinates as input, the elastic force distribution curve is generated through the preset mechanical model, corresponding to the input layer in fig. 8 and the constructed mechanical model in fig. 9. The gray value curve is then intercepted along a section line of the gray image and its first derivative calculated to obtain the elastic force distribution curve; this curve is smoothly fitted with interval limitation to generate the energy distribution curve, corresponding to the algorithm core in fig. 8 and the selected fitting-limit region in fig. 9. Finally, the final coordinate data of the target is output at the energy minimum point, corresponding to the output layer in fig. 8 and the energy minimum point and final positioning coordinates in fig. 9. Thus, without changing the imaging quality of the input target, positioning accuracy is improved by this additional post-processing module; the algorithm processes at about 100 ms/image, has essentially no significant effect on measurement speed, and is suitable for use on mobile embedded platforms.
According to the positioning method based on the target image provided by the embodiment of the present application, the elastic force distribution curve is extracted from the target image and the initial coordinate data, smoothly fitted to generate the energy distribution curve, and the final coordinate data of the target is identified from that curve, without needing related-art techniques to change the imaging quality of the input target. This resolves the technical problems of the related art, namely long processing time, high cost, and high image-quality requirements, improves target positioning accuracy, and greatly increases data processing efficiency and positioning speed.
Next, a target image-based positioning device proposed according to an embodiment of the present application is described with reference to the drawings.
Fig. 10 is a block diagram of a target image-based positioning device according to an embodiment of the present application.
As shown in fig. 10, the target image-based positioning apparatus 10 includes: an acquisition module 100, an extraction module 200, a calculation module 300 and an identification module 400.
The acquiring module 100 acquires a target image and initial coordinate data of a target;
the extraction module 200 generates a section line on the target image according to the initial coordinate data, and extracts a gray value curve of the section line;
the calculating module 300 is configured to calculate the energy of the transversal line by using an energy function based on the gray value curve;
the identification module 400 is configured to generate a total energy distribution curve according to the energy of the section line and identify final coordinate data of the target according to the total energy distribution curve.
Optionally, in some embodiments, the extraction module 200 comprises:
and the generating unit is used for generating two sections in the X-axis direction and two sections in the Y-axis direction on the target image after adding or subtracting a preset pixel value from the initial coordinate data.
Optionally, in some embodiments, the calculation module 300 comprises: a first calculation unit and a second calculation unit.
The first calculation unit is used for generating an elastic force distribution curve through a preset mechanical model based on the gray value curve and calculating curve energy of the elastic force distribution curve;
a second calculation unit for calculating an energy of the target image;
and the acquisition unit is used for summing the curve energy and the energy of the target image to obtain the energy of the transversal line.
Optionally, in some embodiments, the curve energy is calculated by the formula:
$E_{curve} = \int f(x)\,dx$
wherein F is an elastic force.
Optionally, in some embodiments, the first computing unit is further configured to:
and calculating the first derivative of the gray value curve to obtain an elastic force distribution curve.
Optionally, in some embodiments, the calculation formula of the image energy is:
E_image = -|∇(G_σ ∗ I)|²  [reconstructed as the classical active-contour edge energy; the original formula was published only as an image]
where E_image is the image energy, G_σ is a Gaussian kernel with standard deviation σ, ∗ denotes convolution, and I is the gray value of the image.
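Under the reading that this is the classical active-contour edge energy E_image = -|∇(G_σ ∗ I)|² (an assumption, since the original formula was published as an image), one plausible implementation:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def image_energy(image, sigma=1.0):
    """Image energy as the negative squared gradient magnitude of the
    Gaussian-smoothed image: E = -|grad(G_sigma * I)|^2."""
    smoothed = gaussian_filter(np.asarray(image, dtype=float), sigma)
    gy, gx = np.gradient(smoothed)
    return -(gx ** 2 + gy ** 2)
```

A uniform image yields zero energy everywhere, while strong edges yield large negative values, so minimizing this energy attracts the positioning result toward edges.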
Optionally, in some embodiments, the calculation module 300 further comprises:
a fitting unit configured to smoothly fit the gray value curve, and to calculate the energy of the section line by using the energy function based on the smoothed curve.
Optionally, in some embodiments, the fitting unit is configured to:
take an original positioning point obtained from the initial coordinate data as a center, and intercept an interval of arbitrary length to obtain a fitting limit interval; and
smoothly fit the gray value curve within the fitting limit interval.
Optionally, in some embodiments, the identification module 400 comprises:
a determination unit configured to add the energies of the section lines in the X-axis direction to generate a total energy distribution curve in the X-axis direction, add the energies of the section lines in the Y-axis direction to generate a total energy distribution curve in the Y-axis direction, and determine the final coordinate data of the target based on the minimum value of the total energy distribution curve in the X-axis direction and the minimum value of the total energy distribution curve in the Y-axis direction.
It should be noted that the foregoing explanation of the embodiment of the target image-based positioning method is also applicable to the target image-based positioning apparatus of this embodiment, and is not repeated here.
With the target image-based positioning apparatus provided by the embodiments of the present application, the elastic force distribution curve is extracted from the target image and the initial coordinate data, an energy distribution curve is generated after the elastic force distribution curve is smoothly fitted, and the final coordinate data of the target is then identified from the energy distribution curve. Because the imaging quality of the input target does not need to be improved by related-art techniques, the apparatus avoids the large time consumption, high cost, and high image-quality requirements of target image positioning in the related art, improves target positioning accuracy, and greatly increases data processing efficiency and target image positioning speed.
Fig. 11 is a schematic structural diagram of an electronic device according to an embodiment of the present application. The electronic device may include:
a memory 1201, a processor 1202, and a computer program stored on the memory 1201 and executable on the processor 1202.
The processor 1202, when executing the program, implements the target image-based localization method provided in the above-described embodiments.
The electronic device further includes:
a communication interface 1203 for communication between the memory 1201 and the processor 1202.
A memory 1201 for storing computer programs executable on the processor 1202.
The memory 1201 may include high-speed RAM, and may also include non-volatile memory, such as at least one magnetic disk storage device.
If the memory 1201, the processor 1202 and the communication interface 1203 are implemented independently, the communication interface 1203, the memory 1201 and the processor 1202 may be connected to each other through a bus and perform communication with each other. The bus may be an Industry Standard Architecture (ISA) bus, a Peripheral Component Interconnect (PCI) bus, an Extended ISA (EISA) bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one thick line is shown in FIG. 11, but this is not intended to represent only one bus or type of bus.
Optionally, in a specific implementation, if the memory 1201, the processor 1202, and the communication interface 1203 are integrated on a chip, the memory 1201, the processor 1202, and the communication interface 1203 may complete mutual communication through an internal interface.
The processor 1202 may be a central processing unit (CPU), an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to implement the embodiments of the present application.
The present embodiment also provides a computer-readable storage medium, on which a computer program is stored, wherein the program is executed by a processor to implement the above target image-based localization method.
In the description herein, reference to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the application. In this specification, the schematic representations of the terms used above are not necessarily intended to refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or N embodiments or examples. Furthermore, various embodiments or examples and features of different embodiments or examples described in this specification can be combined and combined by one skilled in the art without contradiction.
Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present application, "N" means at least two, e.g., two, three, etc., unless specifically limited otherwise.
Any process or method description in a flowchart, or otherwise described herein, may be understood as representing a module, segment, or portion of code that includes one or N executable instructions for implementing steps of a custom logic function or process. Alternate implementations are included within the scope of the preferred embodiments of the present application, in which functions may be executed out of the order shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those skilled in the art of implementing the embodiments of the present application.
The logic and/or steps represented in the flowcharts or otherwise described herein, for example an ordered listing of executable instructions for implementing logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, a processor-containing system, or another system that can fetch the instructions from the instruction execution system, apparatus, or device and execute them. For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of the computer-readable medium include: an electrical connection (electronic device) having one or N wires, a portable computer diskette (magnetic device), a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CD-ROM). The computer-readable medium could even be paper or another suitable medium upon which the program is printed, as the program can be electronically captured, for instance via optical scanning of the paper or other medium, then compiled, interpreted, or otherwise processed in a suitable manner if necessary, and then stored in a computer memory.
It should be understood that portions of the present application may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, the N steps or methods may be implemented in software or firmware stored in a memory and executed by a suitable instruction execution system. If implemented in hardware, as in another embodiment, any one or combination of the following techniques, which are known in the art, may be used: a discrete logic circuit having a logic gate circuit for implementing a logic function on a data signal, an application specific integrated circuit having an appropriate combinational logic gate circuit, a Programmable Gate Array (PGA), a Field Programmable Gate Array (FPGA), or the like.
It will be understood by those skilled in the art that all or part of the steps carried by the method for implementing the above embodiments may be implemented by hardware related to instructions of a program, which may be stored in a computer readable storage medium, and when the program is executed, the program includes one or a combination of the steps of the method embodiments.
In addition, functional units in the embodiments of the present application may be integrated into one processing module, each unit may exist alone physically, or two or more units may be integrated into one module. The integrated module may be implemented in hardware or as a software functional module. The integrated module, if implemented as a software functional module and sold or used as a stand-alone product, may also be stored in a computer-readable storage medium.
The storage medium mentioned above may be a read-only memory, a magnetic disk, an optical disk, or the like.

Although embodiments of the present application have been shown and described above, it is understood that the above embodiments are exemplary and should not be construed as limiting the present application; variations, modifications, substitutions, and alterations may be made to the above embodiments by those of ordinary skill in the art within the scope of the present application.

Claims (12)

1. A positioning method based on a target image is characterized by comprising the following steps:
acquiring a target image and initial coordinate data of a target;
generating a section line on the target image according to the initial coordinate data, and extracting a gray value curve of the section line;
calculating the energy of the section line by using an energy function based on the gray value curve;
generating a total energy distribution curve according to the energy of the section line, and identifying final coordinate data of the target according to the total energy distribution curve.
2. The method of claim 1, wherein the generating a section line on the target image according to the initial coordinate data comprises: adding a preset pixel value to, or subtracting it from, the initial coordinate data to generate two section lines in the X-axis direction and two section lines in the Y-axis direction on the target image.
3. The method of claim 1, wherein the calculating the energy of the section line by using an energy function based on the gray value curve comprises:
generating an elastic force distribution curve through a preset mechanical model based on the gray value curve, and calculating curve energy of the elastic force distribution curve;
calculating energy of the target image; and
summing the curve energy and the energy of the target image to obtain the energy of the section line.
4. The method of claim 3, wherein the generating an elastic force distribution curve through a preset mechanical model based on the gray value curve comprises:
intercepting the gray value curve along the section line of the grayscale image, and calculating the first derivative of the gray value curve to obtain the elastic force distribution curve.
5. The method of claim 4, wherein the curve energy is calculated by the formula:
[Curve energy formula; published only as image FDA0002701718640000021 and not recoverable from the text.]
where F represents the elastic force corresponding to the elastic force distribution curve, f represents the curve function of the elastic force distribution curve, and x represents the coordinate value in the curve function.
6. The method of claim 3, wherein the image energy is calculated by the formula:
E_image = -|∇(G_σ ∗ I)|²  [reconstructed as the classical active-contour edge energy; the original formula was published only as an image]
where E_image is the image energy, G_σ is a Gaussian kernel with standard deviation σ, ∗ denotes convolution, and I is the gray value of the image.
7. The method of claim 1, wherein the calculating the energy of the section line by using an energy function based on the gray value curve comprises:
smoothly fitting the gray value curve, and calculating the energy of the section line by using the energy function based on the smoothed curve.
8. The method of claim 7, wherein the smoothly fitting the gray value curve comprises:
taking an original positioning point obtained from the initial coordinate data as a center, and intercepting an interval of arbitrary length to obtain a fitting limit interval; and
smoothly fitting the gray value curve within the fitting limit interval.
9. The method of claim 1, wherein the generating a total energy distribution curve according to the energy of the section line, and identifying final coordinate data of the target according to the total energy distribution curve comprises:
adding the energies of the section lines in the X-axis direction to generate a total energy distribution curve in the X-axis direction, adding the energies of the section lines in the Y-axis direction to generate a total energy distribution curve in the Y-axis direction, and determining the final coordinate data of the target based on the minimum value of the total energy distribution curve in the X-axis direction and the minimum value of the total energy distribution curve in the Y-axis direction.
10. A target image-based positioning device, comprising:
an acquisition module configured to acquire a target image and initial coordinate data of a target;
an extraction module configured to generate a section line on the target image according to the initial coordinate data, and to extract a gray value curve of the section line;
a calculation module configured to calculate the energy of the section line by using an energy function based on the gray value curve; and
an identification module configured to generate a total energy distribution curve according to the energy of the section line, and to identify final coordinate data of the target according to the total energy distribution curve.
11. An electronic device, comprising: a memory, a processor and a computer program stored on the memory and executable on the processor, the processor executing the program to implement the target image based localization method of any one of claims 1-9.
12. A computer-readable storage medium, on which a computer program is stored, which program is executable by a processor for implementing the target image based localization method according to any one of claims 1-9.
CN202011024413.5A 2020-09-25 2020-09-25 Positioning method and device based on target image Pending CN114255272A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011024413.5A CN114255272A (en) 2020-09-25 2020-09-25 Positioning method and device based on target image


Publications (1)

Publication Number Publication Date
CN114255272A true CN114255272A (en) 2022-03-29

Family

ID=80789198


Country Status (1)

Country Link
CN (1) CN114255272A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination