CN112446229A - Method and device for acquiring pixel coordinates of marker post


Info

Publication number
CN112446229A
Application number
CN201910796083.2A
Authority
CN (China)
Prior art keywords
pixel coordinate; frame image; predicted; coordinate; marker
Legal status
Granted; Active
Other languages
Chinese (zh)
Other versions
CN112446229B
Inventor
杨帅
Current Assignee
Beijing Horizon Robotics Technology Research and Development Co Ltd
Original Assignee
Beijing Horizon Robotics Technology Research and Development Co Ltd
Application filed by Beijing Horizon Robotics Technology Research and Development Co Ltd
Priority to CN201910796083.2A
Publication of CN112446229A
Application granted; publication of CN112446229B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/50 Context or environment of the image
    • G06V 20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V 20/58 Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G06V 20/582 Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads of traffic signs
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/10 Terrestrial scenes
    • G06V 20/182 Network patterns, e.g. roads or rivers

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

Disclosed are a method and an apparatus for acquiring pixel coordinates of a marker post, a computer-readable storage medium, and an electronic device. The method includes: acquiring a first pixel coordinate corresponding to a first reference point of a marker post in the current frame image according to a first detection frame corresponding to the marker post in the current frame image; acquiring a second pixel coordinate corresponding to a second reference point of the marker post in the previous frame image according to a second detection frame corresponding to the marker post in the previous frame image; acquiring, according to the second pixel coordinate, the predicted pixel coordinate in the current frame image corresponding to the marker post in the previous frame image; and acquiring the optimized first pixel coordinate according to the first pixel coordinate and the predicted pixel coordinate. In this method, the predicted pixel coordinate of the marker post in the current frame image is predicted from the pixel coordinate corresponding to the marker post in the previous frame image, and the optimized pixel coordinate is obtained by using the predicted pixel coordinate together with the pixel coordinate corresponding to the marker post in the current frame image, so the obtained optimized pixel coordinate has high accuracy.

Description

Method and device for acquiring pixel coordinates of marker post
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular, to a method and an apparatus for acquiring pixel coordinates of a marker post.
Background
A marker post is an important landmark in a road scene and an essential element for environment perception and localization during unmanned driving. To ensure the effect of environment perception and localization during unmanned driving, the marker post in an image often needs to be detected and tracked accurately; that is, relatively accurate pixel coordinates of the marker post on the image need to be acquired. However, most marker posts in a road scene are thin vertical bars that are only a few pixels wide in the image, so the accuracy of the acquired pixel coordinates of the marker post is low.
Disclosure of Invention
In order to solve the above technical problem, embodiments of the present disclosure provide a method and an apparatus for acquiring pixel coordinates of a marker post, a computer-readable storage medium, and an electronic device. The method predicts the predicted pixel coordinate of the marker post in the current frame image from the pixel coordinate corresponding to the marker post in the previous frame image, and obtains the optimized pixel coordinate by using the predicted pixel coordinate together with the pixel coordinate corresponding to the marker post in the current frame image, so that the obtained optimized pixel coordinate has high accuracy.
According to a first aspect of the present disclosure, there is provided a pixel coordinate acquisition method of a marker post, including:
acquiring a first pixel coordinate corresponding to a first reference point of a marker post in a current frame image according to a first detection frame corresponding to the marker post in the current frame image;
acquiring a second pixel coordinate corresponding to a second reference point of the marker post in the previous frame image according to a second detection frame corresponding to the marker post in the previous frame image;
according to the second pixel coordinate, acquiring a corresponding predicted pixel coordinate of the marker post in the previous frame image in the current frame image;
and obtaining an optimized first pixel coordinate according to the first pixel coordinate and the predicted pixel coordinate.
According to a second aspect of the present disclosure, there is provided a pixel coordinate acquisition apparatus of a marker post, including:
the first coordinate acquisition module is used for acquiring a first pixel coordinate corresponding to a first reference point of a marker post in a current frame image according to a first detection frame corresponding to the marker post in the current frame image;
the second coordinate acquisition module is used for acquiring a second pixel coordinate corresponding to a second reference point of the marker post in the previous frame image according to a second detection frame corresponding to the marker post in the previous frame image;
the predicted coordinate acquisition module is used for acquiring the corresponding predicted pixel coordinate of the marker post in the previous frame image in the current frame image according to the second pixel coordinate;
and the coordinate optimization module is used for acquiring an optimized first pixel coordinate according to the first pixel coordinate and the predicted pixel coordinate.
According to a third aspect of the present disclosure, there is provided a computer-readable storage medium storing a computer program for executing the above-described marker bar pixel coordinate acquisition method.
According to a fourth aspect of the present disclosure, there is provided an electronic apparatus comprising:
a processor;
a memory for storing the processor-executable instructions;
and the processor is used for reading the executable instruction from the memory and executing the instruction to realize the pixel coordinate acquisition method of the marker post.
Compared with the prior art, the method and the device for acquiring the pixel coordinate of the marker post, the computer-readable storage medium and the electronic device provided by the disclosure at least have the following beneficial effects:
on one hand, in the embodiment, it is considered that most of the marker bars in the road scene are thin vertical bars, which reflects that the width of only a few pixels may be in the image, so that the detection frame corresponding to the marker bar in the current frame image is determined to determine the pixel coordinates corresponding to the reference point of the marker bar in the current frame image, the predicted pixel coordinates corresponding to the reference point of the marker bar in the current frame image are further predicted according to the pixel coordinates corresponding to the reference point of the marker bar in the previous frame image, and then the pixel coordinates corresponding to the marker bar in the current frame image are optimized according to the predicted pixel coordinates of the marker bar in the current frame image to obtain the optimized pixel coordinates, thereby avoiding directly determining the pixel coordinates of the marker bar by using the current frame image, and enabling the accuracy of the obtained optimized pixel coordinates to be high.
On the other hand, the pixel coordinate of the marker post obtained in this embodiment has higher accuracy, so the detection and tracking of the marker post based on this pixel coordinate work better, which ensures the effect of environment perception and localization during unmanned driving.
Drawings
The above and other objects, features and advantages of the present disclosure will become more apparent by describing in more detail embodiments of the present disclosure with reference to the attached drawings. The accompanying drawings are included to provide a further understanding of the embodiments of the disclosure and are incorporated in and constitute a part of this specification, illustrate embodiments of the disclosure and together with the description serve to explain the principles of the disclosure and not to limit the disclosure. In the drawings, like reference numbers generally represent like parts or steps.
Fig. 1 is a schematic flowchart of a method for acquiring pixel coordinates of a marker post according to an exemplary embodiment of the present disclosure;
fig. 2 is a scene schematic diagram of a method for acquiring pixel coordinates of a marker post according to an exemplary embodiment of the present disclosure;
fig. 3 is a schematic flowchart of step 10 in a method for acquiring pixel coordinates of a marker post according to an exemplary embodiment of the present disclosure;
fig. 4 is a schematic flowchart of step 40 in a method for acquiring pixel coordinates of a marker post according to an exemplary embodiment of the present disclosure;
fig. 5 is a schematic flowchart of a step 41 in a method for acquiring pixel coordinates of a marker post according to an exemplary embodiment of the present disclosure;
fig. 6 is a schematic flowchart of another step 41 in a method for acquiring pixel coordinates of a marker post according to an exemplary embodiment of the present disclosure;
fig. 7 is a schematic flowchart of step 4111 in a method for obtaining pixel coordinates of a marker bar according to an exemplary embodiment of the present disclosure;
fig. 8 is a flowchart illustrating step 42 of a method for acquiring pixel coordinates of a marker post according to an exemplary embodiment of the present disclosure;
fig. 9 is a schematic structural diagram of a pixel coordinate acquisition device of a marker post according to a first exemplary embodiment of the present disclosure;
fig. 10 is a schematic structural diagram of a pixel coordinate acquiring apparatus of a marker post according to a second exemplary embodiment of the present disclosure;
fig. 11 is a schematic structural diagram of a coordinate matching unit in a pixel coordinate acquisition device of a sign post according to an exemplary embodiment of the present disclosure;
fig. 12 is a schematic structural diagram of another coordinate matching unit in the pixel coordinate acquiring apparatus of the signpost according to an exemplary embodiment of the present disclosure;
fig. 13 is a schematic structural diagram of a coordinate optimization unit in a pixel coordinate acquisition apparatus of a signpost according to an exemplary embodiment of the present disclosure;
fig. 14 is a block diagram of an electronic device provided in an exemplary embodiment of the present disclosure.
Detailed Description
Hereinafter, example embodiments according to the present disclosure will be described in detail with reference to the accompanying drawings. It is to be understood that the described embodiments are merely a subset of the embodiments of the present disclosure and not all embodiments of the present disclosure, with the understanding that the present disclosure is not limited to the example embodiments described herein.
Summary of the application
The marker post, as an important landmark in a road scene, is an essential element for environment perception and localization during unmanned driving. To ensure the effect of environment perception and localization during unmanned driving, the marker post in an image often needs to be detected and tracked accurately. However, most marker posts in a road scene are thin vertical bars that may be only a few pixels wide in the image, and compared with other target objects such as sign boards and traffic lights, their semantic information is weak, so the accuracy of the acquired pixel coordinates of the marker post is low and the effect of detecting and tracking the marker post is poor.
In the method for acquiring the pixel coordinate of the marker post provided in this embodiment, the detection frame corresponding to the marker post in the current frame image is determined so as to determine the pixel coordinate corresponding to the reference point of the marker post in the current frame image. The predicted pixel coordinate of the marker post in the current frame image is then predicted from the pixel coordinate of the reference point of the marker post in the previous frame image, and the pixel coordinate corresponding to the marker post in the current frame image is optimized according to the predicted pixel coordinate to obtain the optimized pixel coordinate. This avoids determining the pixel coordinate of the marker post directly from the current frame image alone, so the accuracy of the obtained optimized pixel coordinate is high. Moreover, since the pixel coordinate of the marker post obtained in this embodiment has high accuracy, the detection and tracking of the marker post based on this pixel coordinate work better, which ensures the effect of environment perception and localization during unmanned driving.
Exemplary method
Fig. 1 is a schematic flowchart of a method for acquiring pixel coordinates of a marker post according to an exemplary embodiment of the present disclosure.
The embodiment can be applied to electronic equipment, and particularly can be applied to a server or a general computer. As shown in fig. 1, an exemplary embodiment of the present disclosure provides a method for acquiring pixel coordinates of a marker post, including at least the following steps:
and step 10, acquiring a first pixel coordinate corresponding to a first reference point of the marker post in the current frame image according to a first detection frame corresponding to the marker post in the current frame image.
As shown in A of fig. 2, while the vehicle travels on the road, a vision sensor mounted on the vehicle captures images and obtains a series of consecutive frame images; when a marker post is within the field of view of the vision sensor, it appears in the images according to the imaging principle shown in B of fig. 2. After an image is obtained, the marker post is first detected and tracked on the image to obtain a current frame image containing the marker post, where the current frame image is the latest image collected by the vision sensor at the current moment. A first detection frame corresponding to the marker post in the current frame image is determined; the first detection frame indicates the rough position of the marker post in the current frame image. A first pixel coordinate corresponding to a first reference point of the marker post in the current frame image is then acquired according to the first detection frame, where the first reference point is a pixel point, determined according to the first detection frame, that possibly corresponds to the marker post, and the first pixel coordinate indicates the position of the first reference point in the current frame image.
And 20, acquiring a second pixel coordinate corresponding to a second reference point of the marker post in the previous frame image according to a second detection frame corresponding to the marker post in the previous frame image.
The previous frame image is an image collected by the vision sensor before the current frame image. A second detection frame corresponding to the marker post in the previous frame image is determined; the second detection frame indicates the rough position of the marker post in the previous frame image. A second pixel coordinate corresponding to a second reference point of the marker post in the previous frame image is then determined according to the second detection frame, where the second reference point is a pixel point, determined according to the second detection frame, that possibly corresponds to the marker post, and the second pixel coordinate indicates the position of the second reference point in the previous frame image.
It should be noted that, when the vision sensor collects images in real time, as it continuously collects the latest current frame image, the current frame image at the previous moment becomes the previous frame image. The first reference point and the second reference point, and likewise the first pixel coordinate and the second pixel coordinate, are therefore essentially the same kind of quantity; the terms "first" and "second" are used only for convenience of distinction.
And step 30, acquiring the corresponding predicted pixel coordinate of the marker post in the previous frame image in the current frame image according to the second pixel coordinate.
After the second pixel coordinate corresponding to the second reference point of the marker post in the previous frame image is determined, the pixel coordinate of the second reference point in the current frame image is predicted from the second pixel coordinate to obtain the predicted pixel coordinate. That is, the predicted pixel coordinate is a virtual pixel coordinate predicted from the second pixel coordinate in the previous frame image, not the pixel coordinate that actually corresponds to the marker post in the current frame image. Specifically, when the current frame is acquired, the predicted pixel coordinate generated from the second pixel coordinate of the previous frame image may be obtained from a Kalman filter.
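As an illustrative sketch only (the state layout, noise values, and class name below are assumptions, not details given in this disclosure), the prediction step described above can be realized with a constant-velocity Kalman filter over the reference-point pixel coordinate:

```python
import numpy as np

# Minimal constant-velocity Kalman filter over one reference point (u, v) in pixels.
# The state is x = [u, v, du, dv]; predict() yields the predicted pixel coordinate
# of the second reference point in the current frame image.
class PointKalmanFilter:
    def __init__(self, u, v, dt=1.0):
        self.x = np.array([u, v, 0.0, 0.0], dtype=float)  # initialized from the second pixel coordinate
        self.P = np.eye(4) * 10.0                         # state covariance (assumed value)
        self.F = np.array([[1, 0, dt, 0],
                           [0, 1, 0, dt],
                           [0, 0, 1,  0],
                           [0, 0, 0,  1]], dtype=float)   # constant-velocity transition
        self.H = np.array([[1, 0, 0, 0],
                           [0, 1, 0, 0]], dtype=float)    # only (u, v) is observed
        self.Q = np.eye(4) * 0.01                         # process noise (assumed value)
        self.R = np.eye(2) * 1.0                          # measurement noise (assumed value)

    def predict(self):
        """Propagate the state one frame ahead and return the predicted pixel coordinate."""
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:2]
```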
And step 40, acquiring an optimized first pixel coordinate according to the first pixel coordinate and the predicted pixel coordinate.
Because the marker post may be only a few pixels wide in the image, the accuracy of the first pixel coordinate directly determined for the first reference point of the marker post in the current frame image is low. The first pixel coordinate is therefore optimized according to the virtual predicted pixel coordinate in the current frame image, and the resulting optimized first pixel coordinate has higher accuracy.
The method for acquiring the pixel coordinates of the marker post provided by the embodiment at least has the following beneficial effects:
on one hand, in this embodiment, the first detection frame corresponding to the marker post in the current frame image is determined so as to determine the first pixel coordinate corresponding to the first reference point of the marker post in the current frame image. The predicted pixel coordinate corresponding to the marker post in the current frame image is then predicted from the second pixel coordinate corresponding to the second reference point of the marker post in the previous frame image, and the first pixel coordinate corresponding to the marker post in the current frame image is optimized according to the predicted pixel coordinate to obtain the optimized pixel coordinate. This avoids determining the pixel coordinate of the marker post directly from the current frame image alone, so the accuracy of the obtained optimized pixel coordinate is high.
On the other hand, the pixel coordinate of the marker post obtained in this embodiment has higher accuracy, so the detection and tracking of the marker post based on this pixel coordinate work better, which ensures the effect of environment perception and localization during unmanned driving.
Fig. 3 is a schematic flow chart illustrating the process of obtaining the first pixel coordinate corresponding to the first reference point of the marker bar in the current frame image according to the first detection frame corresponding to the marker bar in the current frame image in the embodiment shown in fig. 1.
As shown in fig. 3, on the basis of the embodiment shown in fig. 1, in an exemplary embodiment of the present application, the step 10 of acquiring a first pixel coordinate corresponding to a first reference point of a marker bar in a current frame image may specifically include the following steps:
and step 11, determining a first detection frame corresponding to the marker post in the current frame image according to the semantic information of the current frame image.
After the current frame image is obtained, semantic segmentation is performed on it, that is, different objects in the current frame image are segmented at the pixel level according to the image content, so as to determine the semantic information carried by each pixel point in the current frame image. The pixel points whose semantic information corresponds to the marker post are then identified, and the upper, lower, left and right boundaries of the marker post in the image are determined from these pixel points, yielding the first detection frame corresponding to the marker post in the current frame image, i.e., the minimum bounding box of the marker post in the image. Specifically, a semantic segmentation map corresponding to the current frame image can be obtained through a semantic segmentation convolutional neural network, and the semantic information carried by each pixel point can be determined from it.
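As a hedged sketch (the segmentation network, the label id, and the function name are assumptions), the first detection frame can be computed as the minimum bounding box of the pixels whose semantic label corresponds to the marker post:

```python
import numpy as np

def detection_frame_from_segmentation(seg_map, marker_label):
    """Return the minimum bounding box (x_min, y_min, x_max, y_max) of all pixels
    whose semantic label equals marker_label; seg_map is an H x W label image
    produced by a semantic segmentation network (assumed, not specified here)."""
    ys, xs = np.nonzero(seg_map == marker_label)
    if xs.size == 0:
        return None                    # no marker-post pixels in this frame
    return int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max())
```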
And step 12, extracting line segments of the current frame image according to the first detection frame, and determining a first reference point of the marker post in the current frame image.
Since a marker post naturally appears as a line, the line segments extracted from the current frame image according to the first detection frame correspond to the marker post in the current frame image, which gives more accurate position information of the marker post in the current frame image. Points are then selected on the extracted line segment to determine the first reference point of the marker post in the current frame image; that is, the first reference point corresponds to any one or more pixel points on the extracted line segment.
It should be noted that different semantic segmentation algorithms yield semantic segmentation maps of different accuracy, so the region used for line segment extraction according to the first detection frame can be set according to the actual situation. For example, when the semantic segmentation result is accurate, all pixel points of the marker post in the current frame image fall inside the first detection frame, and line segment extraction can be performed only in the interior of the first detection frame. When the semantic segmentation result is not accurate, some pixel points of the marker post may lie outside the first detection frame, and line segments then need to be extracted both inside the first detection frame and in the area within a certain distance around it, so as to ensure that the extracted marker post is complete. The user can therefore determine the region for line segment extraction according to the actual service scene.
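A minimal sketch of this line-segment extraction step, assuming OpenCV is available; the margin around the detection frame and the Canny/Hough parameters are illustrative choices rather than values specified here:

```python
import cv2
import numpy as np

def extract_marker_segments(gray_image, box, margin=5):
    """Extract line segments inside (and slightly around) the first detection frame.
    box = (x_min, y_min, x_max, y_max); margin widens the search region to tolerate
    an imperfect semantic segmentation result, as discussed above."""
    h, w = gray_image.shape[:2]
    x0 = max(box[0] - margin, 0)
    y0 = max(box[1] - margin, 0)
    x1 = min(box[2] + margin, w - 1)
    y1 = min(box[3] + margin, h - 1)
    roi = gray_image[y0:y1 + 1, x0:x1 + 1]
    edges = cv2.Canny(roi, 50, 150)
    segments = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=30,
                               minLineLength=20, maxLineGap=5)
    if segments is None:
        return []
    # Shift segment endpoints back to full-image pixel coordinates.
    return [(x0 + xa, y0 + ya, x0 + xb, y0 + yb)
            for xa, ya, xb, yb in segments[:, 0]]
```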
And step 13, acquiring a first pixel coordinate corresponding to the first reference point.
After the first reference point is determined, a first pixel coordinate corresponding to the first reference point may be determined.
It should be noted that each previous frame image was itself the current frame image at an earlier moment. The second pixel coordinate in the previous frame image is therefore obtained in the same way: semantic segmentation is performed on the previous frame image to determine the semantic information carried by each pixel point, the second detection frame corresponding to the marker post in the previous frame image is determined from the pixel points whose semantic information corresponds to the marker post, and line segments are extracted from the previous frame image based on the second detection frame to determine the second pixel coordinate corresponding to the second reference point of the marker post in the previous frame image. As a result, the second pixel coordinate in the previous frame image also represents the position of the marker post in the previous frame image relatively accurately.
In this embodiment, after the first detection frame is determined by using semantic information carried by each pixel point in the current frame image, the line segment extraction is performed on the current frame image based on the first detection frame, so that a relatively accurate first pixel coordinate of the marker post can be determined, and the accuracy of the finally obtained optimized first pixel coordinate is further ensured.
Fig. 4 shows a schematic flow chart of obtaining the optimized first pixel coordinate according to the first pixel coordinate and the predicted pixel coordinate in the embodiment shown in fig. 1.
As shown in fig. 4, based on the embodiment shown in fig. 1, in an exemplary embodiment of the present application, the obtaining the optimized first pixel coordinate shown in step 40 may specifically include the following steps:
and step 41, determining the matched first pixel coordinate and the predicted pixel coordinate in the first pixel coordinate and the predicted pixel coordinate.
More than one marker post often exists in the current frame image and the previous frame image. After m first pixel coordinates are determined in the current frame image, they therefore need to be matched with the n predicted pixel coordinates: only when the first pixel coordinate is optimized according to a matched pair of a first pixel coordinate and a predicted pixel coordinate can the resulting first pixel coordinate be more accurate. The first pixel coordinates and the predicted pixel coordinates are thus matched to determine the matched first pixel coordinate and predicted pixel coordinate.
It should be noted that a marker post may appear in the current frame image for the first time, in which case it has no corresponding predicted pixel coordinate; the number m of first pixel coordinates is therefore not necessarily equal to the number n of predicted pixel coordinates.
Specifically, when the first frame image is acquired, the marker posts in it are detected, the pixel coordinates corresponding to the reference points of, for example, 3 marker posts are determined, and a unique tracking id is allocated to each marker post. After the second frame image is acquired, the predicted pixel coordinates in the second frame image corresponding to the reference points of the 3 marker posts in the first frame image are determined, the marker posts in the second frame image are detected, and the pixel coordinates corresponding to the reference points of, for example, 4 marker posts are determined. The pixel coordinates matched with the predicted pixel coordinates are then determined in the second frame image by using the Hungarian assignment algorithm, and new tracking ids are allocated to the marker posts in the second frame image that were not successfully matched.
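The matching and tracking-id bookkeeping described above could look roughly as follows; this sketch uses SciPy's Hungarian solver with the first distance as the cost, and all helper and variable names are assumptions:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_and_assign_ids(first_coords, predicted_coords, track_ids, next_id):
    """first_coords: m detected pixel coordinates in the current frame; predicted_coords:
    n predictions carried over from the previous frame; track_ids: ids of those n tracked
    marker posts. Returns (matches, ids_for_detections, next_id); detections that match
    no prediction are treated as marker posts seen for the first time."""
    first_coords = np.asarray(first_coords, dtype=float).reshape(-1, 2)
    predicted_coords = np.asarray(predicted_coords, dtype=float).reshape(-1, 2)
    m, n = len(first_coords), len(predicted_coords)
    ids, matches = [None] * m, []
    if m and n:
        # Cost here is the plain first distance; the weighted or endpoint-based
        # variants described later could be substituted.
        cost = np.linalg.norm(first_coords[:, None, :] - predicted_coords[None, :, :], axis=2)
        rows, cols = linear_sum_assignment(cost)    # Hungarian assignment
        for r, c in zip(rows, cols):
            matches.append((int(r), int(c)))
            ids[r] = track_ids[c]
    for r in range(m):
        if ids[r] is None:                          # first appearance: allocate a new id
            ids[r] = next_id
            next_id += 1
    return matches, ids, next_id
```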
And 42, acquiring an optimized first pixel coordinate according to the matched first pixel coordinate and the predicted pixel coordinate.
After the matched first pixel coordinate and predicted pixel coordinate are obtained, the optimized first pixel coordinate is obtained according to the first pixel coordinate matched with the predicted pixel coordinate. Specifically, a Kalman filter is updated with the first pixel coordinate matched with the predicted pixel coordinate, and the updated Kalman filter is used to determine the optimized first pixel coordinate. The Kalman filter combines the historical tracking record and adjusts the residual between the detection in the previous frame image and the detection in the current frame image, so as to cope with missed detections and false detections and to match tracking ids better.
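Continuing the hedged Kalman-filter sketch from above, the update step that fuses the matched first pixel coordinate with the prediction can be written as a standalone function (names and matrix shapes are assumptions):

```python
import numpy as np

def kalman_update(x, P, z, H, R):
    """One Kalman update step: fuse the predicted state (x, P) with the matched
    first pixel coordinate z = [u, v] observed in the current frame image.
    Returns the optimized state and covariance; x[:2] of the result is the
    optimized first pixel coordinate."""
    y = z - H @ x                       # innovation (detection minus prediction)
    S = H @ P @ H.T + R                 # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
    x_new = x + K @ y
    P_new = (np.eye(P.shape[0]) - K @ H) @ P
    return x_new, P_new
```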
In this embodiment, considering that more than one marker post may exist in the current frame image and the previous frame image, the first pixel coordinates and the predicted pixel coordinates are matched to determine the matched pairs. The matching operation selects the first pixel coordinates with higher accuracy, and the optimized first pixel coordinate is obtained according to the first pixel coordinate matched with the predicted pixel coordinate, so the accuracy of the optimized first pixel coordinate is higher.
Fig. 5 shows a schematic flow chart of determining the matching first pixel coordinate and predicted pixel coordinate from the first pixel coordinate and predicted pixel coordinate in the embodiment shown in fig. 4.
As shown in fig. 5, based on the embodiment shown in fig. 4, in an exemplary embodiment of the present application, the determining the matched first pixel coordinate and predicted pixel coordinate in the first pixel coordinate and predicted pixel coordinate shown in step 41 specifically includes the following steps:
step 4111, determining a loss matrix according to a first distance between the first pixel coordinate and the predicted pixel coordinate.
Since most marker posts are thin vertical bars, a marker post may be only a few pixels wide in the image, so the aspect ratio of its minimum bounding box in the image is small; it is therefore difficult to use the common intersection-over-union (IoU) as a measure of the detection and tracking effect. Specifically, a first distance d1 = ||p - p'|| is determined between each of the m first pixel coordinates and each of the n predicted pixel coordinates, where d1 denotes the first distance, p denotes a first pixel coordinate, and p' denotes a predicted pixel coordinate; an m×n loss matrix is then constructed with each determined first distance as an element.
And 4112, determining the matched first pixel coordinate and the predicted pixel coordinate according to the loss matrix.
According to the first distances in the loss matrix, the matched first pixel coordinates and predicted pixel coordinates are determined as the assignment that minimizes the sum of the first distances between the m first pixel coordinates and the n predicted pixel coordinates.
In this embodiment, the first distance between the first pixel coordinate and the predicted pixel coordinate is used as the penalty term to determine the matched first pixel coordinate and predicted pixel coordinate, so the first distance between a matched pair is small; in other words, the predicted pixel coordinate matched with each first pixel coordinate is determined accurately.
Fig. 6 shows a schematic flow chart of determining the matching first pixel coordinate and predicted pixel coordinate from the first pixel coordinate and predicted pixel coordinate in the embodiment shown in fig. 5.
As shown in fig. 6, based on the embodiment shown in fig. 5, in an exemplary embodiment of the present application, the determining the matched first pixel coordinate and predicted pixel coordinate in the first pixel coordinate and predicted pixel coordinate shown in step 41 specifically includes the following steps:
step 4121, determining an angle between the marker post corresponding to the first pixel coordinate and the marker post corresponding to the predicted pixel coordinate.
In order to make the determined matched first pixel coordinate and predicted pixel coordinate more accurate, the included angle between the marker post corresponding to the first pixel coordinate and the marker post corresponding to the predicted pixel coordinate is added as a penalty term. The purpose of determining the included angle is to check whether the marker post corresponding to the predicted pixel coordinate and the marker post corresponding to the first pixel coordinate have the same orientation. There are cases where the first distance between the predicted pixel coordinate and the first pixel coordinate is small but the orientations of the corresponding marker posts differ; in such cases, a matching result determined only on the basis of the first distance is poor. Therefore, for a better matching result, the included angle between the marker post corresponding to the first pixel coordinate and the marker post corresponding to the predicted pixel coordinate is determined.
Step 4122 determines a loss matrix based on the first distance and the included angle between the first pixel coordinate and the predicted pixel coordinate.
A loss matrix is determined from two penalty terms: the first distance between the first pixel coordinate and the predicted pixel coordinate, and the included angle between the marker post corresponding to the first pixel coordinate and the marker post corresponding to the predicted pixel coordinate. In this way the matched first pixel coordinate and predicted pixel coordinate determined by the loss matrix not only have a smaller first distance but also the same orientation, so the matching result is better. Specifically, d2 = ||p - p'|| + w * ||α - α'||, where d2 denotes the weighted distance, α denotes the detected orientation of the marker post in the current frame image, α' denotes the predicted orientation of the marker post in the current frame image obtained from the detected orientation of the marker post in the previous frame image, and w denotes the weight between the first distance and the included angle.
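A minimal sketch of the weighted cost d2 above; the orientation of each marker post is assumed to be an angle (e.g. in radians) obtained from the extracted line segment, and the weight w is an illustrative value:

```python
import numpy as np

def weighted_cost_matrix(first_coords, pred_coords, det_angles, pred_angles, w=10.0):
    """Loss matrix combining the first distance ||p - p'|| with the orientation
    penalty w * |alpha - alpha'|; w balances pixels against radians and is an
    assumed illustrative value, not one specified by this disclosure."""
    first_coords = np.asarray(first_coords, dtype=float)
    pred_coords = np.asarray(pred_coords, dtype=float)
    dist = np.linalg.norm(first_coords[:, None, :] - pred_coords[None, :, :], axis=2)
    dangle = np.abs(np.asarray(det_angles, float)[:, None] - np.asarray(pred_angles, float)[None, :])
    return dist + w * dangle          # m x n loss matrix d2
```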
Step 4123, determining the matched first pixel coordinate and predicted pixel coordinate according to the loss matrix.
According to the loss matrix determined from the first distance and the included angle, the matched first pixel coordinates and predicted pixel coordinates are determined as the assignment that minimizes the sum of the weighted distances between the first pixel coordinates and the predicted pixel coordinates.
In this embodiment, by further determining the included angle between the marker post corresponding to the first pixel coordinate and the marker post corresponding to the predicted pixel coordinate, an orientation constraint is imposed on the two marker posts, so that the determined matched first pixel coordinate and predicted pixel coordinate not only have a smaller first distance but also the same orientation, which ensures that the determined matching is more accurate.
Fig. 7 shows a schematic flow chart of determining the loss matrix according to the first distance between the first pixel coordinate and the predicted pixel coordinate in the embodiment shown in fig. 5.
As shown in fig. 7, based on the embodiment shown in fig. 5, in an exemplary embodiment of the present application, the determining the loss matrix in step 4111 may specifically include the following steps:
step 41111, determining a second distance between the starting point pixel coordinate and the starting point prediction pixel coordinate.
The first pixel coordinates include a start point pixel coordinate, and the predicted pixel coordinates include a start point predicted pixel coordinate; a second distance between the start point pixel coordinate and the start point predicted pixel coordinate is determined.
Step 41112, determining a third distance between the endpoint pixel coordinate and the endpoint prediction pixel coordinate.
The first pixel coordinates include an end point pixel coordinate, and the predicted pixel coordinates include an end point predicted pixel coordinate; a third distance between the end point pixel coordinate and the end point predicted pixel coordinate is determined.
Step 41113, determining a loss matrix according to the second distance and the third distance.
The second distance and the third distance together form the distance penalty term, and the loss matrix is determined using them; this constrains the lateral distances between the predicted pixel coordinates and the first pixel coordinates at the two ends of the marker post, so the matched first pixel coordinate and predicted pixel coordinate determined by the loss matrix are more accurate. Specifically, d3 = ||p_start - p_start'|| + ||p_end - p_end'||, where p_start denotes the start point pixel coordinate, p_start' denotes the start point predicted pixel coordinate, p_end denotes the end point pixel coordinate, and p_end' denotes the end point predicted pixel coordinate. Of course, the second distance and the third distance may also be used together as the distance penalty term while the included angle between the marker post corresponding to the first pixel coordinate and the marker post corresponding to the predicted pixel coordinate is used as the angle penalty term, that is, d4 = ||p_start - p_start'|| + ||p_end - p_end'|| + w * ||α - α'||, which makes the determined matching between the first pixel coordinates and the predicted pixel coordinates more accurate.
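A hedged sketch of the endpoint-based penalties d3 and d4, representing each marker post by its start and end pixel coordinates and treating w as an illustrative weight:

```python
import numpy as np

def endpoint_cost(det_start, det_end, pred_start, pred_end,
                  det_angle=None, pred_angle=None, w=10.0):
    """d3 = ||p_start - p_start'|| + ||p_end - p_end'||; if orientations are given,
    return d4 = d3 + w * |alpha - alpha'| instead (w is an assumed weight)."""
    d3 = (np.linalg.norm(np.subtract(det_start, pred_start)) +
          np.linalg.norm(np.subtract(det_end, pred_end)))
    if det_angle is None or pred_angle is None:
        return d3
    return d3 + w * abs(det_angle - pred_angle)
```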
In this embodiment, the first pixel coordinates include a start point pixel coordinate and an end point pixel coordinate, and the predicted pixel coordinates include a start point predicted pixel coordinate and an end point predicted pixel coordinate. The loss matrix is determined using the second distance between the start point pixel coordinate and the start point predicted pixel coordinate and the third distance between the end point pixel coordinate and the end point predicted pixel coordinate, so the matched first pixel coordinate and predicted pixel coordinate determined by the loss matrix are more accurate.
Fig. 8 shows a schematic flowchart of obtaining the optimized first pixel coordinate according to the matched first pixel coordinate and the predicted pixel coordinate in the embodiment shown in fig. 5.
As shown in fig. 8, based on the embodiment shown in fig. 5, in an exemplary embodiment of the present application, the obtaining the optimized first pixel coordinate shown in step 42 may specifically include the following steps:
in step 421, the matched first pixel coordinate and predicted pixel coordinate of which the first distance satisfies the preset condition are determined.
After the matched first pixel coordinates and predicted pixel coordinates are determined using the loss matrix, the matched pairs whose first distance satisfies a preset condition are selected. The preset condition is that the first distance is smaller than a preset threshold, where the preset threshold is a threshold on the first distance. Only when the first distance between a matched first pixel coordinate and predicted pixel coordinate is smaller than the preset threshold is the match considered reliable, which avoids wrong matches.
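A minimal gating sketch for this screening step, assuming the loss matrix and the matched index pairs from the earlier sketches; the preset threshold value is an assumption:

```python
def filter_matches(matches, cost, max_distance=30.0):
    """Keep only matched (detection, prediction) index pairs whose loss-matrix entry
    is below a preset threshold; max_distance (in pixels) is an illustrative value."""
    return [(r, c) for r, c in matches if cost[r, c] < max_distance]
```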
Step 422, obtaining an optimized first pixel coordinate according to the matched first pixel coordinate and the predicted pixel coordinate, where the first distance meets the preset condition.
The matched first pixel coordinate and predicted pixel coordinate whose first distance satisfies the preset condition are determined, and the first pixel coordinate is then optimized according to this matched pair, so the accuracy of the obtained optimized first pixel coordinate is higher.
In this embodiment, the matched first pixel coordinates and predicted pixel coordinates are further screened, and the pairs whose first distance satisfies the preset condition are selected from them, which ensures the accuracy of the matching result; only when the matched first pixel coordinate and predicted pixel coordinate are accurate enough is the obtained optimized first pixel coordinate more accurate.
Exemplary devices
Based on the same conception as the method embodiment of the application, the embodiment of the application also provides a device for acquiring the pixel coordinate of the marker post.
Fig. 9 is a schematic structural diagram illustrating a pixel coordinate acquiring apparatus for a signpost according to an exemplary embodiment of the present application.
As shown in fig. 9, an exemplary embodiment of the present application provides a pixel coordinate acquiring apparatus for a signpost, including:
the first coordinate obtaining module 91 is configured to obtain a first pixel coordinate corresponding to a first reference point of a marker bar in a current frame image according to a first detection frame corresponding to the marker bar in the current frame image;
a second coordinate obtaining module 92, configured to obtain, according to a second detection frame corresponding to the marker bar in the previous frame image, a second pixel coordinate corresponding to a second reference point of the marker bar in the previous frame image;
a predicted coordinate obtaining module 93, configured to obtain, according to the second pixel coordinate, a predicted pixel coordinate of the marker post in the previous frame image, corresponding to the current frame image;
and a coordinate optimization module 94, configured to obtain an optimized first pixel coordinate according to the first pixel coordinate and the predicted pixel coordinate.
As shown in fig. 10, in an exemplary embodiment of the present invention, the first coordinate acquisition module 91 includes:
a detection frame determining unit 911, configured to determine, according to the semantic information of the current frame image, a first detection frame corresponding to the marker post in the current frame image;
a reference point determining unit 912, configured to perform line segment extraction on the current frame image according to the first detection frame, and determine a first reference point of the marker post in the current frame image;
a coordinate obtaining unit 913, configured to obtain a first pixel coordinate corresponding to the first reference point.
As shown in FIG. 10, in one exemplary embodiment of the invention, the coordinate optimization module 94 includes:
a coordinate matching unit 941, configured to determine a first pixel coordinate and a predicted pixel coordinate that are matched in the first pixel coordinate and the predicted pixel coordinate;
a coordinate optimization unit 942 is configured to obtain an optimized first pixel coordinate according to the matched first pixel coordinate and the predicted pixel coordinate.
As shown in fig. 11, in an exemplary embodiment of the present invention, the coordinate matching unit 941 includes:
a first matrix determination subunit 94111, configured to determine a loss matrix according to a first distance between the first pixel coordinate and the predicted pixel coordinate;
a first coordinate matching subunit 94112, configured to determine the matched first pixel coordinate and predicted pixel coordinate according to the loss matrix.
In an embodiment of the present invention, the first pixel coordinates include a start point pixel coordinate and an end point pixel coordinate, the predicted pixel coordinates include a start point predicted pixel coordinate and an end point predicted pixel coordinate, and the first matrix determination subunit 94111 is configured to perform the following steps:
determining a second distance between the starting point pixel coordinate and the starting point prediction pixel coordinate;
determining a third distance between the endpoint pixel coordinate and the endpoint prediction pixel coordinate;
and determining a loss matrix according to the second distance and the third distance.
As shown in fig. 12, in an exemplary embodiment of the present invention, the coordinate matching unit 941 includes:
an included angle determining subunit 94121, configured to determine an included angle between the marker post corresponding to the first pixel coordinate and the marker post corresponding to the predicted pixel coordinate;
a second matrix determination subunit 94122, configured to determine a loss matrix according to a first distance and an included angle between the first pixel coordinate and the predicted pixel coordinate;
a second coordinate matching subunit 94123, configured to determine the matched first pixel coordinate and predicted pixel coordinate according to the loss matrix.
In one embodiment of the invention, the first pixel coordinates comprise a start point pixel coordinate and an end point pixel coordinate, the predicted pixel coordinates comprise a start point predicted pixel coordinate and an end point predicted pixel coordinate, and the second matrix determination subunit 94122 is configured to perform the following steps:
determining a second distance between the starting point pixel coordinate and the starting point prediction pixel coordinate;
determining a third distance between the endpoint pixel coordinate and the endpoint prediction pixel coordinate;
and determining a loss matrix according to the second distance, the third distance and the included angle.
As shown in fig. 13, in an exemplary embodiment of the present invention, the coordinate optimization unit 942 includes:
a distance determining subunit 9421 configured to determine matched first pixel coordinates and predicted pixel coordinates for which the first distance satisfies a preset condition;
a coordinate optimization subunit 9422, configured to obtain an optimized first pixel coordinate according to the matched first pixel coordinate and the predicted pixel coordinate, where the first distance satisfies a preset condition.
Exemplary electronic device
FIG. 14 illustrates a block diagram of an electronic device in accordance with an embodiment of the present application.
As shown in fig. 14, the electronic device 100 includes one or more processors 101 and memory 102.
The processor 101 may be a Central Processing Unit (CPU) or other form of processing unit having data processing capabilities and/or instruction execution capabilities, and may control other components in the electronic device 100 to perform desired functions.
Memory 102 may include one or more computer program products that may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. The volatile memory may include, for example, Random Access Memory (RAM), cache memory (cache), and/or the like. The non-volatile memory may include, for example, Read Only Memory (ROM), hard disk, flash memory, etc. One or more computer program instructions may be stored on the computer-readable storage medium and executed by the processor 101 to implement the pixel coordinate acquisition method of the signpost of the various embodiments of the present application described above and/or other desired functions.
In one example, the electronic device 100 may further include: an input device 103 and an output device 104, which are interconnected by a bus system and/or other form of connection mechanism (not shown).
Of course, for the sake of simplicity, only some of the components related to the present application in the electronic apparatus 100 are shown in fig. 14, and components such as a bus, an input/output interface, and the like are omitted. In addition, electronic device 100 may include any other suitable components depending on the particular application.
Exemplary computer program product and computer-readable storage Medium
In addition to the above-described methods and apparatus, embodiments of the present application may also be a computer program product comprising computer program instructions which, when executed by a processor, cause the processor to perform the steps in the method of pixel coordinate acquisition of a signpost according to various embodiments of the present application described in the "exemplary methods" section of this specification, supra.
The computer program product may be written with program code for performing the operations of embodiments of the present application in any combination of one or more programming languages, including an object oriented programming language such as Java, C++, or the like, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server.
Furthermore, embodiments of the present application may also be a computer-readable storage medium having stored thereon computer program instructions that, when executed by a processor, cause the processor to perform the steps in the method of acquiring pixel coordinates of a signpost according to various embodiments of the present application described in the "exemplary methods" section above in this specification.
The computer-readable storage medium may take any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may include, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The foregoing describes the general principles of the present application in conjunction with specific embodiments, however, it is noted that the advantages, effects, etc. mentioned in the present application are merely examples and are not limiting, and they should not be considered essential to the various embodiments of the present application. Furthermore, the foregoing disclosure of specific details is for the purpose of illustration and description and is not intended to be limiting, since the foregoing disclosure is not intended to be exhaustive or to limit the disclosure to the precise details disclosed.
The block diagrams of devices, apparatuses, and systems referred to in this application are only given as illustrative examples and are not intended to require or imply that the connections, arrangements, and configurations must be made in the manner shown in the block diagrams. These devices, apparatuses, and systems may be connected, arranged, and configured in any manner, as will be appreciated by those skilled in the art. Words such as "including", "comprising", "having", and the like are open-ended words that mean "including, but not limited to" and are used interchangeably therewith. The words "or" and "and" as used herein mean, and are used interchangeably with, the word "and/or", unless the context clearly dictates otherwise. The word "such as" is used herein to mean, and is used interchangeably with, the phrase "such as but not limited to".
It should also be noted that in the devices, apparatuses, and methods of the present application, the components or steps may be decomposed and/or recombined. These decompositions and/or recombinations are to be considered as equivalents of the present application.
The previous description of the disclosed aspects is provided to enable any person skilled in the art to make or use the present application. Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects without departing from the scope of the application. Thus, the present application is not intended to be limited to the aspects shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
The foregoing description has been presented for purposes of illustration and description. Furthermore, the description is not intended to limit embodiments of the application to the form disclosed herein. While a number of example aspects and embodiments have been discussed above, those of skill in the art will recognize certain variations, modifications, alterations, additions and sub-combinations thereof.

Claims (10)

1. A pixel coordinate acquisition method of a marker post, comprising:
acquiring a first pixel coordinate corresponding to a first reference point of a marker post in a current frame image according to a first detection frame corresponding to the marker post in the current frame image;
acquiring a second pixel coordinate corresponding to a second reference point of the marker post in the previous frame image according to a second detection frame corresponding to the marker post in the previous frame image;
according to the second pixel coordinate, acquiring a corresponding predicted pixel coordinate of the marker post in the previous frame image in the current frame image;
and obtaining an optimized first pixel coordinate according to the first pixel coordinate and the predicted pixel coordinate.
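By way of a non-limiting illustration of the prediction step above, the second pixel coordinate of the previous frame image may, for example, be projected into the current frame image with a frame-to-frame homography; the 3x3 matrix H in the Python sketch below is an assumed input (it could come from ego-motion or feature matching), and the claim itself does not fix any particular prediction model.

import numpy as np

def predict_pixel_coords(second_pixel_coords, H):
    # second_pixel_coords: (N, 2) pixel coordinates of reference points in the previous frame image
    # H: assumed 3x3 homography mapping previous-frame pixels to current-frame pixels
    second_pixel_coords = np.asarray(second_pixel_coords, dtype=float)
    pts = np.hstack([second_pixel_coords, np.ones((len(second_pixel_coords), 1))])  # homogeneous form
    projected = (H @ pts.T).T
    return projected[:, :2] / projected[:, 2:3]  # (N, 2) predicted pixel coordinates in the current frame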
2. The method of claim 1, wherein the obtaining an optimized first pixel coordinate according to the first pixel coordinate and the predicted pixel coordinate comprises:
determining a matching first pixel coordinate and predicted pixel coordinate from among the first pixel coordinate and the predicted pixel coordinate;
and obtaining the optimized first pixel coordinate according to the matched first pixel coordinate and predicted pixel coordinate.
3. The method of claim 2, wherein the determining a matching first pixel coordinate and predicted pixel coordinate from among the first pixel coordinate and the predicted pixel coordinate comprises:
determining a loss matrix according to a first distance between the first pixel coordinate and the predicted pixel coordinate;
and determining the matched first pixel coordinate and the predicted pixel coordinate according to the loss matrix.
4. The method of claim 3, wherein the first pixel coordinate comprises a start point pixel coordinate and an end point pixel coordinate, the predicted pixel coordinate comprises a start point predicted pixel coordinate and an end point predicted pixel coordinate, and the determining a loss matrix according to a first distance between the first pixel coordinate and the predicted pixel coordinate comprises:
determining a second distance between the start point pixel coordinate and the start point predicted pixel coordinate;
determining a third distance between the end point pixel coordinate and the end point predicted pixel coordinate;
and determining a loss matrix according to the second distance and the third distance.
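As a non-limiting sketch of the two preceding claims, the loss matrix can be assembled from the second and third distances and then used for matching; the Hungarian assignment from scipy is an assumed choice, since the claims only require that the matching be determined from the loss matrix.

import numpy as np
from scipy.optimize import linear_sum_assignment

def build_loss_matrix(starts, ends, pred_starts, pred_ends):
    # starts, ends: (N, 2) start/end point pixel coordinates detected in the current frame image
    # pred_starts, pred_ends: (M, 2) start/end point predicted pixel coordinates
    starts, ends = np.asarray(starts, dtype=float), np.asarray(ends, dtype=float)
    pred_starts, pred_ends = np.asarray(pred_starts, dtype=float), np.asarray(pred_ends, dtype=float)
    second = np.linalg.norm(starts[:, None, :] - pred_starts[None, :, :], axis=-1)  # second distance
    third = np.linalg.norm(ends[:, None, :] - pred_ends[None, :, :], axis=-1)       # third distance
    return second + third  # (N, M) loss matrix

def match_by_loss(loss):
    rows, cols = linear_sum_assignment(loss)  # minimum-total-loss assignment
    return list(zip(rows.tolist(), cols.tolist()))  # matched (detection, prediction) index pairs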
5. The method of claim 3, further comprising:
determining an included angle between the marker post corresponding to the first pixel coordinate and the marker post corresponding to the predicted pixel coordinate;
and wherein the determining a loss matrix according to a first distance between the first pixel coordinate and the predicted pixel coordinate comprises:
and determining a loss matrix according to the first distance between the first pixel coordinate and the predicted pixel coordinate and the included angle.
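One conceivable way of folding the included angle into each loss entry is sketched below; the weighting factor w is an assumed hyperparameter, not something fixed by the claim.

import numpy as np

def included_angle(start, end, pred_start, pred_end):
    # angle in radians between the detected post segment and the predicted post segment
    v1 = np.asarray(end, dtype=float) - np.asarray(start, dtype=float)
    v2 = np.asarray(pred_end, dtype=float) - np.asarray(pred_start, dtype=float)
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2) + 1e-9)
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))

def loss_entry(first_distance, angle, w=10.0):
    # combine the first distance and the included angle into a single loss matrix entry
    return first_distance + w * angle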
6. The method of claim 3, wherein the obtaining an optimized first pixel coordinate according to the matched first pixel coordinate and predicted pixel coordinate comprises:
determining a matched first pixel coordinate and predicted pixel coordinate for which the first distance meets a preset condition;
and obtaining the optimized first pixel coordinate according to the matched first pixel coordinate and predicted pixel coordinate for which the first distance meets the preset condition.
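The selection and optimization in this claim can be illustrated, again without limitation, by keeping only matched pairs whose first distance satisfies an assumed distance threshold and fusing each surviving pair with an assumed weighted average; the claim prescribes neither the threshold nor the fusion rule.

import numpy as np

def optimize_first_coords(first_coords, pred_coords, pairs, first_dist, max_dist=20.0, alpha=0.7):
    # first_coords: (N, 2) first pixel coordinates; pred_coords: (M, 2) predicted pixel coordinates
    # pairs: matched (i, j) index pairs; first_dist: (N, M) first distances between them
    first_coords = np.asarray(first_coords, dtype=float)
    pred_coords = np.asarray(pred_coords, dtype=float)
    optimized = first_coords.copy()
    for i, j in pairs:
        if first_dist[i, j] <= max_dist:  # preset condition on the first distance (assumed threshold)
            optimized[i] = alpha * first_coords[i] + (1.0 - alpha) * pred_coords[j]
    return optimized  # optimized first pixel coordinates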
7. The method according to any one of claims 1 to 6, wherein the obtaining, according to the first detection frame corresponding to the marker post in the current frame image, the first pixel coordinate corresponding to the first reference point of the marker post in the current frame image comprises:
determining the first detection frame corresponding to the marker post in the current frame image according to semantic information of the current frame image;
performing line segment extraction on the current frame image according to the first detection frame, and determining the first reference point of the marker post in the current frame image;
and acquiring a first pixel coordinate corresponding to the first reference point.
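A possible realisation of the detection-frame-guided line segment extraction is sketched below; the Canny edge detector, the probabilistic Hough transform, and the rule of keeping the longest segment are all assumed implementation choices rather than requirements of the claim.

import cv2
import numpy as np

def first_reference_coords(frame_gray, box):
    # frame_gray: current frame image as a grayscale array; box: first detection frame (x, y, w, h)
    x, y, w, h = box
    roi = frame_gray[y:y + h, x:x + w]  # restrict line segment extraction to the detection frame
    edges = cv2.Canny(roi, 50, 150)
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=30,
                            minLineLength=int(0.5 * h), maxLineGap=5)
    if lines is None:
        return None
    x1, y1, x2, y2 = max(lines[:, 0, :], key=lambda l: np.hypot(l[2] - l[0], l[3] - l[1]))
    # map ROI coordinates back to pixel coordinates in the full current frame image
    return (int(x + x1), int(y + y1)), (int(x + x2), int(y + y2))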
8. A pixel coordinate acquisition apparatus of a marker post, comprising:
a first coordinate acquisition module configured to acquire a first pixel coordinate corresponding to a first reference point of a marker post in a current frame image according to a first detection frame corresponding to the marker post in the current frame image;
a second coordinate acquisition module configured to acquire a second pixel coordinate corresponding to a second reference point of the marker post in a previous frame image according to a second detection frame corresponding to the marker post in the previous frame image;
a predicted coordinate acquisition module configured to acquire, according to the second pixel coordinate, a predicted pixel coordinate in the current frame image corresponding to the marker post in the previous frame image;
and a coordinate optimization module configured to acquire an optimized first pixel coordinate according to the first pixel coordinate and the predicted pixel coordinate.
9. A computer-readable storage medium storing a computer program for executing the pixel coordinate acquisition method of a marker post according to any one of claims 1 to 7.
10. An electronic device, comprising:
a processor;
a memory for storing instructions executable by the processor;
wherein the processor is configured to read the executable instructions from the memory and execute the instructions to implement the pixel coordinate acquisition method of a marker post according to any one of claims 1 to 7.
CN201910796083.2A 2019-08-27 Pixel coordinate acquisition method and device for marker post Active CN112446229B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910796083.2A CN112446229B (en) 2019-08-27 Pixel coordinate acquisition method and device for marker post

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910796083.2A CN112446229B (en) 2019-08-27 Pixel coordinate acquisition method and device for marker post

Publications (2)

Publication Number Publication Date
CN112446229A 2021-03-05
CN112446229B 2024-07-16

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2006068299A1 (en) * 2004-12-24 2006-06-29 Casio Computer Co., Ltd. Image processor and image processing program
CN108229359A (en) * 2017-12-26 2018-06-29 大唐软件技术股份有限公司 A kind of face image processing process and device
CN109508575A (en) * 2017-09-14 2019-03-22 深圳超多维科技有限公司 Face tracking method and device, electronic equipment and computer readable storage medium
CN109816690A (en) * 2018-12-25 2019-05-28 北京飞搜科技有限公司 Multi-target tracking method and system based on depth characteristic
CN109903310A (en) * 2019-01-23 2019-06-18 平安科技(深圳)有限公司 Method for tracking target, device, computer installation and computer storage medium
CN110147702A (en) * 2018-07-13 2019-08-20 腾讯科技(深圳)有限公司 A kind of object detection and recognition method and system of real-time video

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115690333A (en) * 2022-12-30 2023-02-03 思看科技(杭州)股份有限公司 Three-dimensional scanning method and system

Similar Documents

Publication Publication Date Title
EP3581890B1 (en) Method and device for positioning
CN109145680B (en) Method, device and equipment for acquiring obstacle information and computer storage medium
CN107358149B (en) Human body posture detection method and device
CN106952303B (en) Vehicle distance detection method, device and system
JP7422105B2 (en) Obtaining method, device, electronic device, computer-readable storage medium, and computer program for obtaining three-dimensional position of an obstacle for use in roadside computing device
CN110705405A (en) Target labeling method and device
CN111814746A (en) Method, device, equipment and storage medium for identifying lane line
CN113420682A (en) Target detection method and device in vehicle-road cooperation and road side equipment
CN111383246B (en) Scroll detection method, device and equipment
CN110853085A (en) Semantic SLAM-based mapping method and device and electronic equipment
CN112487861A (en) Lane line recognition method and device, computing equipment and computer storage medium
CN116912517B (en) Method and device for detecting camera view field boundary
CN113673288B (en) Idle parking space detection method and device, computer equipment and storage medium
JP2021144741A (en) Information processing system, control method, and program
JP2021089778A (en) Information processing apparatus, information processing method, and program
CN112212873B (en) Construction method and device of high-precision map
CN111027434B (en) Training method and device of pedestrian recognition model and electronic equipment
CN112097742B (en) Pose determination method and device
CN112150529A (en) Method and device for determining depth information of image feature points
CN113869163B (en) Target tracking method and device, electronic equipment and storage medium
CN112446229A (en) Method and device for acquiring pixel coordinates of marker post
CN116434156A (en) Target detection method, storage medium, road side equipment and automatic driving system
CN111832347A (en) Method and device for dynamically selecting region of interest
CN112446229B (en) Pixel coordinate acquisition method and device for marker post
CN111626078A (en) Method and device for identifying lane line

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant