CN113742440B - Road image data processing method and device, electronic equipment and cloud computing platform - Google Patents

Road image data processing method and device, electronic equipment and cloud computing platform

Info

Publication number
CN113742440B
CN113742440B (application CN202111030322.7A)
Authority
CN
China
Prior art keywords
static element
static
road image
map
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111030322.7A
Other languages
Chinese (zh)
Other versions
CN113742440A (en)
Inventor
何雷
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd filed Critical Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN202111030322.7A priority Critical patent/CN113742440B/en
Publication of CN113742440A publication Critical patent/CN113742440A/en
Application granted granted Critical
Publication of CN113742440B publication Critical patent/CN113742440B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/29Geographical information databases
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/23Updating

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Remote Sensing (AREA)
  • Traffic Control Systems (AREA)
  • Image Analysis (AREA)

Abstract

The disclosure provides a data processing method, a data processing device, electronic equipment, a computer storage medium and a cloud computing platform, and relates to the technical fields of computer vision, automatic driving, intelligent transportation and the like. The specific implementation scheme is as follows: detecting a first static element in the acquired road image; determining a corresponding second static element in the map according to the positioning information of the road image; and comparing the first static element with the second static element to obtain a comparison result for the road image. The method and the device help keep the map data updated in a timely manner.

Description

Road image data processing method and device, electronic equipment and cloud computing platform
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to the technical fields of computer vision, automatic driving, intelligent transportation, and the like.
Background
Nowadays, people increasingly depend on tools such as maps and navigation when traveling, and the accuracy of the information a map provides largely determines whether users can reach their destinations correctly, avoid violating traffic regulations, and so on.
However, static elements on the road change from time to time, and if the map data is not updated promptly, users cannot adjust their travel plans and driving operations in time to reflect the changes of the static elements in the road environment.
Disclosure of Invention
The disclosure provides a road image data processing method and apparatus, an electronic device, a computer storage medium and a cloud computing platform.
According to an aspect of the present disclosure, there is provided a road image data processing method including:
detecting a first static element in the acquired road image;
determining a second static element corresponding to the first static element in the map according to the positioning information of the road image;
and comparing the first static element with the second static element to obtain a comparison result of the processed road image.
According to another aspect of the present disclosure, there is provided a road image data processing apparatus including:
the detection module is used for detecting a first static element in the acquired road image;
the positioning information module is used for determining a corresponding second static element in the map according to the positioning information of the road image;
and the comparison module is used for comparing the first static element with the second static element to obtain a comparison result of the processed road image.
According to another aspect of the present disclosure, there is provided an electronic device including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
The memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of the embodiments of the present disclosure.
According to another aspect of the present disclosure, there is provided a non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform the method of any of the embodiments of the present disclosure.
According to another aspect of the present disclosure, there is provided a computer program product comprising a computer program which, when executed by a processor, implements the method in any of the embodiments of the present disclosure.
According to another aspect of the disclosure, a cloud computing platform includes an electronic device provided by any one of the embodiments of the disclosure.
According to the disclosed technology, the first static element in an actual road image can be acquired and compared with the second static element at the corresponding position in the map to obtain a comparison result, so that the map can be modified, or other related operations can be executed, according to the comparison result in combination with the actual road conditions. This helps keep the map data up to date.
It should be understood that the description in this section is not intended to identify key or critical features of the embodiments of the disclosure, nor is it intended to be used to limit the scope of the disclosure. Other features of the present disclosure will become apparent from the following specification.
Drawings
The drawings are for a better understanding of the present solution and are not to be construed as limiting the present disclosure. Wherein:
FIG. 1 is a schematic diagram of a road image data processing method according to an embodiment of the present disclosure;
FIG. 2 is a schematic diagram of a road image data processing method according to another embodiment of the present disclosure;
FIG. 3 is a schematic diagram of a road image data processing method according to yet another embodiment of the present disclosure;
FIG. 4 is a schematic diagram of a road image data processing method according to an example of the present disclosure;
FIG. 5 is a schematic diagram of a road image data processing method according to another example of the present disclosure;
FIG. 6 is a schematic diagram of a road image data processing apparatus according to an embodiment of the present disclosure;
FIG. 7 is a schematic diagram of a road image data processing apparatus according to another embodiment of the present disclosure;
FIG. 8 is a schematic diagram of a road image data processing apparatus according to still another embodiment of the present disclosure;
FIG. 9 is a schematic diagram of a road image data processing apparatus according to still another embodiment of the present disclosure;
FIG. 10 is a schematic diagram of a road image data processing apparatus according to still another embodiment of the present disclosure;
FIG. 11 is a block diagram of an electronic device for implementing a road image data processing method of an embodiment of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below in conjunction with the accompanying drawings, which include various details of the embodiments of the present disclosure to facilitate understanding, and should be considered as merely exemplary. Accordingly, one of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
The embodiment of the disclosure first provides a road image data processing method, as shown in fig. 1, including:
step S11: detecting a first static element in the acquired road image;
step S12: determining a second static element corresponding to the first static element in the map according to the positioning information of the road image;
step S13: and comparing the first static element with the second static element to obtain a comparison result of the processed road image.
In this embodiment, the road image may be an image from a road video acquired in the field by a road-condition acquisition vehicle.
The first static element may be an element whose position is relatively fixed in the actual road environment and does not change over a short period of time, such as road guardrails, lane-dividing guardrails, road maintenance signs, traffic lights, green belts, street lamps, utility poles and the like.
In one particular implementation, a vehicle that appears in the road image and has been parked on the roadside without moving for a long period of time may also be considered a first static element.
In this embodiment, the first static element may also be a static element whose position in the road is fixed over a long time, such as roads, buildings, overpasses and the like.
The first static element may also be any element that appears in the road image and that can be displayed in the map.
Objects that appear in the road image, that may affect traffic smoothness, speed or direction, and whose positions remain stationary for a certain time may be considered first static elements; for example, lane lines, zebra crossings and other markings painted on the road surface may also be treated as first static elements.
In this embodiment, the static element recognition model may be trained to recognize various static elements that may exist in the road.
The appearance of static elements on roads may differ across regions. Therefore, in a specific implementation, different static element recognition models can be trained for different regions to recognize static elements of different styles.
For different types of static elements, different recognition approaches may be needed to improve recognition accuracy. Thus, in a specific implementation, different static element recognition models may be trained for different types of static elements, so that each type can be recognized with its own model.
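As a purely illustrative sketch of this idea, the snippet below selects a recognition model by region and element type; the registry, the file names and the `loader` callable are assumptions made for this example and are not part of the disclosure.

```python
# Hypothetical registry mapping (region, element type) to a trained model
# checkpoint; all keys and paths here are invented for illustration.
STATIC_ELEMENT_MODELS = {
    ("region_a", "traffic_sign"): "sign_model_region_a.pt",
    ("region_b", "traffic_sign"): "sign_model_region_b.pt",
    ("region_a", "lane_line"):    "lane_model_region_a.pt",
}

def load_recognition_model(region, element_type, loader):
    """Pick the static element recognition model trained for this region and
    element type; `loader` turns a checkpoint path into a usable model."""
    path = STATIC_ELEMENT_MODELS[(region, element_type)]
    return loader(path)
```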
The positioning information of the road image can be the positioning information of the acquisition device at the time the source data of the road image was captured.
The corresponding second static element in the map may be a map element whose position is relatively fixed over a long time; specifically, it may be a traffic-related static element in the high-precision map, located on the road or on either side of it.
In this embodiment, determining the corresponding second static element in the map according to the positioning information of the road image may mean taking, as the second static element, the map element that corresponds to the first static element in the road image and that appears in the map within the positioning range of the acquisition device when the road image was captured.
For example, if a long-term stationary vehicle appears in the road image and a vehicle icon is also present at the same location in the high-precision map, that vehicle icon can be determined to represent the second static element corresponding to the stationary vehicle in the road image; likewise, when a traffic sign appears in the road image and the same traffic sign icon exists at the same position in the high-precision map, that icon can be determined to represent the second static element corresponding to the traffic sign in the road image.
Determining the corresponding second static element in the map according to the positioning information of the road image may include, when no element corresponding to the first static element exists in the map, taking empty information, or information indicating that the second static element does not exist, as the second static element; it may also include taking another element at the corresponding location (a road, etc.) as the second static element. For example, if a building exists in the road image and is determined to be a first static element, but no building exists in the map, the traffic light at that location can be taken as the second static element.
Determining the corresponding second static element in the map according to the positioning information of the road image may further include taking two or more map elements within the positioning range of the first static element as the second static elements. For example, if a section of road exists in the road image and its positioning information covers the whole road section, then all static elements in the map within that coverage, such as signs, sidewalks, green belts and overpasses, can be used as the second static elements corresponding to the first static element.
In particular embodiments, the first static element and the corresponding second static element may not be exactly identical.
For example, if a traffic light in the road image is taken as the first static element and a traffic light icon with a different shape is present at the same position in the map, the differently shaped traffic light icon can still be determined as the second static element based on its position.
Comparing the first static element with the second static element to obtain a comparison result for the road image may specifically mean determining whether the first static element and the second static element are consistent, i.e., whether the map contains a second static element that fully corresponds to the first static element.
In this embodiment, the first static element in the actual road image can be acquired and compared with the second static element at the corresponding position in the map to obtain a comparison result, so that the map can be modified, or other related operations can be executed, according to the comparison result in combination with the actual road conditions. This helps keep the map data up to date.
In one embodiment, the first static element includes a regular-shaped first static element and an irregular-shaped first static element; detecting a first static element in the acquired road image, including:
determining a first static element with a regular shape in the road image by adopting a target detection network;
determining a first static element with an irregular shape in the road image by adopting a semantic segmentation network.
In this embodiment, the first static element with a regular shape may be a static element with a regular geometric shape, such as a rectangle, circle, triangle, rounded rectangle, rounded triangle, ellipse, sphere, partial sphere, cylinder, triangular pyramid, cone, cuboid or cube; the shape may be planar, solid, or a combination of planar and solid geometry.
In this embodiment, the target detection network may be any neural network or neural network model having a target detection function.
For example, the target detection network may be an end-to-end neural network that detects the target object by predicting the bounding box in which it is located. The target detection network in the embodiments of the present disclosure may specifically be a network such as YOLO (You Only Look Once) or SSD (Single Shot MultiBox Detector). The target detection network may also be a neural network based on region proposals, for example R-CNN (Region-based Convolutional Neural Network), SPP-net (Spatial Pyramid Pooling Network), Fast R-CNN, Faster R-CNN, R-FCN, and the like.
The first static element having an irregular shape may be an irregular planar geometry, an irregular solid geometry, or an irregular combination of an irregular planar geometry and a solid geometry.
In this embodiment, the semantic segmentation network may be any network capable of performing semantic segmentation of an image. Semantic segmentation assigns each pixel in the input image a semantic class to obtain a dense, pixel-wise classification. A typical semantic segmentation architecture can be regarded as an encoder-decoder network, where the encoder is usually a pre-trained classification network such as VGG (Visual Geometry Group network) or ResNet (Residual Network), and a decoder network is added on top of the encoder.
In the embodiments of the disclosure, detecting the first static element with a regular shape using the target detection network improves detection speed, while detecting the first static element with an irregular shape using the semantic segmentation network improves detection accuracy.
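The following sketch illustrates how the two networks could be combined; it is a minimal example in which `detector` and `segmenter` stand for already trained models whose output formats are assumed here (they are not specified by the disclosure).

```python
def detect_first_static_elements(road_image, detector, segmenter, score_thresh=0.5):
    """Collect first static elements from one road image.

    `detector` is assumed to return (box, label, score) triples for
    regular-shaped elements such as traffic lights and signs; `segmenter`
    is assumed to return a per-pixel class map (e.g. a NumPy array) for
    irregular elements such as lane lines. Both formats are assumptions.
    """
    elements = []

    # Regular-shaped elements via a target detection network (e.g. YOLO/SSD).
    for box, label, score in detector(road_image):
        if score >= score_thresh:
            elements.append({"type": label, "box": box, "source": "detection"})

    # Irregular-shaped elements via a semantic segmentation network.
    class_map = segmenter(road_image)              # H x W array of class ids
    for class_id in set(class_map.flatten().tolist()):
        if class_id == 0:                          # assume 0 means background
            continue
        elements.append({"type": class_id,
                         "mask": class_map == class_id,
                         "source": "segmentation"})
    return elements
```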
In one implementation, in a case where the road image includes a frame image in the road video, comparing the first static element and the second static element to obtain a comparison result, as shown in fig. 2, includes:
step S21: for at least two frame images, calculating the intersection ratio (intersection-over-union, IoU) of a first static element in one frame image and the corresponding first static element in the other frame images;
step S22: according to the intersection ratio, checking the association relation between the corresponding first static elements in different frame images;
step S23: determining a first static element after verification according to the association relation;
step S24: respectively determining the attribute information of the first static element after verification and the attribute information of the second static element;
step S25: and obtaining a comparison result by utilizing the difference between the attribute information of the first static element and the attribute information of the second static element.
The intersection ratio of the first static element in one frame image and the corresponding first static element in other frame images can be the intersection ratio of the same static element in two adjacent frame images; it can also be the intersection ratio of the same static element in two frame images separated by a short interval.
In this embodiment, the association relationship between corresponding first static elements in different frame images is checked according to the intersection ratio, in order to determine whether the first static elements are actually static or whether they were erroneously detected by the upstream network. For example, in one case a dynamic object is erroneously detected as a static object; the association check finds that the same object has a small intersection ratio across different image frames, and the object can then be re-classified as not being a first static element. In another case, a first static element that does not actually exist is erroneously detected in the road image; association verification against the other frame images shows that the object does not appear in them, so the false detection can be discarded.
In this embodiment, target tracking can thus be used to verify whether the identification of the first static element is correct.
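A minimal sketch of this verification step is shown below. The IoU computation itself is standard; the per-track thresholds (`min_iou`, `min_frames`) and the track data structure are illustrative assumptions, not values taken from the disclosure.

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def verify_static_elements(tracks, min_iou=0.5, min_frames=3):
    """Keep only elements whose boxes stay consistent across frames.

    `tracks` maps an element id to its list of boxes in consecutive frames.
    An element whose consecutive-frame IoU falls below `min_iou`, or which
    appears in fewer than `min_frames` frames, is discarded as a likely
    dynamic object or false detection.
    """
    verified = []
    for element_id, boxes in tracks.items():
        if len(boxes) < min_frames:
            continue
        if all(iou(a, b) >= min_iou for a, b in zip(boxes, boxes[1:])):
            verified.append(element_id)
    return verified
```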
In one embodiment, obtaining the comparison result by using the difference between the attribute information of the first static element and the attribute information of the second static element includes:
and under the condition that the difference is higher than a first threshold value, generating a comparison result indicating inconsistency according to the checked first static element and the second static element.
In this embodiment, the first threshold may be set to 0; that is, as soon as the attributes are not exactly the same, the first static element and the second static element are considered inconsistent.
In this embodiment, determining the consistency of the first static element and the corresponding second static element from the attribute information allows the static elements that are easy to compare to be filtered out quickly, improving the overall speed of the judgment.
In one embodiment, obtaining the comparison result by using the difference between the attribute information of the first static element and the attribute information of the second static element, as shown in fig. 3, includes:
step S31: under the condition that the difference is not higher than a first threshold value, acquiring a first coordinate corresponding to the checked first static element in the map;
step S32: calculating the intersection ratio of the verified first static element and the second static element according to the first coordinate and the second coordinate of the second static element;
step S33: and under the condition that the intersection ratio is smaller than a second threshold value, generating a comparison result according to the checked first static element and the checked second static element.
In this embodiment, the first threshold may be 0. If the difference is not higher than 0, there is no difference in attributes between the first static element and the corresponding second static element; in this case, if the intersection ratio is relatively large, the two are likely to be the same static object in the real road environment. If the first static element and the second static element have identical attributes but overlap only slightly, the corresponding static object in the road environment may have changed, for example in shape or position.
In one implementation, the second threshold may be 100%, i.e., the first static element and the corresponding second static element may be determined to be inconsistent if they do not coincide completely.
In this embodiment, further judging the first static element against the second static element allows inconsistencies between them to be screened out under a finer-grained condition.
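Putting the two stages together, the comparison could look like the sketch below. It reuses the `iou` helper from the previous sketch; the dictionary layout of the elements and the default thresholds (0 and 100%, as mentioned in the text) are assumptions of this example.

```python
def compare_elements(first, second, first_threshold=0, second_threshold=1.0):
    """Two-stage comparison of a verified first static element with its
    second static element: attribute difference first, then geometric IoU
    in map coordinates. Returns a small comparison-result record.
    """
    # Stage 1: attribute comparison (element type, color, shape class, ...).
    diff = sum(1 for key in first["attributes"]
               if first["attributes"][key] != second["attributes"].get(key))
    if diff > first_threshold:
        return {"consistent": False, "reason": "attribute mismatch"}

    # Stage 2: geometric comparison between the first element's map-frame box
    # and the second element's box, using the iou() helper defined earlier.
    if iou(first["box"], second["box"]) < second_threshold:
        return {"consistent": False, "reason": "geometry changed"}

    return {"consistent": True, "reason": None}
```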
In one embodiment, obtaining the first coordinates of the verified first static element in the map includes:
determining the world coordinates of the first static element after verification according to the position of the principal point of the first static element after verification in the image, the focal length of the camera of the image, and the rotation matrix and the translation matrix of the camera of the image in the world coordinate system;
and determining the first coordinate of the verified first static element in the map according to the world coordinate.
In this embodiment, the position of the principal point may be the intersection point of the optical axis and the image plane.
In other embodiments, the camera may be other image or video capturing devices, such as a video recorder or the like.
In this embodiment, the position of the first static element in the image is converted into its world coordinates in the real world, which facilitates the comparison between the first static element and the corresponding second static element.
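The back-projection itself can be sketched with a standard pinhole camera model. Because a single image point does not determine depth, the sketch below assumes a flat ground plane at a known world height; this assumption, and the convention X_cam = R·X_world + t, are ours and not stated in the disclosure.

```python
import numpy as np

def pixel_to_world(u, v, fx, fy, cx, cy, R, t, ground_z=0.0):
    """Back-project an image point onto an assumed world ground plane.

    (cx, cy) is the principal point, (fx, fy) the focal length in pixels,
    and R, t the camera rotation matrix and translation vector such that
    X_cam = R @ X_world + t.
    """
    # Viewing ray direction in camera coordinates.
    d_cam = np.array([(u - cx) / fx, (v - cy) / fy, 1.0])

    # X_world = R^T (s * d_cam - t); choose s so that the world z-coordinate
    # equals ground_z (the assumed height of the road surface).
    ray_world = R.T @ d_cam        # direction of the viewing ray in world frame
    cam_world = -R.T @ t           # camera center in world coordinates
    s = (ground_z - cam_world[2]) / ray_world[2]
    return cam_world + s * ray_world
```

The resulting world coordinates can then be mapped into the map's coordinate system to obtain the first coordinate used in the comparison.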
In one implementation, the road image data processing method further includes:
and changing the map according to the comparison result.
In this embodiment, changing the map according to the comparison result may mainly include correcting the map according to the comparison result, so that the data displayed in the map is consistent with the road image.
In this embodiment, the map is changed according to the comparison result, so that the map data can be updated according to the road image shot at any time, and the road image can be a picture shot by the automobile data recorder or a picture shot by a special acquisition vehicle. The map is changed through the road image, so that the data in the map can be updated in time under the condition that a static object in a real road environment changes.
In one embodiment, changing the map based on the comparison results includes:
deleting a second static element which does not exist in the road image and corresponds to the first static element from the map;
and adding the first static element in the road image without the corresponding second static element to the map.
In one implementation, assuming that there is no second static element B in the map that corresponds completely to the first static element a in the road image, the first static element a in the road image may be considered to be the most current, and the first static element a may be added to the map.
In one implementation, assuming that there is no first static element D in the road image that corresponds exactly to a second static element C in the map, it may be determined that the second static element C has been deleted in the real road environment, which may be deleted in the map.
In this embodiment, static elements in the map can be added or deleted according to the comparison result, so that the map data can be updated in time based on the collected road images and the accuracy of the map service provided to users can be improved.
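A minimal sketch of applying such changes is given below; the shape of the comparison results (an "action" field with "add" or "delete") is an assumption made for illustration, not a format defined by the disclosure.

```python
def update_map(map_elements, comparison_results):
    """Apply add/delete changes to the map.

    `map_elements` maps element ids to map records. Each comparison result
    is assumed to carry either "delete" (a second static element with no
    counterpart in the road image, plus its id) or "add" (a first static
    element with no counterpart in the map, plus the new record).
    """
    for result in comparison_results:
        if result["action"] == "delete":
            map_elements.pop(result["element_id"], None)
        elif result["action"] == "add":
            new_element = result["new_element"]
            map_elements[new_element["element_id"]] = new_element
    return map_elements
```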
The embodiments of the disclosure can also be applied to high-precision maps, in a cloud-based map updating scheme that supports all elements of the high-precision map, including updating static information such as traffic lights, lane lines and roads. Not only can all static elements in the static layer of the high-precision map be compared and changed as required, but static elements in the dynamic layer can be compared and changed as well. The embodiments can also be applied to L4 (Level 4) automated high-precision map updating projects. L4 corresponds to highly automated autonomous vehicles: at this level, within a specific driving mode, the system performs all dynamic driving tasks of the vehicle, even if the driver fails to respond to the system's intervention requests when a special situation occurs.
In one example of the present disclosure, a road image data processing method includes the steps as shown in fig. 4:
step S41: for road images, a target detection network and a semantic segmentation network are adopted to respectively identify label elements such as traffic lights and road surface elements such as lane lines.
In this example, signage elements such as traffic lights (these elements may be equivalent to the elements in other embodiments of the disclosure) have comparatively regular geometry and can be identified with the target detection network; road surface elements such as lane lines are difficult to enclose in regular boxes and can be identified through a semantic segmentation network. When training the semantic segmentation network, the road image can be learned as a mask, and the network can also be trained to recognize information such as the color of road surface elements and whether lines are solid or dashed.
Step S42: and carrying out target tracking on the identified elements on the road.
Specifically, tracking of the elements may be realized with a tracking-by-detection approach. In essence, tracking associates the same element across frame images at different time steps, and this association is obtained by calculating parameters such as the intersection ratio.
Step S43: project the elements in the high-precision map onto the image, determine the high-precision map elements corresponding to the elements in the image, and match the structural information according to the projection result.
Step S44: perform cross comparison between the projection result and the recognition result, and output the elements that have changed in reality according to the cross comparison result.
In the example, the global comparison of the elements in the map can be realized for the elements in the road image, and the accuracy of change identification is improved.
The method and device of the embodiments can support cloud-based updating of all high-precision map elements, quickly and efficiently discover changes of any map element, and achieve day-level updates using operational data. The number of takeovers in autonomous driving is reduced, the map data updating process becomes lightweight, crowdsourcing is supported, and a strong guarantee is provided for the deployment of autonomous driving.
In one example of the disclosure, as shown in fig. 5, the road image data processing method can process data collected by a road material collection vehicle, a vehicle data recorder and a common camera, and specifically performs the following steps:
step S51: road images acquired by a Camera (Camera) are acquired. The road image includes a plurality of first static elements.
Step S52: high-precision maps (HDMap, high Definition Map) were obtained. The high-precision map includes a plurality of second static elements.
Step S53: detection is performed on regular elements, such as traffic lights (Traffic Light), in the road image.
Step S54: irregular elements, such as lane semantic elements (Lane Semantic), in the road image are segmented (Segmentation).
Step S55: structure matching is performed based on the high-precision map (Structure Matching). Elements in the high-definition map may be projected to the road image.
Step S56: and carrying out target tracking on a plurality of frame images corresponding to the road image. Target tracking may be specifically performed based on the Tracking by Detection method.
Step S57: perform cross comparison between the matching result and the tracking result to obtain the cross difference (Cross Difference). Further, the difference 57 of regular elements such as traffic lights and the difference 58 of irregular elements such as lane lines can be obtained.
The embodiment of the disclosure also provides a road image data processing device, as shown in fig. 6, including:
a detection module 61 for detecting a first static element in the acquired road image;
a positioning information module 62, configured to determine a second static element corresponding to the first static element in the map according to positioning information of the road image;
the comparison module 63 is configured to compare the first static element with the second static element to obtain a comparison result of the processed road image.
In one embodiment, the first static element includes a regular-shaped first static element and an irregular-shaped first static element; as shown in fig. 7, the detection module includes:
a first unit 71 for determining a first static element of regular shape in the road image using the target detection network;
a second unit 72 for determining a first static element of irregular shape in the road image using the semantic segmentation network.
In one embodiment, in the case where the road image includes a frame image in a road video, as shown in fig. 8, the comparison module includes:
an intersection ratio unit 81, configured to calculate, for at least two frame images, an intersection ratio of a first static element in one frame image and a corresponding first static element in another frame image;
an association unit 82, configured to verify association relationships between corresponding first static elements in different frame images according to the intersection ratio;
a determining unit 83, configured to determine the first static element after verification according to the association relationship;
an attribute unit 84, configured to determine attribute information of the first static element after verification and attribute information of the second static element respectively;
and a result unit 85, configured to obtain a comparison result by using the difference between the attribute information of the first static element and the attribute information of the second static element.
In one embodiment, the result unit is further for:
and under the condition that the difference is higher than a first threshold value, generating a comparison result indicating inconsistency according to the checked first static element and the second static element.
In one embodiment, the result unit is further for:
under the condition that the difference is not higher than a first threshold value, acquiring a first coordinate corresponding to the checked first static element in the map;
calculating the cross ratio of the first static element and the second static element after verification according to the first coordinate and the second coordinate of the second static element;
and under the condition that the intersection ratio is smaller than a second threshold value, generating a comparison result according to the checked first static element and the checked second static element.
In one embodiment, the result unit is further for:
determining the world coordinates of the first static element after verification according to the position of the principal point of the first static element after verification in the image, the focal length of the camera of the image, and the rotation matrix and the translation matrix of the camera of the image in the world coordinate system;
and determining the first coordinate of the verified first static element in the map according to the world coordinate.
In one embodiment, as shown in fig. 9, the road image data processing apparatus further includes:
and a changing module 91, configured to change the map according to the comparison result.
In one embodiment, as shown in FIG. 10, the modification module includes:
a deleting unit 101, configured to delete, from the map, a second static element in which a corresponding first static element does not exist in the road image;
an adding unit 102, configured to add a first static element in the road image, where the corresponding second static element does not exist, to the map.
The functions of each unit, module or sub-module in each apparatus of the embodiments of the present disclosure may be referred to the corresponding descriptions in the above method embodiments, which are not repeated herein.
The embodiment of the disclosure also provides a cloud computing platform, which comprises the electronic device provided by any one of the embodiments of the disclosure.
In one embodiment, the cloud computing platform performs processing in the cloud, including image and video processing, data computation, map updating and the like, and may also be referred to as a central system, a central server, a cloud control platform, a map server, a map platform, etc.
Specifically, the electronic device provided by the embodiment of the disclosure may include:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of the embodiments of the present disclosure.
According to another aspect of the present disclosure, there is provided a non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform the method of any of the embodiments of the present disclosure.
The embodiment of the disclosure can be applied to the technical fields of computer vision, automatic driving, intelligent transportation and the like.
According to embodiments of the present disclosure, the present disclosure also provides an electronic device, a readable storage medium and a computer program product.
FIG. 11 illustrates a schematic block diagram of an example electronic device 110 that may be used to implement embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital assistants, cellular telephones, smartphones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 11, the electronic device 110 includes a computing unit 111 that can perform various appropriate actions and processes according to a computer program stored in a Read Only Memory (ROM) 112 or a computer program loaded from a storage unit 118 into a Random Access Memory (RAM) 113. In the RAM 113, various programs and data required for the operation of the electronic device 110 may also be stored. The computing unit 111, the ROM 112, and the RAM 113 are connected to each other through a bus 114. An input/output (I/O) interface 115 is also connected to bus 114.
Various components in the electronic device 110 are connected to the I/O interface 115, including: an input unit 116 such as a keyboard, a mouse, etc.; an output unit 117 such as various types of displays, speakers, and the like; a storage unit 118 such as a magnetic disk, an optical disk, or the like; and a communication unit 119 such as a network card, a modem, a wireless communication transceiver, and the like. The communication unit 119 allows the electronic device 110 to exchange information/data with other devices via a computer network, such as the internet, and/or various telecommunication networks.
The computing unit 111 may be a variety of general and/or special purpose processing components having processing and computing capabilities. Some examples of computing unit 111 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, etc. The calculation unit 111 performs the respective methods and processes described above, for example, a road image data processing method. For example, in some embodiments, the road image data processing method may be implemented as a computer software program tangibly embodied on a machine-readable medium, such as the storage unit 118. In some embodiments, part or all of the computer program may be loaded and/or installed onto the electronic device 110 via the ROM 112 and/or the communication unit 119. When the computer program is loaded into the RAM 113 and executed by the computing unit 111, one or more steps of the road image data processing method described above may be performed. Alternatively, in other embodiments, the computing unit 111 may be configured to perform the road image data processing method by any other suitable means (e.g. by means of firmware).
Various implementations of the systems and techniques described above may be implemented in digital electronic circuitry, integrated circuit systems, field programmable gate arrays (FPGAs), application specific integrated circuits (ASICs), application specific standard products (ASSPs), systems on chip (SOCs), complex programmable logic devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs that may be executed and/or interpreted on a programmable system including at least one programmable processor, which may be a special-purpose or general-purpose programmable processor that can receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for carrying out methods of the present disclosure may be written in any combination of one or more programming languages. These program code may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus such that the program code, when executed by the processor or controller, causes the functions/operations specified in the flowchart and/or block diagram to be implemented. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and pointing device (e.g., a mouse or trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local area networks (LANs), wide area networks (WANs), and the internet.
The computer system may include a client and a server. The client and server are typically remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
It should be appreciated that various forms of the flows shown above may be used to reorder, add, or delete steps. For example, the steps recited in the present disclosure may be performed in parallel, sequentially, or in a different order, provided that the desired results of the disclosed aspects are achieved, and are not limited herein.
The above detailed description should not be taken as limiting the scope of the present disclosure. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations and alternatives are possible, depending on design requirements and other factors. Any modifications, equivalent substitutions and improvements made within the spirit and principles of the present disclosure are intended to be included within the scope of the present disclosure.

Claims (15)

1. A road image data processing method, comprising:
detecting a first static element in the acquired road image;
determining a second static element corresponding to the first static element in a map according to the positioning information of the road image;
comparing the first static element with the second static element to obtain a comparison result of processing the road image;
wherein, in the case that the road image comprises a frame image in a road video, comparing the first static element with the second static element to obtain the comparison result comprises:
for at least two frame images, calculating the intersection ratio of a first static element in one frame image and a corresponding first static element in other frame images;
according to the intersection ratio, verifying the association relation between the corresponding first static elements in different frame images;
determining a first static element after verification according to the association relation;
respectively determining attribute information of the first static element after verification and attribute information of the second static element;
and under the condition that the difference between the attribute information of the first static element and the attribute information of the second static element is higher than a first threshold value, generating the comparison result according to the checked first static element and second static element.
2. The method of claim 1, wherein the first static element comprises a regularly shaped first static element and an irregularly shaped first static element; the detecting the first static element in the acquired road image comprises the following steps:
determining a first static element with a regular shape in the road image by adopting a target detection network;
and determining a first static element with irregular shape in the road image by adopting a semantic segmentation network.
3. The method of claim 1, wherein obtaining a comparison result using a difference between the attribute information of the first static element and the attribute information of the second static element, comprises:
acquiring a first coordinate corresponding to the checked first static element in the map under the condition that the difference is not higher than a first threshold value;
calculating the intersection ratio of the verified first static element and the second static element according to the first coordinate and the second coordinate of the second static element;
and under the condition that the intersection ratio is smaller than a second threshold value, generating the comparison result according to the checked first static element and second static element.
4. A method according to claim 3, wherein said obtaining the first coordinates of the verified first static element in the map comprises:
determining world coordinates of the verified first static element according to the position of a principal point of the verified first static element in the image, the focal length of the camera of the image, and a rotation matrix and a translation matrix of the camera of the image in a world coordinate system;
and determining the first coordinate of the verified first static element in the map according to the world coordinate.
5. The method of any of claims 1-4, further comprising:
and changing the map according to the comparison result.
6. The method of claim 4, wherein said modifying the map based on the comparison result comprises:
deleting a second static element, for which no corresponding first static element exists in the road image, from the map;
and adding the first static element in the road image without the corresponding second static element to the map.
7. A road image data processing apparatus comprising:
the detection module is used for detecting a first static element in the acquired road image;
the positioning information module is used for determining a corresponding second static element in the map according to the positioning information of the road image;
the comparison module is used for comparing the first static element with the second static element to obtain a comparison result of processing the road image;
wherein, in the case that the road image includes a frame image in a road video, the comparison module includes:
the intersection ratio unit is used for calculating, for at least two frame images, the intersection ratio of the first static element in one frame image and the corresponding first static element in other frame images;
the association unit is used for checking the association relations between corresponding first static elements in different frame images according to the intersection ratio;
the determining unit is used for determining the first static element after verification according to the association relation;
the attribute unit is used for respectively determining the attribute information of the first static element after verification and the attribute information of the second static element;
the result unit is used for obtaining a comparison result by utilizing the difference between the attribute information of the first static element and the attribute information of the second static element;
wherein the result unit is further configured to:
and under the condition that the difference is higher than a first threshold value, generating a comparison result according to the checked first static element and the checked second static element.
8. The apparatus of claim 7, wherein the first static element comprises a regularly shaped first static element and an irregularly shaped first static element; the detection module comprises:
a first unit for determining a first static element of regular shape in the road image using an object detection network;
and the second unit is used for determining a first static element with irregular shape in the road image by adopting a semantic segmentation network.
9. The apparatus of claim 7, wherein the results unit is further to:
acquiring a first coordinate corresponding to the checked first static element in the map under the condition that the difference is not higher than a first threshold value;
calculating the intersection ratio of the verified first static element and the second static element according to the first coordinate and the second coordinate of the second static element;
and under the condition that the intersection ratio is smaller than a second threshold value, generating the comparison result according to the checked first static element and second static element.
10. The apparatus of claim 7, wherein the results unit is further to:
determining world coordinates of the verified first static element according to the position of a principal point of the verified first static element in the image, the focal length of the camera of the image, and a rotation matrix and a translation matrix of the camera of the image in a world coordinate system;
and determining the first coordinate of the verified first static element in the map according to the world coordinate.
11. The apparatus of any of claims 7-10, further comprising:
and the changing module is used for changing the map according to the comparison result.
12. The apparatus of claim 11, wherein the modification module comprises:
a deleting unit configured to delete, from the map, a second static element in which a corresponding first static element does not exist in the road image;
and the adding unit is used for adding the first static element in the road image without the corresponding second static element to the map.
13. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-6.
14. A non-transitory computer readable storage medium storing computer instructions for causing a computer to perform the method of any one of claims 1-6.
15. A cloud computing platform comprising the electronic device of claim 13.
CN202111030322.7A 2021-09-03 2021-09-03 Road image data processing method and device, electronic equipment and cloud computing platform Active CN113742440B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111030322.7A CN113742440B (en) 2021-09-03 2021-09-03 Road image data processing method and device, electronic equipment and cloud computing platform

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111030322.7A CN113742440B (en) 2021-09-03 2021-09-03 Road image data processing method and device, electronic equipment and cloud computing platform

Publications (2)

Publication Number Publication Date
CN113742440A (en) 2021-12-03
CN113742440B (en) 2023-09-26

Family

ID=78735185

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111030322.7A Active CN113742440B (en) 2021-09-03 2021-09-03 Road image data processing method and device, electronic equipment and cloud computing platform

Country Status (1)

Country Link
CN (1) CN113742440B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115114312B (en) * 2022-07-15 2023-06-27 北京百度网讯科技有限公司 Map data updating method and device and electronic equipment
CN115294552A (en) * 2022-08-08 2022-11-04 腾讯科技(深圳)有限公司 Rod-shaped object identification method, device, equipment and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111324616A (en) * 2020-02-07 2020-06-23 北京百度网讯科技有限公司 Method, device and equipment for detecting lane line change information
WO2020139355A1 (en) * 2018-12-27 2020-07-02 Didi Research America, Llc System for automated lane marking
CN112132853A (en) * 2020-11-30 2020-12-25 湖北亿咖通科技有限公司 Method and device for constructing ground guide arrow, electronic equipment and storage medium
CN112183440A (en) * 2020-10-13 2021-01-05 北京百度网讯科技有限公司 Road information processing method and device, electronic equipment and storage medium

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020139355A1 (en) * 2018-12-27 2020-07-02 Didi Research America, Llc System for automated lane marking
CN111324616A (en) * 2020-02-07 2020-06-23 北京百度网讯科技有限公司 Method, device and equipment for detecting lane line change information
CN112183440A (en) * 2020-10-13 2021-01-05 北京百度网讯科技有限公司 Road information processing method and device, electronic equipment and storage medium
CN112132853A (en) * 2020-11-30 2020-12-25 湖北亿咖通科技有限公司 Method and device for constructing ground guide arrow, electronic equipment and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Ruan Hang; Wang Lichun. Vehicle detection and classification based on feature maps. Computer Technology and Development, 2018, (11), full text. *

Also Published As

Publication number Publication date
CN113742440A (en) 2021-12-03

Similar Documents

Publication Publication Date Title
KR102447352B1 (en) Method and device for traffic light detection and intelligent driving, vehicle, and electronic device
CN113989450B (en) Image processing method, device, electronic equipment and medium
US20200082561A1 (en) Mapping objects detected in images to geographic positions
CN112668460A (en) Target detection method, electronic equipment, road side equipment and cloud control platform
US9104919B2 (en) Multi-cue object association
CN113742440B (en) Road image data processing method and device, electronic equipment and cloud computing platform
CN113674287A (en) High-precision map drawing method, device, equipment and storage medium
CN111787489B (en) Method, device and equipment for determining position of practical interest point and readable storage medium
CN114037966A (en) High-precision map feature extraction method, device, medium and electronic equipment
CN110675635A (en) Method and device for acquiring external parameters of camera, electronic equipment and storage medium
CN112668428A (en) Vehicle lane change detection method, roadside device, cloud control platform and program product
CN115719436A (en) Model training method, target detection method, device, equipment and storage medium
CN113989777A (en) Method, device and equipment for identifying speed limit sign and lane position of high-precision map
CN114186007A (en) High-precision map generation method and device, electronic equipment and storage medium
CN113011298B (en) Truncated object sample generation, target detection method, road side equipment and cloud control platform
CN111950345A (en) Camera identification method and device, electronic equipment and storage medium
CN116659524A (en) Vehicle positioning method, device, equipment and storage medium
CN114445312A (en) Map data fusion method and device, electronic equipment and storage medium
CN115410173B (en) Multi-mode fused high-precision map element identification method, device, equipment and medium
CN115578432B (en) Image processing method, device, electronic equipment and storage medium
CN114724113B (en) Road sign recognition method, automatic driving method, device and equipment
CN113450794B (en) Navigation broadcasting detection method and device, electronic equipment and medium
CN113514053B (en) Method and device for generating sample image pair and method for updating high-precision map
CN112784175B (en) Method, device, equipment and storage medium for processing interest point data
CN116245730A (en) Image stitching method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant