CN116363400A - Vehicle matching method and device, electronic equipment and storage medium - Google Patents

Vehicle matching method and device, electronic equipment and storage medium

Info

Publication number
CN116363400A
CN116363400A (Application CN202310343923.6A)
Authority
CN
China
Prior art keywords
vehicle
image
matched
vehicle image
determining
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310343923.6A
Other languages
Chinese (zh)
Inventor
段由
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Apollo Intelligent Connectivity Beijing Technology Co Ltd
Original Assignee
Apollo Intelligent Connectivity Beijing Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Apollo Intelligent Connectivity Beijing Technology Co Ltd filed Critical Apollo Intelligent Connectivity Beijing Technology Co Ltd
Priority to CN202310343923.6A priority Critical patent/CN116363400A/en
Publication of CN116363400A publication Critical patent/CN116363400A/en
Pending legal-status Critical Current

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06V — IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 — Arrangements for image or video recognition or understanding
    • G06V10/70 — Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 — Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75 — Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/20 — Image preprocessing
    • G06V10/26 — Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/273 — Removing elements interfering with the pattern to be recognised

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Databases & Information Systems (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Traffic Control Systems (AREA)

Abstract

The disclosure provides a vehicle matching method and apparatus, an electronic device, and a storage medium, relating to the technical field of intelligent parking and in particular to the technical fields of image processing, deep learning, and the like. The specific implementation scheme is as follows: acquire a reference vehicle image and a vehicle image to be matched; perform instance segmentation on the reference vehicle image and remove interfering objects from it according to the segmentation result; perform instance segmentation on the vehicle image to be matched and remove interfering objects from it according to the segmentation result; and determine the vehicle matching degree from the reference vehicle image and the vehicle image to be matched, each with interfering objects removed. The method and apparatus can reduce the influence of interfering objects on vehicle matching and improve the accuracy of vehicle matching.

Description

Vehicle matching method and device, electronic equipment and storage medium
Technical Field
The disclosure relates to the technical field of intelligent parking, in particular to the technical fields of image processing, deep learning, and the like, and specifically to a vehicle matching method and apparatus, an electronic device, a storage medium, and a computer program product.
Background
In high-level video parking scenarios, surveillance cameras are typically mounted on street-light poles or dedicated poles at a height of about 3 to 6 meters above the ground. A surveillance camera recognizes vehicles entering and exiting parking spaces, and the fee is calculated according to the parking duration; charging and management of roadside vehicles is achieved through a charging system consisting of charging terminal software, a cloud platform, and a payment platform.
Disclosure of Invention
The present disclosure provides a vehicle matching method, apparatus, electronic device, storage medium, and computer program product.
According to an aspect of the present disclosure, there is provided a vehicle matching method including:
acquiring a reference vehicle image and a vehicle image to be matched;
performing instance segmentation on the reference vehicle image, and removing interference objects in the reference vehicle image according to an instance segmentation result;
performing instance segmentation on the vehicle image to be matched, and removing interference objects in the vehicle image to be matched according to an instance segmentation result;
and determining the matching degree of the vehicle according to the reference vehicle image from which the interference object is removed and the vehicle image to be matched from which the interference object is removed.
According to another aspect of the present disclosure, there is provided a vehicle matching apparatus including:
the image acquisition module is used for acquiring a reference vehicle image and a vehicle image to be matched;
the first segmentation module is used for carrying out instance segmentation on the reference vehicle image and removing interference objects in the reference vehicle image according to an instance segmentation result;
the second segmentation module is used for carrying out instance segmentation on the vehicle image to be matched and removing interference objects in the vehicle image to be matched according to an instance segmentation result;
and the matching module is used for determining the matching degree of the vehicle according to the reference vehicle image with the interference object removed and the vehicle image to be matched with the interference object removed.
According to another aspect of the present disclosure, there is provided an electronic device including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the vehicle matching method of any embodiment of the present disclosure.
According to another aspect of the present disclosure, there is provided a non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform the vehicle matching method according to any embodiment of the present disclosure.
According to another aspect of the present disclosure, there is provided a computer program product comprising a computer program which, when executed by a processor, implements the vehicle matching method of any embodiment of the present disclosure.
According to the technology of the present disclosure, interfering objects in vehicle images are removed using image segmentation, reducing the influence of interfering objects on vehicle matching and improving the accuracy of vehicle matching.
It should be understood that the description in this section is not intended to identify key or critical features of the embodiments of the disclosure, nor is it intended to be used to limit the scope of the disclosure. Other features of the present disclosure will become apparent from the following specification.
Drawings
The drawings are for a better understanding of the present solution and are not to be construed as limiting the present disclosure. Wherein:
FIG. 1 is a flow chart of a vehicle matching method according to an embodiment of the present disclosure;
FIG. 2 is a flow chart of another vehicle matching method according to an embodiment of the present disclosure;
FIG. 3 is a flow chart of another vehicle matching method according to an embodiment of the present disclosure;
FIG. 4 is a schematic diagram of a vehicle matching apparatus according to an embodiment of the present disclosure;
FIG. 5 is a block diagram of an electronic device used to implement a vehicle matching method of an embodiment of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below in conjunction with the accompanying drawings, which include various details of the embodiments of the present disclosure to facilitate understanding, and should be considered as merely exemplary. Accordingly, one of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
Fig. 1 is a flow chart of a vehicle matching method according to an embodiment of the present disclosure. The embodiment can be applied to a high-level video parking scenario. The method may be performed by a vehicle matching apparatus implemented in software and/or hardware, preferably configured in an electronic device such as a computer device or server. As shown in fig. 1, the method specifically includes the following steps:
s101, acquiring a reference vehicle image and a vehicle image to be matched.
In a high-level video parking scenario, the entire process of a vehicle from entering a parking space to exiting it needs to be tracked by a pre-installed image acquisition device. Sometimes the vehicle is lost from tracking due to factors such as occlusion by obstacles (e.g., other vehicles or pedestrians) or movement of the image acquisition device. In such use cases, vehicle matching may be performed according to steps S101-S104 of the present disclosure to recover the lost vehicle. For ease of understanding, the matching method is described below in connection with this usage scenario, which should not be construed as limiting the method.
In this embodiment, the vehicle image to be matched and the reference vehicle image are acquired by an image acquisition device in the high-level video parking lot. For example, if it is determined that the target vehicle on the target parking space has been lost, an image of the target parking space and the target vehicle needs to be acquired as the reference vehicle image. Optionally, a first time when the target vehicle drove into the target parking space and a second time when the target vehicle was lost are determined first; at least one image of the target vehicle acquired by the image acquisition device between the first time and the second time is obtained; and from these, the best-quality image (e.g., an image without occlusion or background interference) or the last acquired image of the target vehicle is selected as the reference vehicle image. Similarly, if the target vehicle is lost after entering or leaving the parking space, an image acquired before the loss can be used as the reference vehicle image. The vehicle image to be matched is optionally the first vehicle image acquired by the image acquisition device after the target vehicle was lost; this image may contain obstacles occluding the target vehicle, background interference, and the like.
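The reference-image selection described above (choose the best-quality, or on a tie the most recent, frame captured between the first and second times) can be sketched as follows. This is an illustrative sketch only; the function name, the tuple layout, and the quality score are assumptions, not part of the disclosure:

```python
def pick_reference_image(captures, t_enter, t_lost):
    """captures: iterable of (timestamp, quality_score, image) tuples,
    with timestamps comparable values (e.g. datetime or float seconds).

    Keep only frames captured between vehicle entry (first time) and the
    moment the vehicle was lost (second time), then pick the highest-quality
    frame, breaking quality ties in favour of the most recent capture.
    Returns None when no frame falls inside the window.
    """
    window = [c for c in captures if t_enter <= c[0] <= t_lost]
    if not window:
        return None
    # max over (quality, timestamp): best quality first, latest frame on ties
    return max(window, key=lambda c: (c[1], c[0]))
```

A "last acquired image" policy, also mentioned in the disclosure, would instead use `key=lambda c: c[0]`.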
After the reference vehicle image and the vehicle image to be matched are acquired, it must be considered that obstacles, background interference, and the like may be present in both images and would seriously affect the accuracy of vehicle matching. Therefore, an image segmentation technique is introduced to remove interfering objects (e.g., obstacles or background) that may be present in the reference vehicle image and the vehicle image to be matched. For the specific process, refer to steps S102 and S103; these two steps have no fixed execution order and may be performed concurrently. It should be noted that the scheme of the present disclosure creatively introduces image segmentation into the vehicle matching scenario, so that interference from obstacles or background can be eliminated, the accuracy of vehicle matching is improved, the lost vehicle can then be quickly recovered, and the accuracy of parking charging is ensured.
S102, performing instance segmentation on the reference vehicle image, and removing interference objects in the reference vehicle image according to an instance segmentation result.
In an alternative embodiment, the reference vehicle image is preprocessed, where the preprocessing includes resizing the reference vehicle image; for example, the reference vehicle image is resized to 224 x 224 pixels. The target size may be determined by the instance segmentation algorithm employed: different algorithms may require the reference vehicle image to be resized to different pixel sizes. Instance segmentation is then performed based on a preset instance segmentation algorithm. Optionally, the preset algorithm may be a detect-then-segment algorithm, such as Mask R-CNN, or a method that performs detection and segmentation simultaneously as parallel tasks, such as YOLACT. The reference vehicle image is segmented by the preset instance segmentation algorithm to obtain an instance segmentation result, which comprises instance objects and their segmentation attributes; the segmentation attributes may include the category of each instance object (e.g., vehicle, pedestrian, or other obstacle), its target box, and its polygon parameters. The target vehicle is determined from the instance segmentation result, and all instance objects other than the target vehicle, together with the background of the reference vehicle image, are treated as interfering objects and removed. Optionally, gray-out processing is applied to the region of the reference vehicle image outside the target vehicle, for example by setting the RGB values of pixels outside the target vehicle to 127.5. The interfering objects in the reference vehicle image are thus removed, avoiding their influence on the subsequent vehicle matching.
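The gray-out removal of interfering objects can be sketched as follows, assuming the instance segmentation step has already produced a boolean mask of the target vehicle; the function name and array layout are illustrative assumptions, not part of the disclosure:

```python
import numpy as np

def remove_interference(image: np.ndarray, vehicle_mask: np.ndarray) -> np.ndarray:
    """Gray out every pixel outside the target-vehicle mask.

    image: H x W x 3 array of RGB values; vehicle_mask: H x W boolean
    array marking the target vehicle, as derived from the instance
    segmentation result. Pixels outside the mask are set to the neutral
    gray value 127.5 used in the disclosure.
    """
    cleaned = image.astype(np.float32).copy()
    cleaned[~vehicle_mask] = 127.5  # interfering objects and background
    return cleaned
```

The same sketch applies unchanged to step S103, with the mask of the vehicle object to be matched in place of the target-vehicle mask.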
S103, performing instance segmentation on the vehicle image to be matched, and removing interference objects in the vehicle image to be matched according to an instance segmentation result.
In an alternative embodiment, the vehicle image to be matched is preprocessed, where the preprocessing includes resizing the vehicle image to be matched; for example, the image is resized to 224 x 224 pixels. It should be noted that the target size may be determined by the instance segmentation algorithm employed: different algorithms may require the vehicle image to be matched to be resized to different pixel sizes. Instance segmentation is then performed based on a preset instance segmentation algorithm. Optionally, the preset algorithm may be a detect-then-segment algorithm, such as Mask R-CNN, or a method that performs detection and segmentation simultaneously as parallel tasks, such as YOLACT. The vehicle image to be matched is segmented by the preset instance segmentation algorithm to obtain an instance segmentation result, which comprises instance objects and their segmentation attributes; the segmentation attributes may include the category of each instance object (e.g., vehicle, pedestrian, or other obstacle), its target box, and its polygon parameters. The vehicle object to be matched is determined from the instance segmentation result, and all instance objects other than the vehicle object to be matched, together with the background of the vehicle image to be matched, are treated as interfering objects and removed. Optionally, gray-out processing is applied to the region of the vehicle image to be matched outside the vehicle object, for example by setting the RGB values of pixels outside the vehicle object to 127.5.
Therefore, the interference objects in the vehicle image to be matched are removed, and the influence of the subsequent interference objects on the vehicle matching is avoided.
S104, determining the vehicle matching degree according to the reference vehicle image with the interference object removed and the vehicle image to be matched with the interference object removed.
Optionally, the vehicle matching degree may be determined by a deep learning method based on the reference vehicle image with the interfering objects removed and the vehicle image to be matched with the interfering objects removed. In an alternative embodiment, a matching network (for example, a ResNet-34 network) may be predetermined. The reference vehicle image with interference removed is input into the matching network for feature extraction, yielding a corresponding image feature vector (for example, a 128-dimensional vector); the vehicle image to be matched with interference removed is likewise input into the matching network, yielding its corresponding image feature vector (for example, a 128-dimensional vector). The distance between the two image feature vectors is then calculated, for example the Euclidean distance or the Mahalanobis distance, and the obtained distance value is taken as the vehicle matching degree. From the vehicle matching degree it can then be determined whether the vehicle object in the vehicle image to be matched is the lost target vehicle.
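The distance computation above can be sketched as follows for the Euclidean case. The L2 normalisation and the function name are illustrative assumptions; the disclosure specifies only that the distance between the two feature vectors is taken as the vehicle matching degree:

```python
import numpy as np

def matching_degree(feat_ref: np.ndarray, feat_cand: np.ndarray) -> float:
    """Euclidean distance between two feature vectors (e.g. the 128-d
    outputs of a matching network such as ResNet-34); smaller values
    indicate a closer match. Vectors are L2-normalised first so the
    distance is insensitive to feature magnitude (an assumption here,
    not stated in the disclosure)."""
    f1 = feat_ref / np.linalg.norm(feat_ref)
    f2 = feat_cand / np.linalg.norm(feat_cand)
    return float(np.linalg.norm(f1 - f2))
```

A Mahalanobis distance, also mentioned in the disclosure, would additionally require a feature covariance estimate.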
In this embodiment, image segmentation is creatively introduced into the vehicle matching scenario: interfering objects in the images can be eliminated based on the segmentation, so that the accuracy of vehicle matching is ensured. The lost vehicle can then be quickly recovered according to the vehicle matching result, avoiding subsequent parking-charging errors.
Fig. 2 is a flow chart of another vehicle matching method according to an embodiment of the present disclosure, where the process of determining the degree of matching of a vehicle is further optimized based on the above embodiment. As shown in fig. 2, the method specifically includes the following steps:
s201, acquiring a reference vehicle image and a vehicle image to be matched.
S202, performing instance segmentation on the reference vehicle image, and removing interference objects in the reference vehicle image according to an instance segmentation result.
S203, performing instance segmentation on the vehicle image to be matched, and removing interference objects in the vehicle image to be matched according to an instance segmentation result.
The specific implementation process of steps S201 to S203 may be referred to the description of the above embodiment, and will not be repeated here.
In the present embodiment, the process of determining the degree of matching of the vehicle according to the reference vehicle image from which the interference object is removed and the vehicle image to be matched from which the interference object is removed may be referred to as S204-S205.
S204, determining a vehicle intersection area between the reference vehicle image with the interference object removed and the vehicle image to be matched with the interference object removed.
In this embodiment, in order to make the calculated vehicle matching degree more accurate, the idea of computing the matching degree from features at the same position on the vehicle is proposed. Thus, after the reference vehicle image and the vehicle image to be matched (each with interfering objects removed) are obtained, the vehicle intersection region of the two images can be determined. Optionally, the vehicle contour in the reference vehicle image is intersected with the vehicle contour in the image to be matched to obtain the vehicle intersection region. Further, the parts of the reference vehicle image and the vehicle image to be matched lying outside the vehicle intersection region are removed; for example, gray-out processing is applied to the partial images outside the vehicle intersection region, e.g. by setting the RGB values of pixels outside the intersection region in both images to 127.5, yielding for each image the vehicle region image lying inside the vehicle intersection region. The vehicle matching degree may then be calculated according to step S205.
In another alternative embodiment, since the reference vehicle image and the vehicle image to be matched (each with interference removed) have the same pixel size, each pair of co-located pixels in the two images can be compared to determine whether either pixel has RGB value 127.5; if so, the RGB values of both pixels are set to 127.5. The vehicle image finally remaining in each of the two images is then the vehicle region image, and the area it occupies is the vehicle intersection region.
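The pixel-comparison construction of the vehicle intersection region can be sketched as follows; the function name and the returned boolean mask are illustrative assumptions, not part of the disclosure:

```python
import numpy as np

GRAY = 127.5  # value marking removed (interference) pixels

def intersect_vehicle_regions(img_a: np.ndarray, img_b: np.ndarray):
    """Given two same-sized H x W x 3 images whose interference has been
    grayed out to 127.5, gray out in BOTH images every pixel that is gray
    in either; what remains non-gray in each image is its vehicle region
    image, and the area it occupies is the vehicle intersection region."""
    gray_a = np.all(img_a == GRAY, axis=-1)   # H x W: gray in image A
    gray_b = np.all(img_b == GRAY, axis=-1)   # H x W: gray in image B
    outside = gray_a | gray_b                 # outside the intersection
    out_a, out_b = img_a.copy(), img_b.copy()
    out_a[outside] = GRAY
    out_b[outside] = GRAY
    return out_a, out_b, ~outside             # mask of the intersection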
S205, determining the vehicle matching degree according to the vehicle area images in the vehicle intersection areas in the reference vehicle image and the vehicle image to be matched.
Optionally, a first image feature vector is determined from the vehicle region image within the vehicle intersection region of the reference vehicle image; for example, this vehicle region image is input into a preset matching network (for example, a ResNet-34 network) and the first image feature vector is determined from the network's output. A second image feature vector is determined from the vehicle region image within the vehicle intersection region of the vehicle image to be matched, for example by inputting it into the preset matching network and taking the second image feature vector from the network's output. Illustratively, the first and second image feature vectors are 128-dimensional vectors. The vehicle matching degree is determined from the distance between the first and second image feature vectors; for example, the Euclidean distance or the Mahalanobis distance between the two vectors is used as the vehicle matching degree. It should be noted that the vehicle matching degree can thus be obtained quickly and accurately through a small amount of vector calculation.
In the embodiment, in a vehicle matching scene, besides introducing an image segmentation algorithm to eliminate the influence of an interference object on vehicle matching, the vehicle matching is performed by determining the vehicle intersection region and utilizing the vehicle region image of the vehicle intersection region, so that the purpose of performing vehicle matching calculation according to the characteristics of the same position of the vehicle is realized, and the accuracy of the vehicle matching degree can be further improved.
Fig. 3 is a flow chart of another vehicle matching method according to an embodiment of the present disclosure, which is further optimized on the basis of the above-described embodiments. As shown in fig. 3, the method specifically includes the following steps:
s301, acquiring a reference vehicle image and a vehicle image to be matched.
S302, performing instance segmentation on the reference vehicle image, and removing interference objects in the reference vehicle image according to an instance segmentation result.
S303, performing instance segmentation on the vehicle image to be matched, and removing interference objects in the vehicle image to be matched according to an instance segmentation result.
S304, determining a vehicle intersection area between the reference vehicle image with the interference object removed and the vehicle image to be matched with the interference object removed.
The specific implementation process of steps S301 to S304 may be referred to the description of the above embodiment, and will not be repeated here.
S305, determining the area occupation ratio of the vehicle region image in the vehicle intersection region in the reference vehicle image and the vehicle image to be matched respectively.
And S306, if the area ratio is smaller than a preset threshold value, determining that the vehicle matching degree between the reference vehicle image and the vehicle image to be matched is zero.
In this embodiment, the size of the vehicle intersection region reflects how severely the vehicle is occluded: if the intersection region is too small, the vehicle is severely occluded. If the vehicle is severely occluded, too little of it is visible, and the vehicle image within the intersection region cannot reflect the basic characteristics of the vehicle; the matching degree of the two images therefore cannot be judged and may directly be taken as zero. To determine whether the vehicle is severely occluded, the area ratio of the vehicle region image within the vehicle intersection region is determined for the reference vehicle image and the vehicle image to be matched respectively; if an area ratio is smaller than a preset threshold (for example, 90%, or another value set according to actual needs and not limited here), the vehicle matching degree between the reference vehicle image and the vehicle image to be matched is determined to be zero.
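The area-ratio check of steps S305-S306 can be sketched as follows, with pixel counts as inputs; the function name and the 0.9 default are illustrative assumptions (the disclosure leaves the threshold to actual needs):

```python
def occlusion_check(intersection_pixels: int,
                    vehicle_pixels_ref: int,
                    vehicle_pixels_cand: int,
                    threshold: float = 0.9) -> bool:
    """Return True when the vehicle intersection region covers at least
    `threshold` of the vehicle region in BOTH the reference image and the
    image to be matched; otherwise the vehicle is considered severely
    occluded and the matching degree is taken as zero (step S306)."""
    ratio_ref = intersection_pixels / vehicle_pixels_ref
    ratio_cand = intersection_pixels / vehicle_pixels_cand
    return min(ratio_ref, ratio_cand) >= threshold
```

Only when this check passes does the feature-extraction matching of step S205 need to run.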
In this embodiment, if the vehicle intersection region between the reference vehicle image and the vehicle image to be matched (each with interfering objects removed) is determined to be too small, the vehicle matching degree is directly taken as zero and no subsequent matching-degree calculation is required. This handles the case of severe vehicle occlusion and avoids matching errors arising from such special conditions.
Fig. 4 is a schematic structural diagram of a vehicle matching apparatus according to an embodiment of the present disclosure; the embodiment is applicable to recovering a lost vehicle through vehicle matching in a high-level video parking scenario. The apparatus can implement the vehicle matching method of any embodiment of the present disclosure. As shown in fig. 4, the apparatus 400 specifically includes:
an image acquisition module 401, configured to acquire a reference vehicle image and a vehicle image to be matched;
a first segmentation module 402, configured to perform an instance segmentation on the reference vehicle image, and remove an interference object in the reference vehicle image according to an instance segmentation result;
a second segmentation module 403, configured to perform instance segmentation on the vehicle image to be matched, and remove a disturbing object in the vehicle image to be matched according to an instance segmentation result;
and the matching module 404 is used for determining the matching degree of the vehicle according to the reference vehicle image with the interference object removed and the vehicle image to be matched with the interference object removed.
Optionally, in some embodiments, the matching module includes:
an intersection area determining unit configured to determine a vehicle intersection area between the reference vehicle image from which the disturbance object is removed and the vehicle image to be matched from which the disturbance object is removed;
and the matching unit is used for determining the matching degree of the vehicle according to the vehicle area images which are respectively positioned in the vehicle intersection areas in the reference vehicle image and the vehicle image to be matched.
Optionally, in some embodiments, the method further comprises:
the duty ratio determining module is used for determining the area duty ratio of the vehicle area image in the vehicle intersection area in the reference vehicle image and the vehicle image to be matched respectively;
and the matching degree judging module is used for determining that the vehicle matching degree between the reference vehicle image and the vehicle image to be matched is zero if the area occupation ratio is smaller than a preset threshold value.
Optionally, in some embodiments, the matching unit is further configured to:
determining a first image feature vector according to a vehicle region image in the vehicle intersection region in the reference vehicle image;
determining a second image feature vector according to the vehicle region image in the vehicle intersection region in the vehicle image to be matched;
and determining the matching degree of the vehicle according to the distance between the first image feature vector and the second image feature vector.
Optionally, in some embodiments, the second segmentation module is further configured to:
preprocessing the vehicle image to be matched, and carrying out instance segmentation based on a preset instance segmentation algorithm to obtain an instance segmentation result;
determining a vehicle object to be matched according to the instance segmentation result;
and performing gray-scale processing on the regions of the vehicle image to be matched other than the vehicle object, so as to remove the interference objects.
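The interference-removal step above — graying out everything outside the segmented vehicle object — can be sketched as below. The boolean `vehicle_mask` is assumed to come from an instance-segmentation model (the disclosure names no specific algorithm), and the flat gray value of 128 is an illustrative choice.

```python
import numpy as np

def remove_interference(img_rgb, vehicle_mask, gray_value=128):
    # Replace every pixel outside the segmented vehicle object with a flat
    # gray, so background interference cannot influence the later matching.
    out = img_rgb.copy()
    out[~vehicle_mask] = gray_value  # gray out non-vehicle pixels
    return out
```

The vehicle pixels themselves are left untouched; only the surrounding regions are neutralized.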
Optionally, in some embodiments, the vehicle image to be matched and the reference vehicle image are acquired by an image acquisition device in an overhead video parking lot.
The above product can execute the method provided by any embodiment of the present disclosure, and has the functional modules and beneficial effects corresponding to executing the method.
In the technical solutions of the present disclosure, the collection, storage, use, processing, transmission, provision, disclosure, and other handling of users' personal information all comply with the provisions of relevant laws and regulations and do not violate public order and good customs.
According to embodiments of the present disclosure, the present disclosure also provides an electronic device, a readable storage medium and a computer program product.
Fig. 5 illustrates a schematic block diagram of an example electronic device 500 that may be used to implement embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital assistants, cellular telephones, smartphones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions are meant to be exemplary only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 5, the apparatus 500 includes a computing unit 501 that can perform various suitable actions and processes according to a computer program stored in a Read Only Memory (ROM) 502 or a computer program loaded from a storage unit 508 into a Random Access Memory (RAM) 503. In the RAM 503, various programs and data required for the operation of the device 500 can also be stored. The computing unit 501, ROM 502, and RAM 503 are connected to each other by a bus 504. An input/output (I/O) interface 505 is also connected to bus 504.
Various components in the device 500 are connected to the I/O interface 505, including: an input unit 506 such as a keyboard, a mouse, etc.; an output unit 507 such as various types of displays, speakers, and the like; a storage unit 508 such as a magnetic disk, an optical disk, or the like; and a communication unit 509 such as a network card, modem, wireless communication transceiver, etc. The communication unit 509 allows the device 500 to exchange information/data with other devices via a computer network such as the internet and/or various telecommunication networks.
The computing unit 501 may be a variety of general and/or special purpose processing components having processing and computing capabilities. Some examples of computing unit 501 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, etc. The computing unit 501 performs the respective methods and processes described above, such as a vehicle matching method. For example, in some embodiments, the vehicle matching method may be implemented as a computer software program tangibly embodied on a machine-readable medium, such as storage unit 508. In some embodiments, part or all of the computer program may be loaded and/or installed onto the device 500 via the ROM 502 and/or the communication unit 509. When the computer program is loaded into RAM 503 and executed by computing unit 501, one or more steps of the vehicle matching method described above may be performed. Alternatively, in other embodiments, the computing unit 501 may be configured to perform the vehicle matching method by any other suitable means (e.g. by means of firmware).
Various implementations of the systems and techniques described above can be realized in digital electronic circuitry, integrated circuit systems, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems on Chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include being implemented in one or more computer programs that can be executed and/or interpreted on a programmable system including at least one programmable processor, which may be a special-purpose or general-purpose programmable processor and can receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for carrying out methods of the present disclosure may be written in any combination of one or more programming languages. These program code may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus such that the program code, when executed by the processor or controller, causes the functions/operations specified in the flowchart and/or block diagram to be implemented. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and pointing device (e.g., a mouse or trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: Local Area Networks (LANs), Wide Area Networks (WANs), blockchain networks, and the Internet.
The computer system may include a client and a server. The client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, also known as a cloud computing server or cloud host; it is a host product in the cloud computing service system that overcomes the defects of difficult management and weak service scalability found in traditional physical hosts and VPS (Virtual Private Server) services. The server may also be a server of a distributed system, or a server combined with a blockchain.
Artificial intelligence is the discipline that studies how to make computers simulate certain human thought processes and intelligent behaviors (such as learning, reasoning, thinking, and planning), and it involves both hardware-level and software-level technologies. Artificial intelligence hardware technologies generally include sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, big data processing, and the like; artificial intelligence software technologies mainly include computer vision, speech recognition, natural language processing, machine learning/deep learning, big data processing, and knowledge graph technologies.
Cloud computing refers to a technical system in which an elastically scalable pool of shared physical or virtual resources is accessed over a network, where the resources may include servers, operating systems, networks, software, applications, storage devices, and the like, and may be deployed and managed in an on-demand, self-service manner. Cloud computing technology can provide efficient and powerful data processing capabilities for technical applications and model training in fields such as artificial intelligence and blockchain.
It should be appreciated that steps may be reordered, added, or deleted using the various forms of flow shown above. For example, the steps recited in the present disclosure may be performed in parallel, sequentially, or in a different order, as long as the desired results of the technical solutions provided by the present disclosure can be achieved; no limitation is imposed herein.
The above detailed description should not be taken as limiting the scope of the present disclosure. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations and alternatives are possible, depending on design requirements and other factors. Any modifications, equivalent substitutions and improvements made within the spirit and principles of the present disclosure are intended to be included within the scope of the present disclosure.

Claims (15)

1. A vehicle matching method, comprising:
acquiring a reference vehicle image and a vehicle image to be matched;
performing instance segmentation on the reference vehicle image, and removing interference objects in the reference vehicle image according to an instance segmentation result;
performing instance segmentation on the vehicle image to be matched, and removing interference objects in the vehicle image to be matched according to an instance segmentation result;
and determining the matching degree of the vehicle according to the reference vehicle image from which the interference object is removed and the vehicle image to be matched from which the interference object is removed.
2. The method of claim 1, wherein the determining the degree of vehicle matching from the reference vehicle image with the interference object removed and the vehicle image to be matched with the interference object removed comprises:
determining a vehicle intersection region between the reference vehicle image from which the interference object is removed and the vehicle image to be matched from which the interference object is removed;
and determining the vehicle matching degree according to the vehicle region images respectively in the vehicle intersection region in the reference vehicle image and the vehicle image to be matched.
3. The method of claim 2, further comprising:
determining the area ratio occupied by the vehicle region image within the vehicle intersection region in the reference vehicle image and in the vehicle image to be matched, respectively;
and if the area ratio is smaller than a preset threshold value, determining that the vehicle matching degree between the reference vehicle image and the vehicle image to be matched is zero.
4. The method of claim 2, wherein determining a vehicle matching degree from vehicle region images respectively within the vehicle intersection region in the reference vehicle image and the vehicle image to be matched comprises:
determining a first image feature vector according to a vehicle region image in the vehicle intersection region in the reference vehicle image;
determining a second image feature vector according to the vehicle region image in the vehicle intersection region in the vehicle image to be matched;
and determining the matching degree of the vehicle according to the distance between the first image feature vector and the second image feature vector.
5. The method of claim 1, wherein performing instance segmentation on the vehicle image to be matched, and removing the interference object in the vehicle image to be matched according to an instance segmentation result, comprises:
preprocessing the vehicle image to be matched, and carrying out instance segmentation based on a preset instance segmentation algorithm to obtain an instance segmentation result;
determining a vehicle object to be matched according to the instance segmentation result;
and performing gray-scale processing on the regions of the vehicle image to be matched other than the vehicle object, so as to remove the interference objects.
6. The method of any of claims 1-5, wherein the vehicle image to be matched and the reference vehicle image are acquired by an image acquisition device in an overhead video parking lot.
7. A vehicle matching device comprising:
the image acquisition module is used for acquiring a reference vehicle image and a vehicle image to be matched;
the first segmentation module is used for carrying out instance segmentation on the reference vehicle image and removing interference objects in the reference vehicle image according to an instance segmentation result;
the second segmentation module is used for carrying out instance segmentation on the vehicle image to be matched and removing interference objects in the vehicle image to be matched according to an instance segmentation result;
and the matching module is used for determining the matching degree of the vehicle according to the reference vehicle image with the interference object removed and the vehicle image to be matched with the interference object removed.
8. The apparatus of claim 7, wherein the matching module comprises:
an intersection area determining unit configured to determine a vehicle intersection area between the reference vehicle image from which the disturbance object is removed and the vehicle image to be matched from which the disturbance object is removed;
and the matching unit is used for determining the matching degree of the vehicle according to the vehicle area images which are respectively positioned in the vehicle intersection areas in the reference vehicle image and the vehicle image to be matched.
9. The apparatus of claim 8, further comprising:
the area ratio determining module is used for determining the area ratio occupied by the vehicle region image within the vehicle intersection region in the reference vehicle image and in the vehicle image to be matched, respectively;
and the matching degree judging module is used for determining that the vehicle matching degree between the reference vehicle image and the vehicle image to be matched is zero if the area ratio is smaller than a preset threshold value.
10. The apparatus of claim 8, wherein the matching unit is further to:
determining a first image feature vector according to a vehicle region image in the vehicle intersection region in the reference vehicle image;
determining a second image feature vector according to the vehicle region image in the vehicle intersection region in the vehicle image to be matched;
and determining the matching degree of the vehicle according to the distance between the first image feature vector and the second image feature vector.
11. The apparatus of claim 7, wherein the second segmentation module is further to:
preprocessing the vehicle image to be matched, and carrying out instance segmentation based on a preset instance segmentation algorithm to obtain an instance segmentation result;
determining a vehicle object to be matched according to the instance segmentation result;
and performing gray-scale processing on the regions of the vehicle image to be matched other than the vehicle object, so as to remove the interference objects.
12. The apparatus of any of claims 7-11, wherein the vehicle image to be matched and the reference vehicle image are acquired by an image acquisition device in an overhead video parking lot.
13. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the vehicle matching method of any one of claims 1-6.
14. A non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform the vehicle matching method according to any one of claims 1-6.
15. A computer program product comprising a computer program which, when executed by a processor, implements the vehicle matching method according to any one of claims 1-6.
CN202310343923.6A 2023-03-31 2023-03-31 Vehicle matching method and device, electronic equipment and storage medium Pending CN116363400A (en)

Publications (1)

Publication Number Publication Date
CN116363400A CN 2023-06-30

Family

ID=86937673



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination