CN113887574A - Object information determination method and device, storage medium and road side equipment - Google Patents


Info

Publication number
CN113887574A
Authority
CN
China
Prior art keywords
target; image; determining; pixel set; pixel
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111065810.1A
Other languages
Chinese (zh)
Inventor
师小凯
唐俊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Elite Road Technology Co ltd
Original Assignee
Beijing Elite Road Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Elite Road Technology Co ltd filed Critical Beijing Elite Road Technology Co ltd
Priority to CN202111065810.1A priority Critical patent/CN113887574A/en
Publication of CN113887574A publication Critical patent/CN113887574A/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/22 Matching criteria, e.g. proximity measures
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/11 Region-based segmentation
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20112 Image segmentation details
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30248 Vehicle exterior or interior
    • G06T2207/30252 Vehicle exterior; Vicinity of vehicle
    • G06T2207/30264 Parking

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Traffic Control Systems (AREA)

Abstract

The disclosure provides an object information determination method and device, a storage medium, and roadside equipment, relating to the technical fields of computer technology and image processing, and in particular to automatic driving, autonomous parking, the Internet of Things, and intelligent transportation. The specific implementation scheme is as follows: determining, according to a target image including a target object, a first pixel set in the target image used for representing the target object, wherein the target image further includes at least one second pixel set, each second pixel set representing one piece of object identification information; determining a target second pixel set according to the degree of coincidence between the first pixel set and each second pixel set; and determining the object identification information represented by the target second pixel set as the identification information of the target object.

Description

Object information determination method and device, storage medium and road side equipment
Technical Field
The present disclosure relates to the fields of computer technology and image processing technology, in particular to automatic driving, autonomous parking, the Internet of Things, and intelligent transportation, and more particularly to a method and an apparatus for determining object information, a storage medium, and roadside equipment.
Background
An object and its object identifier have a corresponding relationship. Identification information is a number registered for each object; its main role is that, given a piece of identification information, the corresponding object can be singled out from a plurality of objects.
Disclosure of Invention
The disclosure provides an object information determination method, an object information determination device, a storage medium and a road side device.
According to an aspect of the present disclosure, there is provided an object information determining method including: determining a first pixel set used for representing a target object in a target image according to the target image comprising the target object, wherein the target image further comprises at least one second pixel set, and each second pixel set is used for representing identification information; determining a target second pixel set according to the coincidence ratio between the first pixel set and each second pixel set; and determining target identification information characterized by the target second pixel set as identification information of the target object.
According to another aspect of the present disclosure, there is provided an object information determination apparatus including: a first determining module, configured to determine, according to a target image including a target object, a first pixel set in the target image for characterizing the target object, where the target image further includes at least one second pixel set, and each second pixel set is used for characterizing identification information; a second determining module, configured to determine a target second pixel set according to a coincidence ratio between the first pixel set and each of the second pixel sets; and a third determining module, configured to determine target identification information represented by the target second pixel set as identification information of the target object.
According to another aspect of the present disclosure, there is provided an electronic device including: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the object information determination method as described above.
According to another aspect of the present disclosure, there is provided a non-transitory computer-readable storage medium storing computer instructions for causing the computer to execute the object information determining method as described above.
According to another aspect of the present disclosure, there is provided a computer program product comprising a computer program which, when executed by a processor, implements the object information determination method as described above.
According to another aspect of the present disclosure, there is provided a roadside apparatus including the above-described electronic apparatus.
It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present disclosure, nor do they limit the scope of the present disclosure. Other features of the present disclosure will become apparent from the following description.
Drawings
The drawings are included to provide a better understanding of the present solution and are not to be construed as limiting the present disclosure. Wherein:
fig. 1 schematically illustrates an exemplary system architecture to which the object information determination method and apparatus may be applied, according to an embodiment of the present disclosure;
fig. 2 schematically shows a flow chart of an object information determination method according to an embodiment of the present disclosure;
FIG. 3 schematically shows a schematic diagram of acquiring a target image according to an embodiment of the present disclosure;
FIG. 4 schematically illustrates a schematic diagram of cropping a first image to a second image, according to an embodiment of the disclosure;
FIG. 5 schematically illustrates a schematic diagram of determining a target object according to one embodiment of the present disclosure;
FIG. 6 schematically illustrates a schematic diagram of determining a target object according to another embodiment of the present disclosure;
FIG. 7 schematically illustrates an example schematic diagram of determining identification information of a target object, in accordance with an embodiment of the disclosure;
fig. 8 schematically shows a block diagram of an object information determination apparatus according to an embodiment of the present disclosure; and
FIG. 9 illustrates a schematic block diagram of an example electronic device that can be used to implement embodiments of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below with reference to the accompanying drawings, in which various details of the embodiments of the disclosure are included to assist understanding, and which are to be considered as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
In the technical solution of the present disclosure, the collection, storage, use, processing, transmission, provision, disclosure, and other handling of the object information involved all comply with the relevant laws and regulations, necessary security measures are taken, and public order and good customs are not violated.
It should be noted that, for convenience of description, the following examples describe the embodiments of the present disclosure with an example scenario of determining a license plate of a vehicle. Those skilled in the art can understand that the technical solution of the embodiment of the present disclosure can be applied to any other scenario in which identification information of an object needs to be determined.
With social and economic development, the number of vehicles in use keeps increasing, and problems such as parking difficulty and the effective management of parking spaces become more and more serious. Against this background, many intelligent parking systems have emerged, among which video-based capturing devices have received particular attention.
Many intelligent parking systems on the market use video to capture vehicles entering and leaving parking spaces on both sides of a road. Managing parking spaces this way has the following advantages: labor investment can be reduced, which saves the operating cost of the parking system, and the video-based parking evidence is simple, clear, and widely accepted by vehicle owners.
In the course of conceiving the present disclosure, the inventors found that video-based intelligent parking systems are limited by 2D image processing and cannot accurately determine the relationship between a vehicle and its license plate. When a license plate is selected, the plate nearest to the center of the parking space, or a plate inside the target frame, is chosen; if no plate is visible on the current vehicle, the plate of a vehicle in a neighboring space or of a background vehicle is easily taken as the plate for the parking event. For example, if a front vehicle and a rear vehicle are close together and the front vehicle's plate is blocked, the rear vehicle's plate may be treated as the front vehicle's plate. Vehicle A is then the one parked, but vehicle B's plate is recorded, causing wrong charging (in particular, overcharging the neighboring vehicle) and reducing the user experience of the parking system.
Fig. 1 schematically illustrates an exemplary system architecture to which the object information determination method and apparatus may be applied according to an embodiment of the present disclosure.
It should be noted that fig. 1 is only an example of a system architecture to which the embodiments of the present disclosure may be applied to help those skilled in the art understand the technical content of the present disclosure, and does not mean that the embodiments of the present disclosure may not be applied to other devices, systems, environments or scenarios. For example, in another embodiment, an exemplary system architecture to which the object information determination method and apparatus may be applied may include a terminal device, but the terminal device may implement the object information determination method and apparatus provided in the embodiments of the present disclosure without interacting with a server.
As shown in fig. 1, the system architecture 100 according to this embodiment may include terminal devices 101, 102, 103, a network 104 and a server 105. The network 104 serves as a medium for providing communication links between the terminal devices 101, 102, 103 and the server 105. Network 104 may include various connection types, such as wired and/or wireless communication links, and so forth.
The user may use the terminal devices 101, 102, 103 to interact with the server 105 via the network 104 to receive or send messages or the like. The terminal devices 101, 102, 103 may have installed thereon various communication client applications, such as a knowledge reading application, a web browser application, a search application, an instant messaging tool, a mailbox client, and/or social platform software, etc. (by way of example only).
The terminal devices 101, 102, 103 may be various electronic devices having a display screen and supporting web browsing, including but not limited to smart phones, tablet computers, laptop portable computers, desktop computers, and the like.
The server 105 may be a server providing various services, such as a background management server (for example only) that supports content browsed by users on the terminal devices 101, 102, 103. The background management server may analyze and otherwise process received data such as user requests, and feed back processing results (e.g., web pages, information, or data obtained or generated according to the user request) to the terminal device. The server may be a cloud server, also called a cloud computing server or cloud host, a host product in a cloud computing service system that remedies the defects of high management difficulty and weak service extensibility found in traditional physical hosts and VPS ("Virtual Private Server") services. The server may also be a server of a distributed system, or a server incorporating a blockchain.
It should be noted that the object information determination method provided by the embodiment of the present disclosure may be generally executed by the terminal device 101, 102, or 103. Accordingly, the object information determination apparatus provided in the embodiments of the present disclosure may also be disposed in the terminal device 101, 102, or 103.
Alternatively, the object information determination method provided by the embodiment of the present disclosure may also be generally executed by the server 105. Accordingly, the object information determination apparatus provided by the embodiments of the present disclosure may be generally disposed in the server 105. The object information determination method provided by the embodiment of the present disclosure may also be executed by a server or a server cluster that is different from the server 105 and is capable of communicating with the terminal devices 101, 102, 103 and/or the server 105. Accordingly, the object information determination apparatus provided in the embodiment of the present disclosure may also be disposed in a server or a server cluster different from the server 105 and capable of communicating with the terminal devices 101, 102, 103 and/or the server 105.
For example, when identification information of a target object needs to be determined, the terminal device 101, 102, or 103 may determine, according to a target image including the target object, a first pixel set in the target image used for representing the target object, where the target image further includes at least one second pixel set, each second pixel set representing one piece of identification information. Then, a target second pixel set is determined based on the degree of coincidence between the first pixel set and each second pixel set, and the target identification information represented by the target second pixel set is determined as the identification information of the target object. Alternatively, a server or server cluster capable of communicating with the terminal devices 101, 102, 103 and/or the server 105 may analyze the target image and finally obtain the identification information of the target object.
It should be understood that the number of terminal devices, networks, and servers in fig. 1 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.
Fig. 2 schematically shows a flowchart of an object information determination method according to an embodiment of the present disclosure.
As shown in fig. 2, the method may include operations S210 to S230.
In operation S210, a first pixel set used for representing a target object is determined in a target image according to the target image including the target object, the target image further including at least one second pixel set, each second pixel set representing one piece of identification information.
In operation S220, a target second pixel set is determined according to a degree of coincidence between the first pixel set and each second pixel set.
In operation S230, target identification information characterized by the target second set of pixels is determined as identification information of the target object.
According to an embodiment of the present disclosure, the target object may include a vehicle, and the identification information may include license plate information. The target image may be determined from at least one of a video frame including the vehicle collected by a video capture device, an image including the vehicle collected by an image capture device, and the like. The video capture device and the image capture device include, but are not limited to, at least one of a bullet (fixed) camera, a dome (PTZ) camera, and the like. When the identification information is a license plate, the license plate in the target image may be obtained through a license plate recognition algorithm.
According to an embodiment of the present disclosure, the first set of pixels and the second set of pixels may each comprise a plurality of pixels, e.g., each pixel may have at least one of a pixel chrominance value and a pixel luminance value, etc., and a corresponding pixel position. The degree of coincidence may be determined based on whether the pixel chrominance value and the pixel luminance value of each pixel in the first set of pixels are the same as the pixel chrominance value and the pixel luminance value of each pixel at the same pixel location in the second set of pixels. For example, for a pixel in the first set of pixels and a pixel in the second set of pixels having the same pixel position information, if the pixel chrominance values and the pixel luminance values of the two pixels are also the same, it may be determined that the two pixels coincide with each other. For a pixel in the first set of pixels and a pixel in the second set of pixels having the same pixel location information, at least one of a pixel chrominance value and a pixel luminance value of the two is different, it may be determined that the two pixels are not coincident. By determining a degree of coincidence of each pixel of the first set of pixels and each pixel of the second set of pixels, a degree of coincidence between the first set of pixels and the second set of pixels may be determined.
According to an embodiment of the disclosure, the first pixel set may represent a vehicle in the target image, and a second pixel set may represent a license plate in the target image. In this case, based on the positional relationship between a license plate and a vehicle (the plate is mounted on the vehicle), the plate corresponding to each vehicle may be determined from the degree of coincidence between the second pixel set and the first pixel set. For a target vehicle, the degrees of coincidence between the first pixel set of the target vehicle and the second pixel sets of all license plates in the target image are computed; the second pixel set with the highest degree of coincidence, or whose degree of coincidence exceeds a preset threshold, is taken as the target second pixel set, and the license plate corresponding to that set is determined to be the license plate of the target vehicle.
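The plate-to-vehicle association described above can be sketched as follows. This is an illustrative reading of the disclosure, not the patented implementation: pixel sets are modeled simply as sets of (row, col) positions, and all function names and the 0.5 default threshold are assumptions.

```python
def coincidence_ratio(first_pixels, second_pixels):
    """Fraction of the second pixel set (license plate) that lies inside
    the first pixel set (vehicle)."""
    if not second_pixels:
        return 0.0
    return len(first_pixels & second_pixels) / len(second_pixels)

def pick_target_plate(vehicle_pixels, plate_pixel_sets, threshold=0.5):
    """Return (index, ratio) of the plate pixel set with the highest
    coincidence ratio, or (None, best_ratio) if no plate clears the threshold."""
    best_idx, best_ratio = None, 0.0
    for i, plate in enumerate(plate_pixel_sets):
        r = coincidence_ratio(vehicle_pixels, plate)
        if r > best_ratio:
            best_idx, best_ratio = i, r
    if best_ratio >= threshold:
        return best_idx, best_ratio
    return None, best_ratio
```

A plate fully contained in the vehicle's pixel set scores 1.0; a plate belonging to a background vehicle, whose pixels fall outside the set, scores near 0 and is rejected, which is exactly the failure case (neighboring or background plate) the disclosure aims to avoid.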
Through the embodiments of the present disclosure, an object and identification information can be associated based on the degree of coincidence between the first pixel set representing the object and a second pixel set representing identification information in the same target image. The identification information of the target object can thus be determined more accurately, effectively alleviating the low accuracy of determining identification information from a bare target-detection result and improving the accuracy of the object information determination result.
Fig. 3 schematically shows a schematic diagram of acquiring a target image according to an embodiment of the present disclosure.
In the example shown in FIG. 3, parking area 310 includes eight parking spaces 311-318. The bullet-dome linkage unit 320 includes a bullet camera 321 and a dome camera 322 for capturing vehicles entering and leaving the parking spaces. The bullet camera 321 may monitor the state of every space in the parking area 310 in real time, and the dome camera 322 may zoom in to capture the details of a particular space. When the bullet camera 321 detects that the state of a space changes, for example a vehicle entering or leaving space 311, it may send the dome camera 322 an instruction to inspect space 311 at close range, moving the dome camera 322 to collect a close-range image of space 311.
According to the embodiment of the present disclosure, the target image may be determined from at least one of a close-up image collected by the dome camera 322, a video frame from a real-time video stream collected by the dome camera 322, and the like.
According to the embodiment of the disclosure, in order to detect the details of each parking space more clearly and completely and obtain a more accurate target image, the bullet-dome linkage unit 320 may serve as detection point No. 1. Additional detection points (No. 2, No. 3, and so on) may be set up for the same parking area 310 as required.
Methods according to embodiments of the present disclosure are further described below in conjunction with specific embodiments.
According to an embodiment of the present disclosure, the target object may include a vehicle, and the identification information may include license plate information. It should be noted that the vehicle and license plate are only exemplary embodiments; other objects and identification information are also possible, and no limitation is imposed here.
According to an embodiment of the present disclosure, the method for determining the target image may include: a first region associated with the target object is determined. A first image centered on the first region is acquired. A target image is determined from the first image.
According to embodiments of the present disclosure, during initialization the dome camera may be configured so that a target requiring attention is centered within its detectable range. Therefore, when collecting a close-range image of the target object, the dome camera may first determine a first region related to the target object and then acquire a first image centered on that region, so that the target object lies at the center of the acquired close-range image. The target image may be this first image.
According to an embodiment of the present disclosure, the target object may be a target vehicle, and the first region may be the parking space where the target vehicle is to stay. For example, after the bullet camera locates the target vehicle, the parking space associated with the target vehicle may be determined; the dome camera may then capture a close-range image including the target vehicle centered on that parking space, so that the center of the resulting image is the target vehicle.
By the embodiment of the disclosure, the target image comprising the target object can be efficiently obtained, the target image can be analyzed, and the identification information of the target object can be determined.
According to an embodiment of the present disclosure, determining the target image from the first image includes: and cutting the first image to obtain a second image. The second image comprises a first region, and the ratio of the area of the first region to the area of the second image is greater than a preset threshold. The second image is taken as the target image.
According to the embodiment of the disclosure, for the close-range image collected by the dome camera, an image containing only the first region, or in which most of the content is the first region, may be cut out by cropping and used as the second image. The target image may be this second image.
According to an embodiment of the present disclosure, the preset threshold may be 60%. A person skilled in the art may set the preset threshold according to the ratio of the area corresponding to the maximum extent of the first region to the area of the first image, or define it otherwise; the disclosed embodiments are not limited in this respect.
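The area-ratio condition on the cropped second image can be sketched as a simple predicate. This is a minimal illustration; the function name, the (width, height) tuple convention, and the 60% default are assumptions, not part of the disclosure:

```python
def crop_satisfies_ratio(region_wh, image_wh, threshold=0.60):
    """True if area(first region) / area(second image) exceeds the preset
    threshold, i.e. the crop is dominated by the region of interest."""
    region_w, region_h = region_wh
    image_w, image_h = image_wh
    return (region_w * region_h) / (image_w * image_h) > threshold
```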
According to the embodiment of the disclosure, after the dome camera collects a close-range image centered on the target vehicle, the image may be cropped to obtain an image including only the target vehicle as the target image.
Through the above-described embodiment of the present disclosure, cropping the first image to obtain an image including the first region as the target image removes most regions irrelevant to the target object, thereby reducing the amount of calculation in the object information determination process.
According to an embodiment of the present disclosure, at least one first object is included in the first image. Cropping the first image to obtain the second image includes the following steps: performing target detection on the first image to obtain a target detection frame for each of the at least one first object; acquiring, from the at least one target detection frame, the target frame closest to the center of the first image; and cropping the first image according to the second region determined by the target frame to obtain the second image.
According to an embodiment of the present disclosure, object detection predicts the category of an object and its position in an input image; the position may be represented in the form of an object detection box.
Fig. 4 schematically shows a schematic diagram of cropping a first image resulting in a second image according to an embodiment of the disclosure.
As shown in fig. 4, the first image 410 is a close-up image captured by the dome camera. The first objects in the first image 410 are three vehicles 411, 412, and 413. Performing object detection on the first image 410 yields object detection frames 414, 415, and 416 corresponding to vehicles 411, 412, and 413, respectively. From the first image 410 and the detection frames 414, 415, and 416, the frame closest to the center of the first image 410 may be determined to be target detection frame 415, and from frame 415 the currently detected target vehicle may be determined to be vehicle 412. Cropping the first image 410 according to target detection frame 415 may yield, for example, the second image 420, which may include the complete target vehicle 412.
Cropping the first image according to the target detection frame may be implemented by cropping along the range defined by the frame, or by first expanding the frame outward by a certain margin and then cropping; this is not limited herein.
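The center-selection and crop steps above can be sketched as follows. This is an illustrative sketch, not the patented implementation; boxes are assumed to be (x1, y1, x2, y2) tuples, and all function names and the margin parameter are assumptions:

```python
def box_center(box):
    """Center point of an (x1, y1, x2, y2) box."""
    x1, y1, x2, y2 = box
    return ((x1 + x2) / 2.0, (y1 + y2) / 2.0)

def closest_to_center(boxes, image_wh):
    """Index of the detection box whose center is nearest the image center."""
    cx, cy = image_wh[0] / 2.0, image_wh[1] / 2.0
    return min(range(len(boxes)),
               key=lambda i: (box_center(boxes[i])[0] - cx) ** 2
                           + (box_center(boxes[i])[1] - cy) ** 2)

def crop_rect(box, image_wh, margin=0):
    """Crop rectangle for the chosen box, optionally expanded outward by
    `margin` pixels and clamped to the image bounds."""
    x1, y1, x2, y2 = box
    w, h = image_wh
    return (max(0, x1 - margin), max(0, y1 - margin),
            min(w, x2 + margin), min(h, y2 + margin))
```

Expanding by a small margin before cropping helps keep the target vehicle complete in the second image, as the text notes.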
Through this embodiment of the present disclosure, cropping the first image according to the target detection frame effectively improves the completeness of the target object in the cropped second image. In addition, since regions irrelevant to the target object are removed, the amount of calculation in the object information determination process is effectively reduced.
According to an embodiment of the present disclosure, the second image includes at least one second object. A second object may be a portion of a first object truncated from the first image. The process of determining the target object may be expressed as: performing instance segmentation on the second image to obtain at least one rectangular frame, each corresponding to one second object; determining, from the at least one rectangular frame, the target rectangular frame with the largest overlap rate with the target frame; and determining the second object corresponding to the target rectangular frame as the target object.
According to an embodiment of the present disclosure, instance segmentation may be used to predict a category label for each pixel of an input image. It can predict not only which category each pixel belongs to, but also distinguish different individuals of the same class of object, each individual representing one object and being represented by its own pixel set. In this embodiment, each vehicle and its corresponding license plate may be assigned to the same category during instance segmentation; each pixel in the parking scene image is classified, and pixels of the same vehicle belong to the same individual. The result of instance segmentation includes a rectangular box corresponding to each object together with the segmented individual objects.
FIG. 5 schematically illustrates a schematic diagram of determining a target object according to one embodiment of the present disclosure.
As shown in fig. 5, the second objects in the second image 420 include the vehicle 412 and portions of the vehicles 411 and 413. Instance segmentation of the second image 420 may yield rectangular frames 511, 512, and 513, corresponding to the vehicle 412 and the portions of the vehicles 411 and 413, respectively. According to the positional relationships between the target frame 415 and the rectangular frames 511, 512, and 513, the rectangular frame 512 may be determined as the target rectangular frame having the largest overlapping rate with the target frame 415. Thus, the vehicle 412 corresponding to the target rectangular frame 512 can be determined as the currently detected target object.
According to embodiments of the present disclosure, the overlapping rate of the target frame 415 with the rectangular frames 511, 512, and 513 may be calculated as an IoU (Intersection over Union), a standard criterion for measuring detection accuracy on a given data set. It may also be determined by setting a predefined coordinate system for the first image 410 and comparing the coordinate information of the target frame 415 with the coordinate information of the respective rectangular frames 511, 512, and 513. This is not limited herein.
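The IoU-based selection described above can be sketched as follows. The function names and the corner-coordinate box format are illustrative assumptions, not part of the disclosure:

```python
def iou(box_a, box_b):
    """Intersection over Union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def pick_target_rect(target_frame, rects):
    """Return the rectangle with the largest overlapping rate with the target frame."""
    return max(rects, key=lambda r: iou(target_frame, r))

# A target frame against three candidate rectangles (toy numbers):
target = (0, 0, 10, 10)
rects = [(5, 5, 15, 15), (0, 0, 9, 9), (20, 20, 30, 30)]
print(pick_target_rect(target, rects))  # -> (0, 0, 9, 9)
```

A candidate fully contained in the target frame can score higher than a larger candidate that only partially overlaps it, which is the behavior wanted when selecting the detection closest to the target frame.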
Through this embodiment of the present disclosure, the target object is determined by combining the target frame obtained by target detection with the rectangular frames obtained by instance segmentation, which can effectively improve the accuracy of the determined target object with a relatively small amount of computation.
According to an embodiment of the present disclosure, the process of determining the target object may also be expressed as: performing instance segmentation on the second image to obtain at least one third pixel set, each third pixel set representing the pixels corresponding to one second object; determining a target third pixel set from the at least one third pixel set, wherein the degree of coincidence between the pixels in the target third pixel set and the pixels in the target frame is greater than that between the pixels in any other third pixel set and the pixels in the target frame; and determining the second object characterized by the target third pixel set as the target object.
Fig. 6 schematically illustrates a schematic diagram of determining a target object according to another embodiment of the present disclosure.
As shown in fig. 6, the second objects in the second image 420 include the vehicle 412 and portions of the vehicles 411 and 413. Instance segmentation is performed on the second image 420 to obtain segmented individuals 611, 612, and 613, corresponding to the vehicle 412 and the portions of the vehicles 411 and 413, respectively; each segmented individual 611, 612, and 613 constitutes a third pixel set. According to the degree of coincidence between the pixels included in the target frame 415 and the pixels included in each of the third pixel sets 611, 612, and 613, the third pixel set 612 may be determined as the target third pixel set having the greatest degree of coincidence with the pixels of the target frame 415. The vehicle 412 corresponding to the third pixel set 612 can thus be determined as the currently detected target object.
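The pixel-level variant above can be sketched with plain Python sets. The object labels and toy coordinates below are illustrative assumptions:

```python
def pick_target_pixel_set(frame_pixels, instance_masks):
    """Choose the segmented instance whose pixel set shares the most
    pixels with the pixels inside the target frame."""
    return max(instance_masks,
               key=lambda name: len(instance_masks[name] & frame_pixels))

# Pixels covered by the target frame (toy 4x4 region):
frame = {(x, y) for x in range(4) for y in range(4)}
masks = {
    "vehicle_412": {(x, y) for x in range(3) for y in range(3)},           # 9 shared pixels
    "vehicle_411_part": {(x, y) for x in range(8, 12) for y in range(3)},  # 0 shared pixels
}
print(pick_target_pixel_set(frame, masks))  # -> vehicle_412
```

Counting shared pixels rather than comparing bounding boxes is less sensitive to irregular object shapes, at the cost of touching every pixel of each mask.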
Through the above embodiments of the present disclosure, the target object is determined by combining the pixels included in the target frame obtained by target detection and the third pixel set obtained by instance segmentation, and the accuracy of determining the target object can be effectively improved.
According to an embodiment of the present disclosure, determining the target second pixel set according to the degree of coincidence between the first pixel set and each second pixel set comprises: determining, based on a predefined coordinate system, the first coordinates of the pixels in the first pixel set; determining, for each second pixel set, the second coordinates of the pixels in that set based on the same predefined coordinate system, thereby obtaining at least one group of second coordinates; determining, from the at least one group of second coordinates, the target coordinates with the highest degree of coincidence with the first coordinates; and determining the second pixel set characterized by the target coordinates as the target second pixel set.
According to an embodiment of the present disclosure, the first pixel set and the second pixel sets may be determined from the same or different pixels of the same target image. Since the chrominance and luminance values of a pixel at a fixed position in a given target image are fixed, whether two pixels taken from the same target image coincide can be judged by whether their pixel coordinates coincide. The degree of coincidence between the first pixel set and a second pixel set can therefore be determined by the number of pixels of the second set that also appear in the first set.
Fig. 7 schematically illustrates an example schematic diagram of determining identification information of a target object according to an embodiment of the present disclosure.
As shown in FIG. 7, a predefined coordinate system is set for the target image 420. Based on this coordinate system, the first coordinates of the pixels in the first pixel set characterizing the target vehicle 412 are obtained as (a1, b1), (a2, b2), (a3, b3), (a4, b4), (a5, b5), (a6, b6), (a7, b7), (a8, b8); the second coordinates of the pixels in the second pixel set characterizing license plate XXXX are obtained as (a1, b1), (a2, b2), (a3, b3); and the second coordinates of the pixels in the second pixel set characterizing license plate BBBB are obtained as (c1, d1), (c2, d2), (c3, d3), (c4, d4). It can be determined that all three coordinates (a1, b1), (a2, b2), (a3, b3) appear among the first coordinates, whereas none of (c1, d1), (c2, d2), (c3, d3), (c4, d4) does. The second coordinates (a1, b1), (a2, b2), (a3, b3) therefore have the higher degree of coincidence with the first coordinates, so the target coordinates are (a1, b1), (a2, b2), (a3, b3), and the target second pixel set for determining the license plate information of the target vehicle is the set characterized by these coordinates. It can thus be determined that the license plate information of the target vehicle 412 is XXXX.
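The worked example above can be reproduced with a coordinate-membership count. The symbolic coordinates are replaced here by illustrative integer pairs, and all names are assumptions:

```python
def coincidence(first_coords, second_coords):
    """Number of second-set pixel coordinates that also appear in the first set."""
    return len(set(second_coords) & set(first_coords))

# First pixel set: the target vehicle's pixels in the predefined coordinate system.
vehicle_412 = [(1, 1), (2, 2), (3, 3), (4, 4), (5, 5), (6, 6), (7, 7), (8, 8)]
plate_xxxx = [(1, 1), (2, 2), (3, 3)]               # lies inside the vehicle region
plate_bbbb = [(11, 1), (12, 2), (13, 3), (14, 4)]   # lies elsewhere in the image

plates = {"XXXX": plate_xxxx, "BBBB": plate_bbbb}
best = max(plates, key=lambda p: coincidence(vehicle_412, plates[p]))
print(best)  # -> XXXX
```

Because only coordinate membership is compared, no pixel chrominance or luminance values need to be read at this stage, which keeps the data volume small.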
Through this embodiment of the present disclosure, by introducing a predefined coordinate system, the degree of coincidence between the first pixel set and each second pixel set can be determined from pixel coordinates alone, so that the identification information of the target object is determined and the accuracy of the object information determination result is improved while the amount of data to be processed is reduced.
Fig. 8 schematically shows a block diagram of an object information determination apparatus according to an embodiment of the present disclosure.
As shown in fig. 8, the object information determining apparatus 800 includes a first determining module 810, a second determining module 820, and a third determining module 830.
A first determining module 810, configured to determine, according to a target image including a target object, a first set of pixels in the target image for characterizing the target object. The target image further comprises at least one second set of pixels, each second set of pixels being used for characterizing one identification information.
A second determining module 820, configured to determine a target second pixel set according to a coincidence ratio between the first pixel set and each second pixel set.
A third determining module 830, configured to determine target identification information represented by the target second pixel set as identification information of the target object.
According to an embodiment of the present disclosure, the second determination module includes a first determination unit, a second determination unit, a third determination unit, and a fourth determination unit.
A first determining unit for determining first coordinates of pixels of the first set of pixels based on a predefined coordinate system.
A second determining unit, configured to determine, for each second pixel set, second coordinates of pixels in the second pixel set based on a predefined coordinate system, resulting in at least one second coordinate.
A third determining unit, configured to determine, from the at least one second coordinate, the target coordinate with the highest degree of coincidence with the first coordinate.
A fourth determining unit, configured to determine the second pixel set characterized by the target coordinates as the target second pixel set.
According to an embodiment of the present disclosure, the object information determination apparatus further includes a fourth determination module, an acquisition module, and a fifth determination module.
A fourth determining module, configured to determine a first region associated with the target object.
An acquisition module, configured to acquire a first image centered on the first region.
A fifth determining module, configured to determine the target image according to the first image.
According to an embodiment of the present disclosure, the fifth determining module includes an obtaining unit and a defining unit.
An obtaining unit, configured to crop the first image to obtain a second image, where the second image includes the first region and the ratio of the area of the first region to the area of the second image is greater than a preset threshold.
A defining unit, configured to take the second image as the target image.
According to an embodiment of the present disclosure, at least one first object is included in the first image. The obtaining unit comprises a first obtaining subunit, an acquisition subunit, and a cropping subunit.
A first obtaining subunit, configured to perform target detection on the first image to obtain a target detection frame for each of the at least one first object.
An acquisition subunit, configured to acquire, from the at least one target detection frame, a target frame closest to the center position of the first image.
A cropping subunit, configured to crop the first image according to the second region determined by the target frame to obtain the second image.
According to an embodiment of the present disclosure, at least one second object is included in the second image. The object information determination apparatus further includes a second obtaining subunit, a first determining subunit, and a second determining subunit.
A second obtaining subunit, configured to perform instance segmentation on the second image to obtain at least one rectangular frame, each rectangular frame corresponding to one second object.
A first determining subunit, configured to determine, from the at least one rectangular frame, a target rectangular frame with the largest overlapping rate with the target frame.
A second determining subunit, configured to determine the second object corresponding to the target rectangular frame as the target object.
According to an embodiment of the present disclosure, the object information determining apparatus further includes a third obtaining subunit, a third determining subunit, and a fourth determining subunit.
A third obtaining subunit, configured to perform instance segmentation on the second image to obtain at least one third pixel set, each third pixel set representing the pixels corresponding to one second object.
A third determining subunit, configured to determine a target third pixel set from the at least one third pixel set, wherein the degree of coincidence between the pixels in the target third pixel set and the pixels in the target frame is greater than that of any other third pixel set.
A fourth determining subunit, configured to determine the second object characterized by the target third pixel set as the target object.
According to an embodiment of the present disclosure, the target object includes a vehicle, and the identification information includes license plate information.
The present disclosure also provides an electronic device, a readable storage medium, and a computer program product according to embodiments of the present disclosure.
According to an embodiment of the present disclosure, an electronic device includes: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method as described above.
According to an embodiment of the present disclosure, a non-transitory computer-readable storage medium stores computer instructions for causing a computer to perform the method described above.
According to an embodiment of the disclosure, a computer program product comprising a computer program which, when executed by a processor, implements the method as described above.
FIG. 9 illustrates a schematic block diagram of an example electronic device 900 that can be used to implement embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital processing, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be examples only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 9, the device 900 includes a computing unit 901, which can perform various appropriate actions and processes according to a computer program stored in a read-only memory (ROM) 902 or loaded from a storage unit 908 into a random access memory (RAM) 903. The RAM 903 may also store various programs and data required for the operation of the device 900. The computing unit 901, the ROM 902, and the RAM 903 are connected to one another via a bus 904. An input/output (I/O) interface 905 is also connected to the bus 904.
A number of components in the device 900 are connected to the I/O interface 905, including: an input unit 906 such as a keyboard, a mouse, and the like; an output unit 907 such as various types of displays, speakers, and the like; a storage unit 908 such as a magnetic disk, optical disk, or the like; and a communication unit 909 such as a network card, a modem, a wireless communication transceiver, and the like. The communication unit 909 allows the device 900 to exchange information/data with other devices through a computer network such as the internet and/or various telecommunication networks.
The computing unit 901 may be any of various general-purpose and/or special-purpose processing components having processing and computing capabilities. Some examples of the computing unit 901 include, but are not limited to, a central processing unit (CPU), a graphics processing unit (GPU), various dedicated artificial intelligence (AI) computing chips, various computing units running machine learning model algorithms, a digital signal processor (DSP), and any suitable processor, controller, or microcontroller. The computing unit 901 performs the methods and processes described above, such as the object information determination method. For example, in some embodiments, the object information determination method may be implemented as a computer software program tangibly embodied in a machine-readable medium, such as the storage unit 908. In some embodiments, part or all of the computer program may be loaded and/or installed onto the device 900 via the ROM 902 and/or the communication unit 909. When the computer program is loaded into the RAM 903 and executed by the computing unit 901, one or more steps of the object information determination method described above may be performed. Alternatively, in other embodiments, the computing unit 901 may be configured to perform the object information determination method by any other suitable means (e.g., by means of firmware).
According to an embodiment of the present disclosure, the present disclosure also provides a roadside apparatus, which may include the electronic apparatus provided by the embodiment of the present disclosure.
The roadside device may include a communication unit and the like in addition to the electronic device, and the electronic device may be integrated with the communication unit or may be provided separately. The electronic device may acquire data, such as pictures and videos, from a sensing device (e.g., a roadside camera) for image video processing and data computation. Optionally, the electronic device itself may also have a sensing data acquisition function and a communication function, for example, an AI camera, and the electronic device may directly perform image video processing and data calculation based on the acquired sensing data.
Various implementations of the systems and techniques described above may be realized in digital electronic circuitry, integrated circuitry, field programmable gate arrays (FPGAs), application specific integrated circuits (ASICs), application specific standard products (ASSPs), systems on a chip (SOCs), complex programmable logic devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special-purpose or general-purpose, and which receives data and instructions from, and transmits data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for implementing the methods of the present disclosure may be written in any combination of one or more programming languages. These program codes may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the program codes, when executed by the processor or controller, cause the functions/operations specified in the flowchart and/or block diagram to be performed. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), Wide Area Networks (WANs), and the Internet.
The computer system may include clients and servers. A client and a server are generally remote from each other and typically interact through a communication network. The client-server relationship arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, a server of a distributed system, or a server combined with a blockchain.
It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present disclosure may be executed in parallel, sequentially, or in different orders, as long as the desired results of the technical solutions disclosed in the present disclosure can be achieved, and the present disclosure is not limited herein.
The above detailed description should not be construed as limiting the scope of the disclosure. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present disclosure should be included in the scope of protection of the present disclosure.

Claims (20)

1. An object information determination method, comprising:
determining a first pixel set used for representing a target object in a target image according to the target image comprising the target object, wherein the target image further comprises at least one second pixel set, and each second pixel set is used for representing identification information;
determining a target second pixel set according to the coincidence ratio between the first pixel set and each second pixel set; and
determining target identification information represented by the target second pixel set as the identification information of the target object.
2. The method of claim 1, wherein the determining a target second set of pixels according to a degree of coincidence between the first set of pixels and each of the second sets of pixels comprises:
determining first coordinates of pixels in the first set of pixels based on a predefined coordinate system;
determining, for each of the second sets of pixels, second coordinates of pixels in the second set of pixels based on the predefined coordinate system, resulting in at least one second coordinate;
determining a target coordinate with the highest coincidence degree with the first coordinate from the at least one second coordinate; and
determining a second set of pixels characterized by the target coordinates as the target second set of pixels.
3. The method of claim 1 or 2, further comprising:
determining a first region associated with the target object;
acquiring a first image with the first area as a center; and
determining the target image from the first image.
4. The method of claim 3, wherein the determining the target image from the first image comprises:
cutting the first image to obtain a second image, wherein the second image comprises the first region, and the ratio of the area of the first region to the area of the second image is greater than a preset threshold value; and
taking the second image as the target image.
5. The method of claim 4, wherein the first image includes at least one first object therein;
the cutting the first image to obtain a second image comprises:
carrying out target detection on the first image to obtain a target detection frame aiming at each of the at least one first object;
acquiring a target frame closest to the center position of the first image from the at least one target detection frame; and
cutting the first image according to the second area determined by the target frame to obtain the second image.
6. The method of claim 5, wherein the second image includes at least one second object therein;
the method further comprises the following steps:
performing instance segmentation on the second image to obtain at least one rectangular frame, wherein each rectangular frame corresponds to one second object;
determining a target rectangular frame with the maximum overlapping rate with the target frame from the at least one rectangular frame; and
determining a second object corresponding to the target rectangular frame as the target object.
7. The method of claim 5, wherein the second image includes at least one second object therein;
the method further comprises the following steps:
performing instance segmentation on the second image to obtain at least one third pixel set, wherein each third pixel set is used for representing a pixel corresponding to one second object;
determining a target third pixel set from the at least one third pixel set, wherein a coincidence degree between the pixels included in the target third pixel set and the pixels included in the target frame is greater than a coincidence degree between the pixels included in the other pixel sets except the target third pixel set in the at least one third pixel set and the pixels included in the target frame; and
determining a second object characterized by the target third set of pixels as the target object.
8. The method of any of claims 1-7, wherein the target object comprises a vehicle and the identification information comprises license plate information.
9. An object information determination apparatus comprising:
a first determining module, configured to determine, according to a target image including a target object, a first pixel set in the target image for characterizing the target object, where the target image further includes at least one second pixel set, and each second pixel set is used for characterizing identification information;
a second determining module, configured to determine a target second pixel set according to a coincidence ratio between the first pixel set and each of the second pixel sets; and
a third determining module, configured to determine target identification information represented by the target second pixel set as identification information of the target object.
10. The apparatus of claim 9, wherein the second determining means comprises:
a first determining unit for determining first coordinates of pixels of the first set of pixels based on a predefined coordinate system;
a second determining unit, configured to determine, for each second pixel set, second coordinates of pixels in the second pixel set based on the predefined coordinate system, resulting in at least one second coordinate;
a third determining unit, configured to determine, from the at least one second coordinate, a target coordinate with the highest degree of coincidence with the first coordinate; and
a fourth determining unit, configured to determine the second pixel set characterized by the target coordinates as the target second pixel set.
11. The apparatus of claim 9 or 10, further comprising:
a fourth determination module for determining a first region associated with the target object;
an acquisition module, configured to acquire a first image centered on the first region; and
a fifth determining module, configured to determine the target image according to the first image.
12. The apparatus of claim 11, wherein the fifth determining means comprises:
an obtaining unit, configured to crop the first image to obtain a second image, where the second image includes the first region, and a ratio of an area of the first region to an area of the second image is greater than a preset threshold; and
a defining unit configured to take the second image as the target image.
13. The apparatus of claim 12, wherein the first image includes at least one first object therein; the obtaining unit includes:
a first obtaining subunit, configured to perform target detection on the first image to obtain a target detection frame for each of the at least one first object;
an obtaining subunit, configured to obtain, from the at least one target detection frame, a target frame closest to a center position of the first image; and
a cutting subunit, configured to crop the first image according to the second area determined by the target frame to obtain the second image.
14. The apparatus of claim 13, wherein the second image includes at least one second object therein;
the device further comprises:
a second obtaining subunit, configured to perform instance segmentation on the second image to obtain at least one rectangular frame, where each rectangular frame corresponds to one second object;
a first determining subunit, configured to determine, from the at least one rectangular frame, a target rectangular frame with a largest overlapping rate with the target frame; and
a second determining subunit, configured to determine a second object corresponding to the target rectangular frame as the target object.
15. The apparatus of claim 13, wherein the second image includes at least one second object therein;
the device further comprises:
a third obtaining subunit, configured to perform instance segmentation on the second image to obtain at least one third pixel set, where each third pixel set is used to represent a pixel corresponding to one second object;
a third determining subunit, configured to determine a target third pixel set from the at least one third pixel set, where a coincidence degree between a pixel included in the target third pixel set and a pixel included in the target frame is greater than a coincidence degree between a pixel included in a pixel set other than the target third pixel set in the at least one third pixel set and a pixel included in the target frame; and
a fourth determining subunit, configured to determine a second object characterized by the target third pixel set as the target object.
16. The apparatus of any of claims 9 to 15, wherein the target object comprises a vehicle and the identification information comprises license plate information.
17. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-8.
18. A non-transitory computer-readable storage medium having stored thereon computer instructions for causing a computer to perform the method of any one of claims 1-8.
19. A computer program product comprising a computer program which, when executed by a processor, implements the method according to any one of claims 1-8.
20. A roadside apparatus comprising the electronic device according to claim 17.
CN202111065810.1A 2021-09-10 2021-09-10 Object information determination method and device, storage medium and road side equipment Pending CN113887574A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111065810.1A CN113887574A (en) 2021-09-10 2021-09-10 Object information determination method and device, storage medium and road side equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111065810.1A CN113887574A (en) 2021-09-10 2021-09-10 Object information determination method and device, storage medium and road side equipment

Publications (1)

Publication Number Publication Date
CN113887574A (en) 2022-01-04

Family

ID=79009103

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111065810.1A Pending CN113887574A (en) 2021-09-10 2021-09-10 Object information determination method and device, storage medium and road side equipment

Country Status (1)

Country Link
CN (1) CN113887574A (en)

Similar Documents

Publication Publication Date Title
WO2021135879A1 (en) Vehicle data monitoring method and apparatus, computer device, and storage medium
CN113205037B (en) Event detection method, event detection device, electronic equipment and readable storage medium
CN114943936B (en) Target behavior recognition method and device, electronic equipment and storage medium
CN113299073B (en) Method, device, equipment and storage medium for identifying illegal parking of vehicle
US11810320B2 (en) Method and apparatus for determining location of signal light, storage medium, program and roadside device
CN112966599B (en) Training method of key point recognition model, key point recognition method and device
CN111666821B (en) Method, device and equipment for detecting personnel aggregation
CN112863187B (en) Detection method of perception model, electronic equipment, road side equipment and cloud control platform
CN113392794B (en) Vehicle line crossing identification method and device, electronic equipment and storage medium
CN111079621B (en) Method, device, electronic equipment and storage medium for detecting object
CN110602446A (en) Garbage recovery reminding method and system and storage medium
CN116129328A (en) Method, device, equipment and storage medium for detecting carryover
CN113901911B (en) Image recognition method, image recognition device, model training method, model training device, electronic equipment and storage medium
CN108288025A (en) A kind of car video monitoring method, device and equipment
CN113807228A (en) Parking event prompting method and device, electronic equipment and storage medium
CN113470013A (en) Method and device for detecting moved article
CN113052048A (en) Traffic incident detection method and device, road side equipment and cloud control platform
CN113887574A (en) Object information determination method and device, storage medium and road side equipment
CN114639143A (en) Portrait filing method, equipment and storage medium based on artificial intelligence
CN113360688B (en) Method, device and system for constructing information base
CN112700657B (en) Method and device for generating detection information, road side equipment and cloud control platform
CN117615363B (en) Method, device and equipment for analyzing personnel in target vehicle based on signaling data
CN114092739B (en) Image processing method, apparatus, device, storage medium, and program product
CN113936258A (en) Image processing method, image processing device, electronic equipment and storage medium
CN113887331A (en) Image processing method, event detection method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination