CN113158728B - Parking space state detection method based on gray level co-occurrence matrix - Google Patents

Parking space state detection method based on gray level co-occurrence matrix Download PDF

Info

Publication number
CN113158728B
CN113158728B
Authority
CN
China
Prior art keywords
gray
state
vehicle
effective area
matrix
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011617045.5A
Other languages
Chinese (zh)
Other versions
CN113158728A (en)
Inventor
华璟
俞庭
彭浩宇
胡峥
吕佳俊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Tuge Technology Co ltd
Original Assignee
Hangzhou Tuge Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Tuge Technology Co ltd
Priority to CN202011617045.5A
Publication of CN113158728A
Application granted
Publication of CN113158728B
Legal status: Active
Anticipated expiration

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/52Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/40Scenes; Scene-specific elements in video content
    • G06V20/41Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/40Scenes; Scene-specific elements in video content
    • G06V20/46Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a parking space state detection method based on a gray level co-occurrence matrix, which comprises the steps of: obtaining video stream images; converting the video stream images into single-channel gray images; obtaining the gray matrix of each frame and extracting from it the gray matrix of the region where the parking space is located; normalizing the regional gray matrix and calculating its gray level co-occurrence matrix; and calculating the corresponding features from the gray level co-occurrence matrix and evaluating the state of the parking space. The method uses images to judge whether a clearly visible parking space is empty, can evaluate the parking space state quickly and accurately, and to a certain extent reduces errors caused by changes in illumination.

Description

Parking space state detection method based on gray level co-occurrence matrix
[ field of technology ]
The invention belongs to the field of digital image processing, and particularly relates to a parking space state detection method based on a gray level co-occurrence matrix.
[ background Art ]
With the improvement of living standards and the growing affordability of automobiles, cars have entered more and more families. Correspondingly, the demand for parking spaces keeps rising, and the contradiction between supply and demand of parking resources grows sharper. With the development of technologies such as automatic driving, parking space monitoring and parking guidance technologies have also gradually shown their necessity. How to monitor parking spaces in real time, accurately track their usage, and guide and allocate vehicles is an important topic.
In recent years, the management of private closed parking lots has relied on intelligent barrier gates. Because such lots are enclosed and have few entrances and exits, the gates can count and manage the vehicles inside the lot to a certain degree. However, most parking lots only compute the number of remaining spaces; few can locate idle spaces and guide incoming vehicles to them. The situation is worse for open parking areas, where disorderly parking and improper occupation of spaces frequently occur without being detected.
Using the internet of things to combine devices that monitor the real-time state of parking spaces with servers, terminal devices and the like into a real-time parking space monitoring network, so that people can query empty spaces or the state of nearby spaces in real time, allocate spaces, and guide and navigate vehicles, is of great practical significance.
Counting traffic flow at the entrance of a parking lot only reveals how many spaces remain; it cannot report which spaces are empty in real time. Installing sensors in each parking space places relatively high demands on the surrounding environment and is very susceptible to interference. It is therefore necessary to design a parking space state detection method that adapts to various environments and resists interference well.
[ invention ]
To address the defects of the prior art, the invention provides a parking space state detection method based on a gray level co-occurrence matrix, which monitors the real-time state of the parking spaces of a parking lot by erecting a monitoring camera at a suitable place near the parking spaces and analyzing the spaces in the captured images through image processing techniques.
In order to achieve the above purpose, the invention adopts the following technical scheme:
a parking space state detection method based on a gray level co-occurrence matrix comprises the following steps:
SO1: acquiring images of the video stream of a parking space through a monitoring camera arranged at the parking space; selecting several frames as sample images and taking the remaining images as images to be detected; and obtaining the effective area of the parking space in the field of view of each sample image, the effective area being the largest inscribed rectangle of the boundary quadrilateral of the parking space in the field of view; the effective area has an occupied state and an empty state, the occupied-state samples forming sample A and the empty-state samples forming sample B;
SO2: converting the sample image into a single-channel gray image by the average method; normalizing the gray levels of the gray image so that they are compressed within 16 levels; extracting the gray matrix of the effective area from the normalized gray image; calculating the gray level co-occurrence matrix from the gray matrix of the effective area; and calculating a first feature quantity F from the gray level co-occurrence matrix, the first feature quantity F comprising: the angular second moment F_ASM, entropy F_ENT, inverse difference moment F_IDM and contrast F_CON, thereby obtaining an occupied-state feature set M and an empty-state feature set N for each first feature quantity F;
SO3: calculating, from the feature sets M and N, the sample centers O_full and O_empty of the corresponding type of first feature quantity F, and calculating the effective ranges R_full and R_empty of each first feature quantity;
SO4: for each first feature quantity F, calculating from O_full and O_empty, according to the proportional relation between the effective ranges and the sample centers, the demarcation point flag between the occupied state and the empty state;
SO5: calculating a second feature quantity F' of the image to be detected according to step SO2, the second feature quantity F' comprising the angular second moment F'_ASM, entropy F'_ENT, inverse difference moment F'_IDM and contrast F'_CON; for each second feature quantity F', taking the O_full, O_empty and demarcation point flag of the corresponding type from steps SO3 and SO4 and performing a probability evaluation of the occupied state, so as to calculate for the image to be detected the angular second moment probability P_ASM, entropy probability P_ENT, inverse difference moment probability P_IDM and contrast probability P_CON, each of the four probabilities representing the probability, estimated from that second feature quantity F', that the parking space is occupied;
SO6: assigning weights to the occupied-state probabilities estimated from the second feature quantities and calculating their weighted sum as the total probability P that the effective area is occupied in the frame image; when P is larger than a critical value, the parking space is in the occupied state, the critical value being greater than or equal to 0.8; if the effective area is in the occupied state in the images to be detected for more than three consecutive seconds, the parking space is considered occupied.
Further, the step SO2 includes the following sub-steps:
(1) converting the three RGB channels of the sample image into a single-channel gray image by the average method, storing the converted single-channel gray image separately in a two-dimensional array whose size matches the resolution of the converted image, and normalizing each element of the two-dimensional array as follows:
B(i,j) = int((double)(A(i,j) - minSubLevel) / (double)(maxSubLevel - minSubLevel) * 16);
wherein A(i,j) is an element of the two-dimensional array, B(i,j) is the element to be solved, minSubLevel is the minimum element value, and maxSubLevel is the maximum element value;
(2) finding the pixel-coordinate starting point of the effective area in the converted single-channel gray image, together with the width and height of the effective area, and storing the elements in the corresponding range of the two-dimensional array of sub-step (1) into a new array, the new array being the gray matrix of the effective area;
(3) finding the maximum value maxGrayLevel and the minimum value minGrayLevel of the gray scale from the gray scale matrix of the effective area;
(4) traversing the gray matrix of the effective area, normalizing each gray value src, and saving it back into the gray matrix of the effective area, the normalization being:
B'(i,j) = int((double)(A'(i,j) - minGrayLevel) / (double)(maxGrayLevel - minGrayLevel) * 16),
wherein A'(i,j) is a gray value in the gray matrix of the effective area and B'(i,j) is the gray value to be solved;
(5) creating a new 16×16 matrix C with every element initialized to 0; traversing the gray matrix of the effective area and selecting the pixel pairs B(i,j) and B(i,j+1), where the value of pixel B(i,j) is m and the value of pixel B(i,j+1) is n, and incrementing C(m,n) by 1; after the gray matrix of the effective area has been traversed, the resulting matrix C is the gray level co-occurrence matrix;
(6) calculating the angular second moment F_ASM, entropy F_ENT, inverse difference moment F_IDM and contrast F_CON respectively:
F_ASM = sum(C(i,j)^2);  F_ENT = sum(C(i,j) * (-log C(i,j)));
F_IDM = sum(C(i,j) / (1 + (i-j)^2));  F_CON = sum((i-j)^2 * C(i,j)).
Further, the step SO3 comprises the following sub-steps:
(1) obtaining from the feature set M the feature quantity F_n of the effective area of each frame image of sample A, where n = 1, 2, 3, …, N and N is the total number of images in sample A;
(2) removing the maximum value and the minimum value from sample A, and calculating the average of the remaining effective-area feature values as the sample center O_full:
O_full = (1 / N') * sum(F_n), n = 1, 2, …, N',
wherein N' is the number of samples after removing the maximum and minimum values;
(3) calculating the distance between each feature quantity in sample A and O_full, and finding the maximum value, which is the effective range R_full of that feature quantity in the occupied state;
(4) calculating the sample center O_empty and the effective range R_empty of sample B according to sub-steps (1)-(3).
Further, in the step SO4, the demarcation point flag is calculated according to the following formula:
[equation image: the demarcation point flag expressed in terms of O_full, O_empty, R_full and R_empty]
Further, the probability of the occupied state for the second feature quantity F' is calculated as follows:
[equation images: the probabilities P_ASM, P_ENT, P_IDM and P_CON, each expressed in terms of the corresponding second feature quantity F', the empty-state center O_empty and the demarcation point flag]
wherein O_empty is the sample center of the empty state and flag is the state demarcation point.
Further, the step SO6 comprises the following sub-steps:
(1) calculating a reliability evaluation Q for each feature quantity, the reliability evaluation Q being the degree to which that feature quantity distinguishes the occupied state from the empty state of the effective area; the better the two states are distinguished, the larger the value of Q; the calculation formula of Q is:
[equation image: the reliability evaluation Q expressed in terms of O_full, O_empty, R_full and R_empty]
Substituting the O_full, O_empty, R_full and R_empty of each of the four feature quantities into the above formula yields the reliability evaluations Q_ASM, Q_ENT, Q_IDM and Q_CON;
(2) calculating the probability weight W of each feature quantity from the reliability evaluation Q:
[equation image: the probability weight W of each feature quantity expressed in terms of the reliability evaluations Q]
(3) calculating a weighted sum P of the probabilities:
P = P_ASM * W_ASM + P_ENT * W_ENT + P_IDM * W_IDM + P_CON * W_CON.
With the above technical scheme, the invention has the following advantages:
the equipment only needs to rely on the camera, and the same camera can be used for monitoring a plurality of parking spaces, and the effect that 4-8 parking spaces only need to rely on one camera can be achieved according to the field conditions.
Users can erect the image-collecting cameras according to their own needs and adapt the installation to different sites. The cameras do not occupy the parking space area and are unlikely to be damaged, so compared with traditional sensor-based monitoring the cost is effectively reduced.
The cameras used to collect images can also carry other functions, such as security monitoring or license plate recognition, so one installation serves multiple purposes.
These features and advantages of the present invention will be disclosed in more detail in the following detailed description and the accompanying drawings. The best modes or means of the present invention will be described in detail with reference to the accompanying drawings, but they do not limit the technical scheme of the present invention. In addition, although these features, elements and components may appear in plural in the following description and drawings and may be labeled with different symbols or numerals for convenience, all such labels denote components of the same or similar construction or function.
[ description of the drawings ]
The accompanying drawings are included to provide a further understanding of the application; together with the description of the exemplary embodiments, they illustrate the application and do not limit it.
FIG. 1 is a flow chart of a parking space state detection method based on a gray level co-occurrence matrix;
FIG. 2 is an overall view of a parking spot in the camera field of view;
fig. 3 is a detail view of a single parking spot.
[ detailed description ] of the invention
The technical solutions of the embodiments of the present invention are explained and illustrated below with reference to the drawings; however, the following embodiments are only preferred embodiments of the present invention, not all possible embodiments. Other examples obtained by a person skilled in the art from these embodiments without creative effort fall within the protection scope of the present invention.
Furthermore, unless otherwise indicated, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. Unless the context clearly indicates otherwise, singular forms are intended to include plural forms as well, and the terms "comprises" and/or "comprising" used in this specification specify the presence of the stated features, steps, operations, devices and components, and/or combinations thereof.
Embodiment one:
As shown in fig. 1 and fig. 2, this embodiment provides a parking space state detection method based on a gray level co-occurrence matrix, comprising the following steps:
SO1: obtaining images of the video stream of the parking spaces through a camera erected near the parking spaces to collect video material of them; selecting several frames as sample images and taking the remaining images as images to be detected; and obtaining the effective area of each sample image, the effective area being the largest inscribed rectangle of the boundary quadrilateral of the parking space in the field of view, as shown in fig. 3; the effective area has an occupied state and an empty state, the occupied-state samples forming sample A and the empty-state samples forming sample B;
SO2: converting the sample image into a single-channel gray image by the average method; normalizing the gray levels of the gray image so that they are compressed within 16 levels; extracting the gray matrix of the effective area from the normalized gray image; calculating the gray level co-occurrence matrix from the gray matrix of the effective area; and calculating a first feature quantity F from the gray level co-occurrence matrix, the first feature quantity F comprising: the angular second moment F_ASM, entropy F_ENT, inverse difference moment F_IDM and contrast F_CON, thereby obtaining an occupied-state feature set M and an empty-state feature set N for each first feature quantity F.
This step comprises the following sub-steps (a code sketch follows sub-step (6)):
(1) converting the three RGB channels of the sample image into a single-channel gray image by the average method, storing the converted single-channel gray image separately in a two-dimensional array whose size matches the resolution of the converted image, and normalizing each element of the two-dimensional array as follows:
B(i,j) = int((double)(A(i,j) - minSubLevel) / (double)(maxSubLevel - minSubLevel) * 16);
wherein A(i,j) is an element of the two-dimensional array, B(i,j) is the element to be solved, minSubLevel is the minimum element value, and maxSubLevel is the maximum element value;
(2) finding the pixel-coordinate starting point of the effective area in the converted single-channel gray image, together with the width and height of the effective area, and storing the elements in the corresponding range of the two-dimensional array of sub-step (1) into a new array, the new array being the gray matrix of the effective area;
(3) finding the maximum value maxGrayLevel and the minimum value minGrayLevel of the gray scale from the gray scale matrix of the effective area;
(4) traversing the gray matrix of the effective area, normalizing each gray value src, and saving it back into the gray matrix of the effective area, the normalization being:
B'(i,j) = int((double)(A'(i,j) - minGrayLevel) / (double)(maxGrayLevel - minGrayLevel) * 16),
wherein A'(i,j) is a gray value in the gray matrix of the effective area and B'(i,j) is the gray value to be solved;
(5) creating a new 16×16 matrix C with every element initialized to 0; traversing the gray matrix of the effective area and selecting the pixel pairs B(i,j) and B(i,j+1), where the value of pixel B(i,j) is m and the value of pixel B(i,j+1) is n, and incrementing C(m,n) by 1; after the gray matrix of the effective area has been traversed, the resulting matrix C is the gray level co-occurrence matrix;
(6) calculating the angular second moment F_ASM, entropy F_ENT, inverse difference moment F_IDM and contrast F_CON respectively:
F_ASM = sum(C(i,j)^2);  F_ENT = sum(C(i,j) * (-log C(i,j)));
F_IDM = sum(C(i,j) / (1 + (i-j)^2));  F_CON = sum((i-j)^2 * C(i,j));
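As a concrete illustration, the following is a minimal C++ sketch of sub-steps (1)-(6), assuming the frame arrives as three two-dimensional channel arrays and the effective area is supplied as a pixel rectangle (x, y, w, h); all identifiers are illustrative and not from the patent. Two details are assumptions: pair counts are normalized to probabilities before the feature formulas are applied (the text leaves the scale of C implicit), and a normalized gray value of 16 is clamped into bin 15 so the 16×16 matrix is not overrun.

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

using Matrix = std::vector<std::vector<int>>;

// Sub-steps (1) and (4): map every element onto 0..16, mirroring
// B(i,j) = int((A(i,j) - min) / (max - min) * 16).
static void normalizeTo16(Matrix& m) {
    int lo = m[0][0], hi = m[0][0];
    for (const auto& row : m)
        for (int v : row) { lo = std::min(lo, v); hi = std::max(hi, v); }
    for (auto& row : m)
        for (int& v : row)
            v = (hi == lo) ? 0
                : static_cast<int>(static_cast<double>(v - lo) / (hi - lo) * 16);
}

// Sub-step (1): average-method grayscale conversion, then normalization.
Matrix toGray(const Matrix& r, const Matrix& g, const Matrix& b) {
    Matrix gray(r.size(), std::vector<int>(r[0].size()));
    for (size_t i = 0; i < r.size(); ++i)
        for (size_t j = 0; j < r[0].size(); ++j)
            gray[i][j] = (r[i][j] + g[i][j] + b[i][j]) / 3;
    normalizeTo16(gray);
    return gray;
}

// Sub-steps (2)-(4): crop the effective area and re-normalize the crop
// with its own minGrayLevel / maxGrayLevel.
Matrix effectiveArea(const Matrix& gray, int x, int y, int w, int h) {
    Matrix area(h, std::vector<int>(w));
    for (int i = 0; i < h; ++i)
        for (int j = 0; j < w; ++j)
            area[i][j] = gray[y + i][x + j];
    normalizeTo16(area);
    return area;
}

struct Features { double asm_, ent, idm, con; };

// Sub-steps (5)-(6): horizontal-neighbor GLCM and the four texture features.
Features glcmFeatures(const Matrix& area) {
    double C[16][16] = {{0.0}};
    double pairs = 0.0;
    for (const auto& row : area)
        for (size_t j = 0; j + 1 < row.size(); ++j) {
            int m = std::min(row[j], 15);        // clamp level 16 into bin 15
            int n = std::min(row[j + 1], 15);
            C[m][n] += 1.0;
            pairs += 1.0;
        }
    Features f{0.0, 0.0, 0.0, 0.0};
    for (int i = 0; i < 16; ++i)
        for (int j = 0; j < 16; ++j) {
            double p = C[i][j] / pairs;                      // assumed: probabilities
            f.asm_ += p * p;                                 // angular second moment
            if (p > 0.0) f.ent -= p * std::log(p);           // entropy
            f.idm += p / (1.0 + (i - j) * (i - j));          // inverse difference moment
            f.con += (i - j) * (i - j) * p;                  // contrast
        }
    return f;
}
```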
SO3: calculating, from the feature sets M and N, the sample centers O_full and O_empty of the corresponding type of first feature quantity F, and calculating the effective ranges R_full and R_empty of each first feature quantity;
Step SO3 comprises the following sub-steps:
(1) obtaining from the feature set M the feature quantity F_n of the effective area of each frame image of sample A, where n = 1, 2, 3, …, N and N is the total number of images in sample A;
(2) removing the maximum value and the minimum value from sample A, and calculating the average of the remaining effective-area feature values as the sample center O_full:
O_full = (1 / N') * sum(F_n), n = 1, 2, …, N',
wherein N' is the number of samples after removing the maximum and minimum values;
(3) calculating the distance between each feature quantity in sample A and O_full, and finding the maximum value, which is the effective range R_full of that feature quantity in the occupied state;
(4) calculating the sample center O_empty and the effective range R_empty of sample B according to sub-steps (1)-(3).
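Per feature quantity, step SO3 is a trimmed mean followed by a maximum deviation. A sketch, under the assumption that the trimmed values are also the ones used when computing the range (the text is ambiguous on whether the extremes are excluded there):

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

struct CenterRange { double center, range; };   // O_full/O_empty and R_full/R_empty

// f holds one feature quantity over all frames of a sample (size >= 3).
CenterRange centerAndRange(std::vector<double> f) {
    std::sort(f.begin(), f.end());
    f.erase(f.begin());                          // drop one minimum
    f.pop_back();                                // drop one maximum
    double sum = 0.0;
    for (double v : f) sum += v;
    double center = sum / f.size();              // O = (1 / N') * sum(F_n)
    double range = 0.0;
    for (double v : f)
        range = std::max(range, std::fabs(v - center));  // max distance from center
    return {center, range};
}
```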
SO4: for each pair O_full and O_empty, according to the proportional relation between the effective ranges and the sample centers, finding the demarcation point flag between the occupied state and the empty state, the demarcation point flag being calculated by the following formula:
[equation image: the demarcation point flag expressed in terms of O_full, O_empty, R_full and R_empty]
SO5: calculating a second feature quantity F' of the image to be detected according to step SO2, the second feature quantity F' comprising the angular second moment F'_ASM, entropy F'_ENT, inverse difference moment F'_IDM and contrast F'_CON; for each second feature quantity F', taking the O_full, O_empty and demarcation point flag of the corresponding type from steps SO3 and SO4 and performing a probability evaluation of the occupied state, so as to calculate for the image to be detected the angular second moment probability P_ASM, entropy probability P_ENT, inverse difference moment probability P_IDM and contrast probability P_CON, each of the four probabilities representing the probability, estimated from that second feature quantity F', that the parking space is occupied:
[equation images: the probabilities P_ASM, P_ENT, P_IDM and P_CON, each expressed in terms of the corresponding second feature quantity F', the empty-state center O_empty and the demarcation point flag]
wherein O_empty is the sample center of the empty state and flag is the state demarcation point.
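The exact expressions behind the four probabilities survive only as equation images, so the mapping below is an assumption: a piecewise-linear score that is 0 at the empty-state center O_empty, 0.5 at the demarcation point flag, and saturates at 1 beyond it. Because progress is measured in the direction from O_empty toward the flag, the same code covers feature quantities that rise or fall when a vehicle is present.

```cpp
// Assumed stand-in for the published probability formulas; only the inputs
// (F', O_empty, flag) are given by the patent text.
double occupiedProbability(double f, double oEmpty, double flag) {
    double t = (f - oEmpty) / (flag - oEmpty);   // 0 at O_empty, 1 at the flag
    double p = 0.5 * t;                          // 0.5 exactly at the demarcation point
    return p < 0.0 ? 0.0 : (p > 1.0 ? 1.0 : p);  // clamp to [0, 1]
}
```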
SO6: assigning weights to the occupied-state probabilities estimated from the second feature quantities and calculating their weighted sum as the total probability P that the effective area is occupied in the frame image; when P is larger than a critical value, the parking space is in the occupied state, the critical value being greater than or equal to 0.8; if the effective area is in the occupied state in the images to be detected for three consecutive seconds, the parking space is considered occupied;
step SO6 comprises the following sub-steps:
(1) calculating a reliability evaluation Q for each feature quantity, the reliability evaluation Q being the degree to which that feature quantity distinguishes the occupied state from the empty state of the effective area; the better the two states are distinguished, the larger the value of Q; the calculation formula of Q is:
[equation image: the reliability evaluation Q expressed in terms of O_full, O_empty, R_full and R_empty]
Substituting the O_full, O_empty, R_full and R_empty of each of the four feature quantities into the above formula yields the reliability evaluations Q_ASM, Q_ENT, Q_IDM and Q_CON;
(2) calculating the probability weight W of each feature quantity from the reliability evaluation Q:
[equation image: the probability weight W of each feature quantity expressed in terms of the reliability evaluations Q]
(3) calculating a weighted sum P of the probabilities:
P = P_ASM * W_ASM + P_ENT * W_ENT + P_IDM * W_IDM + P_CON * W_CON.
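A sketch of the fusion in step SO6. Since the Q and W formulas are published only as images, two assumed forms are used here: Q = |O_full - O_empty| / (R_full + R_empty), consistent with the stated intent that better separation of the two states yields a larger Q, and weights normalized as W_k = Q_k / sum(Q) so that the weighted sum remains a probability. The 0.8 threshold is from the text.

```cpp
#include <array>
#include <cmath>

// Assumed form of the reliability evaluation Q (see the caveat above).
double reliability(double oFull, double oEmpty, double rFull, double rEmpty) {
    return std::fabs(oFull - oEmpty) / (rFull + rEmpty);
}

// p and q hold the four per-feature probabilities and reliabilities,
// ordered {ASM, ENT, IDM, CON}; returns the total probability P.
double fuse(const std::array<double, 4>& p, const std::array<double, 4>& q) {
    double qSum = q[0] + q[1] + q[2] + q[3];
    double total = 0.0;
    for (int k = 0; k < 4; ++k)
        total += p[k] * (q[k] / qSum);           // W_k = Q_k / sum(Q)
    return total;
}

// Per-frame decision: occupied when P exceeds the critical value (>= 0.8).
bool frameOccupied(double totalP) { return totalP > 0.8; }
```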
the above is only a specific embodiment of the present invention, but the scope of the present invention is not limited thereto, and it should be understood by those skilled in the art that the present invention includes but is not limited to the accompanying drawings and the description of the above specific embodiment. Any modifications which do not depart from the functional and structural principles of the present invention are intended to be included within the scope of the appended claims.

Claims (5)

1. A parking space state detection method based on a gray level co-occurrence matrix, characterized by comprising the following steps:
SO1: acquiring images of the video stream of a parking space through a monitoring camera arranged at the parking space; selecting several frames as sample images and taking the remaining images as images to be detected; and obtaining the effective area of the parking space in the field of view of each sample image, the effective area being the largest inscribed rectangle of the boundary quadrilateral of the parking space in the field of view; the effective area has an occupied state and an empty state, the occupied-state samples forming sample A and the empty-state samples forming sample B;
SO2: converting the sample image into a single-channel gray image by the average method; normalizing the gray levels of the gray image so that they are compressed within 16 levels; extracting the gray matrix of the effective area from the normalized gray image; calculating the gray level co-occurrence matrix from the gray matrix of the effective area; and calculating a first feature quantity F from the gray level co-occurrence matrix, the first feature quantity F comprising: the angular second moment F_ASM, entropy F_ENT, inverse difference moment F_IDM and contrast F_CON, thereby obtaining an occupied-state feature set M and an empty-state feature set N for each first feature quantity F;
SO3: calculating, from the feature sets M and N, the sample centers O_full and O_empty of the corresponding type of first feature quantity F, and calculating the effective ranges R_full and R_empty of each first feature quantity;
SO4: for each first feature quantity F, calculating from O_full and O_empty, according to the proportional relation between the effective ranges and the sample centers, the demarcation point flag between the occupied state and the empty state, the demarcation point flag being calculated by the following formula:
[equation image: the demarcation point flag expressed in terms of O_full, O_empty, R_full and R_empty]
SO5: calculating a second feature quantity F' of the image to be detected according to step SO2, the second feature quantity F' comprising the angular second moment F'_ASM, entropy F'_ENT, inverse difference moment F'_IDM and contrast F'_CON; for each second feature quantity F', taking the O_full, O_empty and demarcation point flag of the corresponding type from steps SO3 and SO4 and performing a probability evaluation of the occupied state, so as to calculate for the image to be detected the angular second moment probability P_ASM, entropy probability P_ENT, inverse difference moment probability P_IDM and contrast probability P_CON, each of the four probabilities representing the probability, estimated from that second feature quantity F', that the parking space is occupied;
SO6: assigning weights to the occupied-state probabilities estimated from the second feature quantities and calculating their weighted sum as the total probability P that the effective area is occupied in the frame image; when P is larger than a critical value, the parking space is in the occupied state, the critical value being greater than or equal to 0.8; if the effective area is in the occupied state in the images to be detected for more than three consecutive seconds, the parking space is considered occupied.
2. The method according to claim 1, wherein the step SO2 comprises the following sub-steps:
(1) converting the three RGB channels of the sample image into a single-channel gray image by the average method, storing the converted single-channel gray image separately in a two-dimensional array whose size matches the resolution of the converted image, and normalizing each element of the two-dimensional array as follows:
B(i,j) = int((double)(A(i,j) - minSubLevel) / (double)(maxSubLevel - minSubLevel) * 16);
wherein A(i,j) is an element of the two-dimensional array, B(i,j) is the element to be solved, minSubLevel is the minimum element value, and maxSubLevel is the maximum element value;
(2) finding the pixel-coordinate starting point of the effective area in the converted single-channel gray image, together with the width and height of the effective area, and storing the elements in the corresponding range of the two-dimensional array of sub-step (1) into a new array, the new array being the gray matrix of the effective area;
(3) finding the maximum value maxGrayLevel and the minimum value minGrayLevel of the gray scale from the gray scale matrix of the effective area;
(4) traversing the gray matrix of the effective area, normalizing each gray value src, and saving it back into the gray matrix of the effective area, the normalization being:
B'(i,j) = int((double)(A'(i,j) - minGrayLevel) / (double)(maxGrayLevel - minGrayLevel) * 16),
wherein A'(i,j) is a gray value in the gray matrix of the effective area and B'(i,j) is the gray value to be solved;
(5) creating a new 16×16 matrix C with every element initialized to 0; traversing the gray matrix of the effective area and selecting the pixel pairs B(i,j) and B(i,j+1), where the value of pixel B(i,j) is m and the value of pixel B(i,j+1) is n, and incrementing C(m,n) by 1; after the gray matrix of the effective area has been traversed, the resulting matrix C is the gray level co-occurrence matrix;
(6) calculating the angular second moment F_ASM, entropy F_ENT, inverse difference moment F_IDM and contrast F_CON respectively:
F_ASM = sum(C(i,j)^2);  F_ENT = sum(C(i,j) * (-log C(i,j)));
F_IDM = sum(C(i,j) / (1 + (i-j)^2));  F_CON = sum((i-j)^2 * C(i,j)).
3. The method according to claim 1, wherein said step SO3 comprises the sub-steps of:
(1) obtaining from the feature set M the feature quantity F_n of the effective area of each frame image of sample A, where n = 1, 2, 3, …, N and N is the total number of images in sample A;
(2) removing the maximum value and the minimum value from sample A, and calculating the average of the remaining effective-area feature values as the sample center O_full:
O_full = (1 / N') * sum(F_n), n = 1, 2, …, N',
wherein N' is the number of samples after removing the maximum and minimum values;
(3) calculating the distance between each feature quantity in sample A and O_full, and finding the maximum value, which is the effective range R_full of that feature quantity in the occupied state;
(4) calculating the sample center O_empty and the effective range R_empty of sample B according to sub-steps (1)-(3).
4. The method according to claim 1, wherein the probability of the occupied state for the second feature quantity F' is calculated as:
[equation images: the probabilities P_ASM, P_ENT, P_IDM and P_CON, each expressed in terms of the corresponding second feature quantity F', the empty-state center O_empty and the demarcation point flag]
wherein O_empty is the sample center of the empty state and flag is the state demarcation point.
5. The method according to claim 1, wherein said step SO6 comprises the sub-steps of:
(1) calculating a reliability evaluation Q for each feature quantity, the reliability evaluation Q being the degree to which that feature quantity distinguishes the occupied state from the empty state of the effective area; the better the two states are distinguished, the larger the value of Q; the calculation formula of Q is:
[equation image: the reliability evaluation Q expressed in terms of O_full, O_empty, R_full and R_empty]
Substituting the O_full, O_empty, R_full and R_empty of each of the four feature quantities into the above formula yields the reliability evaluations Q_ASM, Q_ENT, Q_IDM and Q_CON;
(2) calculating the probability weight W of each feature quantity from the reliability evaluation Q:
[equation image: the probability weight W of each feature quantity expressed in terms of the reliability evaluations Q]
(3) calculating a weighted sum P of the probabilities:
P = P_ASM * W_ASM + P_ENT * W_ENT + P_IDM * W_IDM + P_CON * W_CON.
CN202011617045.5A 2020-12-31 2020-12-31 Parking space state detection method based on gray level co-occurrence matrix Active CN113158728B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011617045.5A CN113158728B (en) 2020-12-31 2020-12-31 Parking space state detection method based on gray level co-occurrence matrix

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011617045.5A CN113158728B (en) 2020-12-31 2020-12-31 Parking space state detection method based on gray level co-occurrence matrix

Publications (2)

Publication Number Publication Date
CN113158728A CN113158728A (en) 2021-07-23
CN113158728B (en) 2023-06-09

Family

ID=76878161

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011617045.5A Active CN113158728B (en) 2020-12-31 2020-12-31 Parking space state detection method based on gray level co-occurrence matrix

Country Status (1)

Country Link
CN (1) CN113158728B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115082443B (en) * 2022-07-25 2022-11-08 山东天意机械股份有限公司 Concrete product quality detection method based on intelligent monitoring platform

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106600585A (en) * 2016-12-08 2017-04-26 北京工商大学 Skin condition quantitative evaluation method based on gray level co-occurrence matrix
CN106600612A (en) * 2016-12-27 2017-04-26 重庆大学 Damage identification and detection method for electric automobile before and after renting

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10514462B2 (en) * 2017-12-13 2019-12-24 Luminar Technologies, Inc. Training a machine learning based model of a vehicle perception component based on sensor settings
CN108563994B (en) * 2018-03-14 2021-09-24 吉林大学 Parking lot parking space identification method based on image similarity
CN109993991A (en) * 2018-11-30 2019-07-09 浙江工商大学 Parking stall condition detection method and system
CN111081064B (en) * 2019-12-11 2021-12-14 上海赫千电子科技有限公司 Automatic parking system and automatic passenger-replacing parking method of vehicle-mounted Ethernet
CN110689761B (en) * 2019-12-11 2021-10-29 上海赫千电子科技有限公司 Automatic parking method
CN111402215B (en) * 2020-03-07 2022-04-29 西南交通大学 Contact net insulator state detection method based on robust principal component analysis method

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106600585A (en) * 2016-12-08 2017-04-26 北京工商大学 Skin condition quantitative evaluation method based on gray level co-occurrence matrix
CN106600612A (en) * 2016-12-27 2017-04-26 重庆大学 Damage identification and detection method for electric automobile before and after renting

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Zou Chao et al., "Texture defect detection method based on fuzzy class co-occurrence matrix", Journal of Image and Graphics, vol. 12, no. 1, 2017, full text. *

Also Published As

Publication number Publication date
CN113158728A (en) 2021-07-23

Similar Documents

Publication Publication Date Title
CN106373426B (en) Parking stall based on computer vision and violation road occupation for parking monitoring method
CN109033950B (en) Vehicle illegal parking detection method based on multi-feature fusion cascade depth model
CN111368687B (en) Sidewalk vehicle illegal parking detection method based on target detection and semantic segmentation
EP3806064B1 (en) Method and apparatus for detecting parking space usage condition, electronic device, and storage medium
CN103824452B (en) A kind of peccancy parking detector based on panoramic vision of lightweight
CN100449579C (en) All-round computer vision-based electronic parking guidance system
CN108389421B (en) Parking lot accurate induction system and method based on image re-identification
CN111898491B (en) Identification method and device for reverse driving of vehicle and electronic equipment
CN108765975B (en) Roadside vertical parking lot management system and method
CN113205107A (en) Vehicle type recognition method based on improved high-efficiency network
CN107527017A (en) Parking space detection method and system, storage medium and electronic equipment
CN113158728B (en) Parking space state detection method based on gray level co-occurrence matrix
CN112085018A (en) License plate recognition system based on neural network
CN112560814A (en) Method for identifying vehicles entering and exiting parking spaces
CN110765900A (en) DSSD-based automatic illegal building detection method and system
CN115620259A (en) Lane line detection method based on traffic off-site law enforcement scene
CN115294791A (en) Intelligent traffic guidance system for smart city
CN115116033A (en) Processing method and computing device for parking space range
Bachtiar et al. Parking management by means of computer vision
CN112802333A (en) AI video analysis-based highway network safety situation analysis system and method
CN114627653B (en) 5G intelligent barrier gate management system based on binocular recognition
JP7203277B2 (en) Method and apparatus for monitoring vehicle license plate recognition rate and computer readable storage medium
CN112446293B (en) Video detection method for track pollution event of highway pavement
CN117253377B (en) Intelligent vehicle information recognition and detection system and method based on Internet of things
CN113591814B (en) Road cleanliness detection method and detection system based on dynamic vision

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant