CN113743205A - Method for acquiring travel origin-destination information of bus passengers

Info

Publication number
CN113743205A
CN113743205A (application CN202110867512.8A)
Authority
CN
China
Prior art keywords
passengers
bus
passenger
face
getting
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110867512.8A
Other languages
Chinese (zh)
Inventor
刘洪宇
赵新潮
王全军
陈振
左海旺
孙浩
王亚
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhengzhou Tiamaes Technology Co ltd
Original Assignee
Zhengzhou Tiamaes Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhengzhou Tiamaes Technology Co ltd filed Critical Zhengzhou Tiamaes Technology Co ltd
Priority to CN202110867512.8A priority Critical patent/CN113743205A/en
Publication of CN113743205A publication Critical patent/CN113743205A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/213Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
    • G06F18/2135Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods based on approximation criteria, e.g. principal component analysis
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Mathematical Physics (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computing Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention belongs to the technical field of public transportation and particularly relates to a method for acquiring the travel origin-destination (OD) information of bus passengers. The method combines a hardware system installed on the bus with a software system. The hardware system comprises a front door camera, a rear door camera, a vehicle-mounted terminal and a server; the front and rear door cameras are connected to the vehicle-mounted terminal and collect the facial information of passengers boarding and alighting at each station, transmitting the captured face images to the terminal. Software modules installed on the terminal then perform face recognition, facial feature comparison and origin-destination derivation, so that the origin-destination information of bus passengers is acquired accurately. The method effectively reduces time complexity, improves accuracy while achieving fast matching, requires little effort for data acquisition, is convenient to implement and lends itself to wide deployment.

Description

Method for acquiring travel origin-destination information of bus passengers
Technical Field
The invention belongs to the technical field of public transportation, and particularly relates to a method for acquiring travel origin-destination information of passengers in a bus.
Background
The core of public transportation operation is passenger flow: once the running state of the vehicles is sensed, the passenger flow must be sensed, analyzed and predicted in order to comprehensively analyze and optimize the public transportation system. Urban bus passenger-flow characteristics reflect the temporal and spatial distribution of residents' bus travel, support research into residents' demand for bus service, provide a basis for real-time scheduling and optimization of the urban public transport system, and form the foundation for upgrading it. Detecting and predicting bus passenger flow is therefore of great significance. First, knowledge of passenger-flow conditions is the basis of bus route optimization design; second, passenger-flow analysis and prediction results, combined with electronic stop boards, can inform passengers of in-vehicle congestion and guide them to choose suitable travel routes; finally, these results provide data support for bus operators in route planning, vehicle dispatching and daily operation.
In prior work, Chen Xuan Mei et al. proposed a method and device for predicting the travel origin-destination information of bus passengers based on IC card records, estimating passenger OD from card data. Xianhui Xin et al. proposed a real-time bus OD estimation method based on AFC data, mining the mapping relationship between bus passenger flow and station passenger flow from the card-swiping data of the bus AFC system and constructing a Kalman-filter model for real-time estimation of bus passenger OD information. The article "A combined method for back-deducing the bus travel OD matrix" proposed computing the back-push weights with a particle swarm algorithm and finally deducing the passenger travel OD matrix with a BP neural network.
When identifying the travel origin-destination pairs of bus passengers, researchers have mostly relied on data such as IC card swiping records and tickets, so most current bus passenger-flow perception is based on such data. In Beijing, for example, passenger OD is deduced from swiping the card both when boarding and when alighting, combined with other data in the background; but this mode is difficult to popularize, because in many places the card is swiped only once (on boarding). Moreover, the data obtained this way do not constitute complete passenger OD information: the IC card record carries only time information, the spatial information must be deduced in the background from other data, and the accuracy of the result is hard to guarantee. In addition, with the development of technology and the popularization of mobile payment, OD inference schemes based on the IC card are facing a severe impact.
Disclosure of Invention
Aiming at the defects of existing methods for acquiring bus passenger-flow origin-destination information, namely incomplete information acquisition and data accuracy that is difficult to guarantee, the invention provides a method for acquiring the travel origin-destination information of bus passengers.
The technical scheme adopted by the invention for solving the technical problems is as follows: a bus passenger travel origin-destination information acquisition method is realized by combining a hardware system installed on a bus with a software system, wherein the hardware system comprises a front door camera installed on a front door of the bus, a rear door camera installed on a rear door of the bus, a vehicle-mounted terminal and a server; the software system comprises a face recognition module, a feature analysis module and a start-end information derivation module, wherein the face recognition module, the feature analysis module and the start-end information derivation module are respectively used for carrying out face recognition, face feature comparison and start-end information derivation on collected passenger face images, so that the start-end information of bus passengers is accurately acquired.
According to the method for acquiring the travel origin-destination information of the bus passengers, the front door camera is installed directly above the driver's seat, facing the boarding door, and the rear door camera is installed on the vehicle ceiling directly above the alighting door, facing it.
According to the method for acquiring the travel origin-destination information of the bus passengers, the front door camera and the rear door camera are IPC cameras.
The method for acquiring the travel origin-destination information of the bus passengers comprises the following steps:
step one, the front door camera and the rear door camera respectively acquire, at each station S_i (i = 1, 2, …, n, where n is the total number of bus stops), the stations at which passengers board and alight and the image information of each boarding and alighting passenger, and transmit this information to the vehicle-mounted terminal.
Step two, processing the image information transmitted by the front and rear door cameras by a human body recognition algorithm carried by the vehicle-mounted terminal, wherein the processing method comprises the following steps:
s1, inputting image information of passengers acquired by a front door camera and a rear door camera into a neural network framework YOLOv3 to obtain a characteristic diagram, after 5 layers of convolution are carried out on the characteristic diagram, carrying out convolution operation and upsampling on one branch respectively, and carrying out channel merging on the obtained characteristic diagram and an upper layer of characteristic diagram; the other branch directly outputs a prediction result through two-layer convolution; then obtaining the class probability of the boundary frame and the sample through the convolution layer 1x 1;
s2, calculating the class probability of the image by adopting the target loss function of the cross entropy
Figure BDA0003187914510000041
In the formula: l represents a category summaryRate; y isiA label representing a sample i, the positive class being 1 and the negative class being 0; p is a radical ofiPredicting the probability of being a positive class for sample i;
s3, compressing and cutting the face image, and cutting out the face detection scale larger than 80 x 80;
s4, the still face image is excluded, and the face image having the positional deviation is selected as the target face image.
Step three, comparing the collected face information according to 68 face feature points by adopting multi-model adaboost cascade, and constructing local features for each face feature key point, wherein the method specifically comprises the following steps:
s1, aligning the target face image to be close to the standard shape;
s2, PCA processing is carried out on the shape characteristics of the aligned images to obtain the images with reduced dimensions
S3, comparing each feature point of the face after dimensionality reduction with the standard face 68 point information graph, correcting the current face feature point to obtain the position of each key point in the current image so as to construct a local feature graph;
step four, obtaining the alighting-passenger candidate set and the alighting-station candidate set corresponding to each boarding passenger according to the local feature maps;
step five, classifying according to the alighting-passenger candidate set and facial features corresponding to the boarding passengers at each station, to respectively obtain the boarding passenger features V1 = {u1, u2, …, ui, …, un} and the alighting passenger features V2 = {d1, d2, …, dj, …, dm}; then generating a bipartite graph with the boarding passenger features and the alighting passenger features as the two vertex sets, and running a breadth-first search and a depth-first search to obtain the alighting passenger and the alighting station corresponding to every boarding passenger.
According to the method for acquiring the travel origin-destination information of the bus passengers, PCA processing is performed on the aligned image shape characteristics, and the processing method comprises the following steps:
(1) forming a matrix X by the image shape characteristics according to columns;
(2) zero-averaging each row of the matrix X;
(3) calculating a covariance matrix of the matrix X;
(4) calculating an eigenvector and an eigenvalue of the covariance matrix;
(5) and sorting the eigenvectors according to the corresponding eigenvalues, and finally obtaining the data after dimensionality reduction by taking a matrix formed by the first K rows.
In the method for acquiring the travel origin-destination information of the bus passengers, the method for acquiring the candidate sets of the get-off passengers and the get-off stops in the fourth step is as follows:
s1, acquiring facial features u of the passengers iiAnd the boarding station number S of the passengeri
S2, if i is less than n, acquiring the facial feature d of the get-off passenger jjAnd the passenger' S departure station number SqJudging whether j is smaller than m;
(1) if j is smaller than m, calculating the similarity cos(i, j) between i and j, traversing all alighting passengers (j + 1), computing the maximum similarity max(cos(i, j)), and proceeding to step S3;
cos(i, j) = (ui · dj) / (‖ui‖ · ‖dj‖)

f(i, j) = 1 if cos(i, j) > 0.7, and f(i, j) = 0 otherwise

where the function f(i, j) indicates whether the current boarding passenger and the current alighting passenger are the same person: when the cosine similarity of the two passengers' facial features exceeds 0.7 they are considered the same person, otherwise not. The f(i, j) function is mainly used to retrieve the boarding and alighting station information of the same passenger from the database;
(2) if j is greater than or equal to m, returning to step S1;
S3, traversing all boarding passengers (i + 1) and looping steps S1-S3 until all matching is completed, obtaining the alighting-passenger and alighting-station candidate sets corresponding to all boarding passengers.
In the method for acquiring travel origin-destination information of the bus passengers, in the fifth step a bipartite graph is generated with the boarding passenger features and the alighting passenger features as vertices: the vertex set is divided into two disjoint parts, the alighting passenger features and the boarding passenger features, so that the two endpoints of any edge in the graph lie in the alighting passenger features and the boarding passenger features respectively. The matching comprises the following steps:
(1) running a breadth-first search (BFS) to determine the level of each point in V1;
(2) selecting an unmatched point from V1 at random, initializing it, and connecting it with all unmatched points in V2;
(3) measuring the similarity between the facial features of the i-th boarding passenger and those of the j-th alighting passenger with the cosine similarity, thereby obtaining the length of an augmenting path;
(4) exiting directly when the level of the searched point in V1 exceeds the number of V1 points on the shortest augmenting path;
(5) starting a depth-first search (DFS) from every unmatched point of V1, traversing according to the levels established by the BFS, and returning to step (1) until all augmenting paths are found.
According to the bus passenger travel origin-destination information acquisition method, the standard shape refers to a face conforming to the 68 facial feature points; images in which part of the facial features is occluded are removed at the same time.
The invention has the beneficial effects that: by combining hardware and software, facial images of passengers boarding and alighting at each station are acquired by the hardware; a YOLOv3 deep neural network model is selected, pruned and trained to obtain a face recognition algorithm with high recognition precision, which eliminates a large number of non-faces and improves recognition accuracy. Then, with the 68 facial feature points as features, a mapping function from face image to facial features is established in an adaboost cascade manner to obtain the boarding passenger features and the alighting passenger features; with these features as vertices, a bipartite graph algorithm searches and matches the boarding features against the alighting features until every boarding passenger feature corresponds to an alighting passenger feature, completing the acquisition of the origin-destination information of passengers on the whole line. The method markedly improves face recognition precision and accuracy; the bipartite graph effectively reduces time complexity, further improves computational efficiency and completes feature matching quickly; a corresponding alighting station can be deduced accurately for every boarding passenger, with higher precision than derivation algorithms based on multi-source data such as IC cards, while data acquisition and implementation are both easier than with multi-source card-swiping data.
Drawings
FIG. 1 is an overall flow chart of the present invention.
FIG. 2 is a flow chart of the framework of the present invention.
Fig. 3 is a flowchart of travel origin-destination information analysis.
Fig. 4 is a schematic diagram of passenger feature matching.
Detailed Description
Aiming at the problems of incomplete information acquisition and low accuracy existing in the prior art of acquiring OD information according to an IC card, the invention provides a bus passenger origin-destination information acquisition method capable of comprehensively and accurately acquiring passenger origin-destination information. The invention is further illustrated with reference to the following figures and examples.
A bus passenger travel origin-destination information acquisition method is realized by combining a hardware system installed on a bus with a software system, wherein the hardware system comprises a front door camera installed at the front door of the bus, a rear door camera installed at the rear door of the bus, a vehicle-mounted terminal and a server, the front door camera and the rear door camera are respectively connected with the vehicle-mounted terminal, the front door camera and the rear door camera are respectively used for acquiring facial information of passengers on each station and transmitting the acquired facial information of the passengers to the vehicle-mounted terminal, and the software system comprises a face recognition module, a feature analysis module and an origin-destination information derivation module which are respectively used for carrying out face recognition, face feature comparison and origin-destination information derivation on the acquired facial images of the passengers so as to accurately acquire origin-destination information of the bus passengers.
The front and rear door cameras used in the invention are IPC cameras of type TM9601. These cameras perform face recognition based on an infrared camera and lens information; facial feature extraction is embedded at the front end, which is responsible for face capture and feature extraction, and feature information is exchanged with the vehicle-mounted unit through a wireless module or a network cable. If the same target is captured several times, the capture with the highest definition, largest scale and most complete facial features is selected and transmitted to the vehicle-mounted equipment. The vehicle-mounted unit is a TM8731; it is connected to the background server through a wireless network module or a network cable and to the IPC cameras, stores and manages the passenger face data, matches the facial feature vectors from the front and rear doors by comparison, and generates the passenger travel origin-destination information in combination with the station spatial information.
The method mainly comprises the following steps:
step one, hardware installation
The front door camera is installed directly above the driver's seat, facing the boarding door; the rear door camera is installed directly above the alighting door, facing it.
Step two, face recognition
The front door camera and the rear door camera respectively acquire, at each station Si (i = 1, 2, …, n, where n is the total number of bus stops), the stations at which passengers board and alight and the image information of each boarding and alighting passenger, and transmit this information to the vehicle-mounted terminal;
secondly, the human body recognition algorithm carried by the vehicle-mounted terminal processes the image information transmitted by the front and rear door cameras, and the processing method comprises the following steps:
1. Inputting the passenger image information acquired by the front and rear door cameras into the neural network framework YOLOv3 to obtain a feature map; after 5 convolution layers, one branch performs a convolution operation followed by upsampling, and the resulting feature map is channel-merged with the feature map of the upper layer; the other branch outputs a prediction result directly through two convolution layers; the bounding boxes and the class probabilities of the samples are then obtained through a 1×1 convolution layer;
2. Calculating the class probability of the image with a cross-entropy objective loss function:

L = −[y_i·log(p_i) + (1 − y_i)·log(1 − p_i)]

where L denotes the class-probability loss; y_i is the label of sample i (1 for the positive class, 0 for the negative class); p_i is the predicted probability that sample i is positive.
3. Compressing and cropping the face images, keeping only face detections larger than 80 × 80 pixels;
4. Excluding static face images and selecting face images that show positional change as the target face images.
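The cross-entropy objective in step 2 can be sketched as follows. This is an illustrative NumPy implementation, not the production code of the vehicle-mounted terminal; the function name, the averaging over samples and the `eps` clipping constant are assumptions.

```python
import numpy as np

def binary_cross_entropy(y, p, eps=1e-12):
    """Cross-entropy loss L = -[y*log(p) + (1-y)*log(1-p)], averaged over samples.

    y : labels (1 = positive class, 0 = negative class)
    p : predicted positive-class probability for each sample
    """
    p = np.clip(p, eps, 1.0 - eps)  # guard against log(0)
    return float(np.mean(-(y * np.log(p) + (1.0 - y) * np.log(1.0 - p))))

# Confident, correct predictions yield a small loss; wrong ones a large loss.
y = np.array([1.0, 0.0, 1.0])
p = np.array([0.9, 0.1, 0.8])
loss = binary_cross_entropy(y, p)  # ≈ 0.1446
```

During training this loss would be minimized over the labelled face/non-face samples; here it is shown only to make the formula concrete.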
Step three, comparing the collected face information according to 68 face feature points by adopting multi-model adaboost cascade, and constructing local features for each face feature key point, wherein the method specifically comprises the following steps:
1. Aligning the target face image so that it approaches the standard shape, where the standard shape refers to a face conforming to the 68 facial feature points; images in which part of the facial features is occluded are removed at the same time;
2. performing PCA processing on the aligned image shape characteristics, wherein the processing method comprises the following steps:
(1) forming a matrix X by the image shape characteristics according to columns;
(2) zero-averaging each row of the matrix X;
(3) calculating a covariance matrix of the matrix X;
(4) calculating an eigenvector and an eigenvalue of the covariance matrix;
(5) sorting the eigenvectors by their corresponding eigenvalues and taking the matrix formed by the top K rows to finally obtain the dimension-reduced data.
3. Comparing each feature point of the current face with the information graph of the 68 points of the standard face, correcting the feature points of the current face to obtain the positions of each key point in the current image so as to construct a local feature graph;
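The PCA steps (1)-(5) above can be sketched in NumPy as follows — a minimal illustration assuming the shape features are stacked by columns as in step (1); the function and variable names are chosen for this example only.

```python
import numpy as np

def pca_reduce(X, k):
    """PCA dimensionality reduction following steps (1)-(5).

    X : matrix whose columns are samples of the image shape features
    k : number of principal components to keep
    Returns the (k, n_samples) dimension-reduced data.
    """
    Xc = X - X.mean(axis=1, keepdims=True)   # (2) zero-mean each row
    C = Xc @ Xc.T / Xc.shape[1]              # (3) covariance matrix of X
    vals, vecs = np.linalg.eigh(C)           # (4) eigenvalues and eigenvectors
    order = np.argsort(vals)[::-1]           # (5) sort by eigenvalue, descending
    P = vecs[:, order[:k]].T                 # matrix formed by the top K rows
    return P @ Xc                            # project onto the K components

# Reduce 4-dimensional shape features of 10 samples down to 2 dimensions.
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 10))
Y = pca_reduce(X, 2)                         # shape (2, 10)
```

The first row of the result carries the most variance, the second the next most, which is what makes the reduced features useful for the later feature-point comparison.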
Step four, obtaining the alighting-passenger candidate set and the alighting-station candidate set corresponding to each boarding passenger according to the local feature maps, by the following method:
the number of passengers getting on the bus in the line is n, the number of passengers getting off the bus is m, (the number of the passengers getting on the bus and the number of the passengers getting off the bus in the line are consistent), and the basic information of each passenger getting on the bus comprises the facial features u of the passengers getting on the busiAnd the boarding station number S of the passengeri(ii) a The basic information of each getting-off passenger includes facial features d of the getting-off passengerjAnd the passenger' S departure station number Sj
1. Acquiring the facial features ui of boarding passenger i and the passenger's boarding station number Si;
2. If i is less than n, acquiring the facial features dj of alighting passenger j and the passenger's alighting station number Sj, and judging whether j is smaller than m;
(1) if j is smaller than m, calculating the similarity cos(i, j) between i and j, traversing all alighting passengers (j + 1), computing the maximum similarity max(cos(i, j)), and proceeding to step 3;
cos(i, j) = (ui · dj) / (‖ui‖ · ‖dj‖)

f(i, j) = 1 if cos(i, j) > 0.7, and f(i, j) = 0 otherwise

where the function f(i, j) indicates whether the current boarding passenger and the current alighting passenger are the same person: when the cosine similarity of the two passengers' facial features exceeds 0.7 they are considered the same person, otherwise not. The f(i, j) function is mainly used to retrieve the boarding and alighting station information of the same passenger from the database.
(2) If j is greater than or equal to m, returning to step 1.
3. Traversing all boarding passengers (i + 1) and looping steps 1-3 until all matching is completed, obtaining the alighting-passenger and alighting-station candidate sets corresponding to all boarding passengers.
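Steps 1-3 of the candidate-set construction can be sketched as follows. This is a simplified illustration: the nested loop and the 0.7 threshold come from the text above, while the data layout (lists of feature-vector/station pairs) and the constraint that the alighting station must come after the boarding station are illustrative assumptions.

```python
import numpy as np

def cosine(u, d):
    """cos(i, j) = (u · d) / (||u|| ||d||)."""
    return float(np.dot(u, d) / (np.linalg.norm(u) * np.linalg.norm(d)))

def candidate_alighting(boarding, alighting, threshold=0.7):
    """Build the alighting candidate set for every boarding passenger.

    boarding : list of (u_i, S_i) pairs -- facial features, boarding station
    alighting: list of (d_j, S_j) pairs -- facial features, alighting station
    Returns {boarding index: [(similarity, alighting index), ...], best first}.
    """
    candidates = {}
    for i, (u, s_on) in enumerate(boarding):
        matches = []
        for j, (d, s_off) in enumerate(alighting):
            if s_off <= s_on:        # assumed: can only alight after boarding
                continue
            c = cosine(u, d)
            if c > threshold:        # f(i, j) = 1 when cos(i, j) > 0.7
                matches.append((c, j))
        matches.sort(reverse=True)   # max(cos(i, j)) first
        candidates[i] = matches
    return candidates

# Passenger boards at station 1; the first alighting candidate matches closely.
boarding = [(np.array([1.0, 0.0]), 1)]
alighting = [(np.array([0.9, 0.1]), 3), (np.array([0.0, 1.0]), 4)]
cands = candidate_alighting(boarding, alighting)
```

The per-passenger candidate lists produced here are what the bipartite matching of step five operates on.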
Step five, classifying according to the corresponding get-off passenger candidate set of the get-on passenger at each station and the facial features to respectively obtain the feature V of the get-on passenger1={u1,u2,…ui,…unAnd get-off passenger characteristics V2={d1,d2,…dj,…dm}
In the formula: v1Indicating the boarding passenger characteristics, uiFacial features representing an boarding passenger; v2Indicating characteristics of passengers getting off, djA facial feature representing a alighting passenger;
According to the boarding and alighting passenger features, a bipartite graph is generated with the boarding passenger features and the alighting passenger features as vertices. The vertex set is divided into two disjoint parts, V2 (alighting passenger features) and V1 (boarding passenger features), such that the two endpoints of any edge in the graph lie in V2 and V1 respectively; the alighting passengers and alighting stations corresponding to all boarding passengers are then obtained. A matching of the bipartite graph is a subset E′ of the edge set E such that no two edges in E′ share a vertex. The procedure comprises the following steps:
(1) running a breadth-first search (BFS) to determine the level of each point in V1;
(2) selecting an unmatched point from V1 at random, initializing it, and connecting it with all unmatched points in V2;
(3) measuring the similarity between the facial features of the i-th boarding passenger and those of the j-th alighting passenger with the cosine similarity, thereby obtaining the length of an augmenting path;
(4) exiting directly when the level of the searched point in V1 exceeds the number of V1 points on the shortest augmenting path;
(5) starting a depth-first search (DFS) from every unmatched point of V1, traversing according to the levels established by the BFS, and returning to step (1) until all augmenting paths are found.
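Steps (1)-(5) describe a layered BFS/DFS augmenting-path search over the bipartite graph. The sketch below is a generic Hopcroft-Karp-style implementation of that idea, under the assumption that the candidate sets have already been turned into an adjacency list of edges from V1 to V2; the names are illustrative, not taken from the patent.

```python
from collections import deque

def hopcroft_karp(adj, n_left, n_right):
    """Maximum bipartite matching via BFS levels plus DFS augmenting paths.

    adj : adjacency list; adj[i] = candidate V2 vertices for V1 vertex i
    Returns (matching size, match_left) where match_left[i] is i's partner or -1.
    """
    INF = float("inf")
    match_l = [-1] * n_left
    match_r = [-1] * n_right

    def bfs():
        # (1) layer the unmatched V1 vertices and grow levels along matched edges
        dist = [INF] * n_left
        q = deque(i for i in range(n_left) if match_l[i] == -1)
        for i in q:
            dist[i] = 0
        found = False
        while q:
            i = q.popleft()
            for j in adj[i]:
                nxt = match_r[j]
                if nxt == -1:
                    found = True              # reached a free V2 vertex
                elif dist[nxt] == INF:
                    dist[nxt] = dist[i] + 1   # next V1 layer
                    q.append(nxt)
        return found, dist

    def dfs(i, dist):
        # (5) follow only the levels established by the BFS
        for j in adj[i]:
            nxt = match_r[j]
            if nxt == -1 or (dist[nxt] == dist[i] + 1 and dfs(nxt, dist)):
                match_l[i], match_r[j] = j, i
                return True
        dist[i] = INF                         # (4) prune exhausted vertices
        return False

    size = 0
    while True:
        found, dist = bfs()
        if not found:                         # no augmenting path remains
            break
        for i in range(n_left):
            if match_l[i] == -1 and dfs(i, dist):
                size += 1
    return size, match_l

# Toy example: 2 boarding passengers, 2 alighting candidates.
size, match = hopcroft_karp([[0, 1], [0]], 2, 2)
```

In the passenger-matching setting each V1 vertex is a boarding passenger, each V2 vertex an alighting passenger, and edges are the candidate pairs whose cosine similarity exceeds the threshold.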
The time complexity of this matching method is O(E·√V), where V is the number of vertices and E the number of edges of the bipartite graph.
Compared with an exhaustive search, this greatly reduces the time needed to match the facial features of boarding and alighting passengers, so feature matching is completed quickly. The bipartite graph algorithm can accurately deduce the corresponding alighting station for every boarding passenger; compared with derivation algorithms based on multi-source data such as IC cards it is more precise, and compared with multi-source card-swiping data both the acquisition difficulty and the implementation difficulty are lower.

Claims (8)

1. A method for acquiring travel origin-destination information of bus passengers, characterized in that: it is realized by a hardware system installed on a bus in combination with a software system; the hardware system comprises a front door camera installed at the front door of the bus, a rear door camera installed at the rear door of the bus, a vehicle-mounted terminal and a server; the software system comprises a face recognition module, a feature analysis module and an origin-destination information derivation module, which respectively perform face recognition, facial-feature comparison and origin-destination derivation on the collected passenger face images, so that the origin-destination information of bus passengers is accurately acquired.
2. The method for acquiring travel origin-destination information of bus passengers according to claim 1, characterized in that: the front door camera is mounted above the driver's seat, facing the boarding door, and the rear door camera is mounted above the alighting door, facing it.
3. The method for acquiring travel origin-destination information of bus passengers according to claim 1 or 2, characterized in that: the front door camera and the rear door camera are IP cameras (IPC).
4. The method for acquiring travel origin-destination information of bus passengers according to claim 1, characterized in that: the origin-destination information acquisition method comprises the following steps:
step one, acquiring, by the front door camera and the rear door camera respectively, image information of each boarding and alighting passenger at stop S_i (i = 1, 2, …, n, where n is the total number of bus stops), and transmitting the image information to the vehicle-mounted terminal;
step two, processing the image information transmitted by the front and rear door cameras by a human body recognition algorithm carried by the vehicle-mounted terminal, wherein the processing method comprises the following steps:
s1, inputting image information of passengers acquired by a front door camera and a rear door camera into a neural network framework YOLOv3 to obtain a characteristic diagram, after 5 layers of convolution are carried out on the characteristic diagram, carrying out convolution operation and upsampling on one branch respectively, and carrying out channel merging on the obtained characteristic diagram and an upper layer of characteristic diagram; the other branch directly outputs a prediction result through two-layer convolution; then obtaining the class probability of the boundary frame and the sample through the convolution layer 1x 1;
s2, calculating the class probability of the image by adopting the target loss function of the cross entropy
Figure FDA0003187914500000021
In the formula: l represents a class probability; y isiA label representing a sample i, the positive class being 1 and the negative class being 0; p is a radical ofiPredicting the probability of being a positive class for sample i;
s3, compressing and cutting the face image, and cutting out the face detection scale larger than 80 x 80;
s4, excluding the still face image, selecting the face image with the position offset as the target face image;
step three, comparing the collected face information against 68 facial feature points by adopting a multi-model AdaBoost cascade, and constructing a local feature for each facial key point, specifically comprising the following steps:
s1, aligning the target face image to be close to the standard shape;
s2, performing PCA (principal component analysis) processing on the aligned image shape characteristics to obtain a dimension-reduced image;
s3, comparing each feature point of the face after dimensionality reduction with the standard face 68 point information graph, correcting the current face feature point to obtain the position of each key point in the current image so as to construct a local feature graph;
step four, obtaining, according to the local feature map, the candidate set of alighting passengers and the candidate set of alighting stations corresponding to each boarding passenger;
step five, classifying according to each station's boarding passengers' candidate alighting-passenger sets and facial features to respectively obtain the boarding-passenger features V1 = {u_1, u_2, …, u_i, …, u_n} and the alighting-passenger features V2 = {d_1, d_2, …, d_j, …, d_m}; generating a bipartite graph with the boarding-passenger features and the alighting-passenger features as the two vertex sets, and performing a breadth-first search and a depth-first search to obtain the alighting passenger and alighting station corresponding to every boarding passenger.
5. The method for acquiring travel origin-destination information of bus passengers as claimed in claim 4, wherein: PCA processing is performed on the shape features of the aligned image as follows:
(1) forming a matrix X by the image shape characteristics according to columns;
(2) zero-averaging each row of the matrix X;
(3) calculating a covariance matrix of the matrix X;
(4) calculating an eigenvector and an eigenvalue of the covariance matrix;
(5) sorting the eigenvectors by their corresponding eigenvalues and taking the matrix formed by the first K rows to finally obtain the dimension-reduced data.
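Steps (1)-(5) above can be sketched in numpy as follows (an illustrative, non-limiting sketch; the function name is assumed, and each shape-feature vector is taken as one column of X):

```python
import numpy as np

def pca_reduce(X, k):
    """PCA per claim 5: X holds one shape-feature vector per column.
    Returns the k-dimensional projection of every column."""
    Xc = X - X.mean(axis=1, keepdims=True)   # (2) zero-mean each row
    C = Xc @ Xc.T / Xc.shape[1]              # (3) covariance matrix
    vals, vecs = np.linalg.eigh(C)           # (4) eigenvalues/eigenvectors
    order = np.argsort(vals)[::-1]           # (5) sort by eigenvalue, descending
    P = vecs[:, order[:k]].T                 # first k eigenvectors as rows
    return P @ Xc                            # dimension-reduced data
```

Note that `np.linalg.eigh` returns eigenvalues in ascending order, hence the explicit descending re-sort in step (5).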
6. The method for acquiring travel origin-destination information of bus passengers as claimed in claim 4, wherein: the method for acquiring the candidate sets of alighting passengers and alighting stations in step four comprises the following steps:
s1, acquiring facial features u of the passengers iiAnd the boarding station number S of the passengeri
S2, if i is less than n, acquiring the facial feature d of the get-off passenger jjAnd the passenger' S departure station number SqJudging whether j is smaller than m;
(1) if j is smaller than m, calculating the similarity cos(i, j) between i and j, traversing all alighting passengers (j + 1), taking the maximum similarity max(cos(i, j)), and proceeding to step S3;
cos(i, j) = (u_i · d_j) / (‖u_i‖ · ‖d_j‖)

f(i, j) = 1 if cos(i, j) > 0.7, otherwise f(i, j) = 0

wherein the function f(i, j) represents judging whether the current boarding passenger and the current alighting passenger are the same person: the two passengers are considered the same person when the cosine distance of their facial features is greater than 0.7, and otherwise not the same person; the f(i, j) function is mainly used for retrieving the boarding and alighting station information of the same passenger from the database;
(2) if j is greater than or equal to m, returning to step S1;
S3, traversing all boarding passengers (i + 1) and repeating steps S1-S3 until all matching is completed, obtaining the candidate set of alighting stations corresponding to all boarding passengers.
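A simplified Python sketch of the cosine-distance matching loop in steps S1-S3 (function and variable names are illustrative; the 0.7 threshold is the one stated in the claim):

```python
import numpy as np

def cos_sim(u, d):
    """cos(i, j): cosine distance between boarding feature u_i and
    alighting feature d_j."""
    return float(np.dot(u, d) / (np.linalg.norm(u) * np.linalg.norm(d)))

def match_alighting(boarding, alighting, threshold=0.7):
    """For every boarding passenger i, traverse all alighting passengers j,
    keep the j with the maximum cos(i, j), and accept the match only when
    it exceeds the threshold (f(i, j) = 1). Returns {i: j or None}."""
    result = {}
    for i, u in enumerate(boarding):
        best_j, best = None, threshold
        for j, d in enumerate(alighting):
            s = cos_sim(u, d)
            if s > best:
                best_j, best = j, s
        result[i] = best_j  # None means no alighting candidate passed 0.7
    return result
```

A boarding passenger with no candidate above 0.7 keeps `None` and carries no alighting-station entry into the candidate set.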
7. The method for acquiring travel origin-destination information of bus passengers as claimed in claim 4, wherein: in step five, a bipartite graph is generated with the boarding-passenger features and the alighting-passenger features as vertices, the vertex set being divided into two disjoint parts, V2 (alighting-passenger features) and V1 (boarding-passenger features), such that the two endpoints of any edge in the graph lie in V2 and V1 respectively, comprising the following steps:
(1) performing a breadth-first search (BFS) to determine the level of each point in V1;
(2) randomly selecting an unmatched point from V1 and connecting it with all unmatched points in V2;
(3) measuring the similarity between the facial features of the i-th boarding passenger and the facial features of the j-th alighting passenger by cosine distance, thereby obtaining the length of an augmenting path;
(4) exiting directly when the level of the point being searched in V1 exceeds the number of V1 points on the shortest augmenting path;
(5) performing a depth-first search (DFS) from every unmatched point of V1, traversing according to the hierarchy established by the BFS, and returning to step (1) until all augmenting paths are found.
8. The method for acquiring travel origin-destination information of bus passengers as claimed in claim 4, wherein: the standard shape refers to a face matched to the 68 facial feature points, and images in which part of the facial features are occluded are removed.
CN202110867512.8A 2021-07-30 2021-07-30 Method for acquiring travel origin-destination information of bus passengers Pending CN113743205A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110867512.8A CN113743205A (en) 2021-07-30 2021-07-30 Method for acquiring travel origin-destination information of bus passengers


Publications (1)

Publication Number Publication Date
CN113743205A true CN113743205A (en) 2021-12-03


Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103678984A (en) * 2013-12-20 2014-03-26 湖北微模式科技发展有限公司 Method for achieving user authentication by utilizing camera
CN109508700A (en) * 2018-12-28 2019-03-22 广州粤建三和软件股份有限公司 A kind of face identification method, system and storage medium
CN110516600A (en) * 2019-08-28 2019-11-29 杭州律橙电子科技有限公司 A kind of bus passenger flow detection method based on Face datection
CN111027350A (en) * 2018-10-10 2020-04-17 成都理工大学 Improved PCA algorithm based on human face three-dimensional reconstruction
CN111311467A (en) * 2020-02-11 2020-06-19 罗普特科技集团股份有限公司 Bus route prediction method and system based on face recognition
KR20210037925A (en) * 2019-09-30 2021-04-07 주식회사 씨엘 Passenger counting apparatus using computer vision and passenger monitoring system thereof


Non-Patent Citations (1)

Title
DU Qiliang et al.: "Abnormal behavior recognition of escalator passengers based on video surveillance", Journal of South China University of Technology (Natural Science Edition), vol. 48, no. 8, 15 August 2020 (2020-08-15), pages 10-21 *


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination