CN109344776B - Data processing method - Google Patents

Data processing method

Info

Publication number
CN109344776B
Authority
CN
China
Prior art keywords
face
matching degree
face picture
data
control unit
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811172598.7A
Other languages
Chinese (zh)
Other versions
CN109344776A (en)
Inventor
张德兆
王肖
霍舒豪
李晓飞
张放
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Idriverplus Technologies Co Ltd
Original Assignee
Beijing Idriverplus Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Idriverplus Technologies Co Ltd
Priority to CN201811172598.7A
Publication of CN109344776A
Application granted
Publication of CN109344776B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation
    • G06V40/172 Classification, e.g. identification
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention provides a data processing method comprising the following steps: a control unit acquires environment perception data around a vehicle collected by an acquisition device, the environment perception data including position information of the vehicle; the control unit processes the environment perception data and obtains face features from the processing result; it calculates a first matching degree between the face features and each face picture in a face picture set in a first database; it determines any face picture whose first matching degree is greater than a preset first matching degree threshold to be a first face picture; and it sends the face features, the first face picture and the position information to a server. The server calculates a second matching degree between the face features and each face picture in a face picture set in a second database, determines any face picture whose second matching degree is greater than a preset second matching degree threshold to be a second face picture, and sends the face features, the second face picture and the position information to a third-party server. In this way, the data generated by unmanned equipment can be put to use, and urban security costs can be saved.

Description

Data processing method
Technical Field
The invention relates to the technical field of security, and in particular to a method for processing the environment perception data that unmanned equipment generates during operation.
Background
Face recognition is a biometric technology that identifies a person based on facial feature information. A family of related technologies, commonly also called portrait recognition or facial recognition, captures an image or video stream containing a face with a camera, automatically detects and tracks the face in the image, and then recognizes the detected face.
In the prior art, security protection is often achieved by deploying cameras and performing face recognition on the data they collect, so that abnormal personnel can be identified. However, this approach suffers from huge cost, monitoring blind spots, and other defects.
Unmanned equipment senses the road environment through an on-board sensing system, automatically plans a driving route, and controls the vehicle to reach a preset destination.
On-board sensors perceive the surroundings of the vehicle, and the steering and speed of the vehicle are controlled according to the perceived road, vehicle position and obstacle information, so that the vehicle can travel on the road safely and reliably.
Existing unmanned vehicles generate a large amount of data during operation, yet this data is used only to evaluate the performance of the unmanned vehicle and serves no other purpose.
Therefore, how to devise a reasonable scheme that both utilizes the data of unmanned equipment and saves urban security costs is a problem to be solved urgently.
Disclosure of Invention
An embodiment of the present invention provides a data processing method to solve the problems in the prior art.
In order to solve the above problem, the present invention provides a data processing method, including:
the control unit acquires environment perception data around the vehicle collected by an acquisition device; the environment perception data comprises location information of the vehicle;
the control unit processes the environment perception data and obtains face features according to the processing result;
the control unit calculates a first matching degree of the face features and each face picture in a face picture set in a first database;
the control unit determines that the face picture with the first matching degree larger than a preset first matching degree threshold value is a first face picture;
the control unit sends the face features, the first face picture and the position information to a server;
the server calculates the second matching degree of the face features and each face picture in a face picture set in a second database;
the server determines that the face picture with the second matching degree larger than a preset second matching degree threshold value is a second face picture;
and the server sends the face features, the second face picture and the position information to a third-party server.
In one possible implementation, the environment perception data includes laser point cloud data and video data;
the control unit processing the environment perception data and obtaining the face features according to the processing result specifically comprises:
segmenting and tracking the laser point cloud data to obtain a point cloud segmentation result;
processing the point cloud segmentation result to obtain a first face feature;
identifying a face region in the video data through a face detection algorithm;
extracting a second face feature from the face region through face feature extraction;
and correcting, on a time axis, the second face features with the first face features to obtain the face features.
In a possible implementation manner, before the server sends the facial features, the second facial picture, and the location information to a third-party server, the method further includes:
when the second matching degree is larger than a preset second matching degree threshold value, generating alarm information;
and sending the alarm information to a third-party server.
In one possible implementation, after the server sends the face features, the second face picture and the position information to the third-party server, the method further includes:
the server sends the second face picture to the control unit;
and the control unit updates the first database according to the second face picture.
In one possible implementation, the method further includes:
when the second matching degree is not larger than a preset second matching degree threshold value, generating recording information; the recording information includes a recording time.
In one possible implementation, the control unit reads position information obtained by a global positioning system on the vehicle.
By applying the data processing method provided by the invention, the environment perception data generated by unmanned equipment is matched twice; after both matches succeed, an alarm is sent to the third-party server together with the position information recorded when the environment perception data was collected, so that the third-party server can conveniently carry out its work according to the position information.
Drawings
Fig. 1 is a schematic flow chart of a data processing method according to an embodiment of the present invention.
Detailed Description
The present application will be described in further detail with reference to the following drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the invention and are not to be construed as limiting the invention. It should be further noted that, for the convenience of description, only the portions related to the related invention are shown in the drawings.
It should be noted that, in the present application, the embodiments and features of the embodiments may be combined with each other without conflict. The present application will be described in detail below with reference to the embodiments with reference to the attached drawings.
Fig. 1 is a schematic flow chart of a data processing method according to an embodiment of the present invention. The data processing method is applied to the field of unmanned driving, in particular to unmanned vehicles, and especially to unmanned vehicles operating in open urban environments rather than closed parks. In this way, the data of unmanned equipment can be utilized and urban security costs can be saved.
As shown in fig. 1, the method comprises the steps of:
Step 101, the control unit acquires environment perception data around the vehicle collected by the acquisition device; the environment perception data includes position information of the vehicle.
Specifically, in one example, the unmanned vehicle is provided with an acquisition device and a control unit. The control unit is the data processing center of the unmanned vehicle: it performs path planning according to the environment perception data collected by the acquisition device, thereby realizing automatic driving.
The acquisition device gathers both environment perception data and the position information of the vehicle. The devices that collect environment perception data include, but are not limited to, lidar and cameras; the lidar is a vehicle-mounted lidar that exploits the high propagation speed and good linearity of laser light, emitting laser pulses and using the returned information to describe the surface morphology of the measured object. The device that collects the position information of the vehicle includes, but is not limited to, a Global Positioning System (GPS) receiver.
By way of example and not limitation, there may be two lidars, one at the front of the vehicle and the other at the rear, and four cameras, mounted at the front left, front right, rear left and rear right of the vehicle. This improves the accuracy of obstacle recognition during vehicle operation and also ensures the accuracy of the extracted face features; for example, for the image of one person, first video data containing the face may be acquired by the front-left camera and second video data containing the face by the rear-left camera while driving.
The environment perception data comprises the laser point cloud data collected by the lidar and the video data collected by the cameras. In another example, the control unit may reside in a server of the vehicle; the environment perception data is then sent to that server, which performs the first matching, and this server may be regarded as a cloud server.
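By way of illustration and not limitation, the environment perception record described above can be modeled as a simple container holding the lidar point cloud, the camera frames and the GPS position. The following minimal Python sketch uses field names (point_cloud, frames, gps_position) that are illustrative assumptions, not terms defined by the patent.

```python
# Minimal sketch of the environment perception data described in step 101.
# Field names are illustrative assumptions, not terms from the patent.
from dataclasses import dataclass, field
from typing import List, Tuple

import numpy as np


@dataclass
class PerceptionData:
    point_cloud: np.ndarray                                  # (N, 3) lidar points in the vehicle frame
    frames: List[np.ndarray] = field(default_factory=list)   # camera images, each (H, W, 3)
    gps_position: Tuple[float, float] = (0.0, 0.0)           # (latitude, longitude)
```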
Step 102, the control unit processes the environment perception data and obtains the face features according to the processing result.
Specifically, the environment perception data includes laser point cloud data and video data, and step 102 includes the following steps:
firstly, segmenting and tracking laser point cloud data to obtain a point cloud segmentation result;
then, processing a point cloud segmentation result to obtain a first face feature;
then, identifying a face area in the video data through a face detection algorithm;
then, extracting a second face feature from the face region through face feature extraction;
and finally, on a time axis, correcting the second face features through the first face features to obtain the face features.
The correction process judges whether the object identified by point cloud segmentation and tracking matches the object identified by feature recognition. For example, if the point cloud segmentation and tracking result identifies the object as a pedestrian and face feature recognition also identifies it as a pedestrian, the two recognition results match, and the first face feature is used to enhance or supplement the second face feature. If the point cloud segmentation and tracking result identifies the object as a pedestrian but feature recognition identifies it as a vehicle, the two recognition results do not match.
When the two match, the image containing the face features is enhanced using algorithms such as detail enhancement, so that the face features can be extracted.
When video data from several cameras is used, a face detection algorithm may be applied to detect the face region in each video stream, and a corresponding second face feature is extracted from each face region. The multiple second face features may then be deduplicated or fused using existing algorithms.
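By way of illustration and not limitation, the following Python sketch shows one shape step 102 could take: camera frames are scanned with an off-the-shelf face detector, and detections are kept only when the point cloud stage agrees the tracked object is a pedestrian. The classify_cluster() placeholder stands in for the patent's point cloud segmentation-and-tracking pipeline and is an assumption, as is the choice of OpenCV's Haar cascade as the face detection algorithm.

```python
# Illustrative sketch of step 102: detect face regions in a video frame and
# keep them only when a (hypothetical) point-cloud classifier also labels
# the tracked object as a pedestrian -- the "correction" described above.
import cv2
import numpy as np

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")


def classify_cluster(point_cloud: np.ndarray) -> str:
    """Placeholder for the point cloud segmentation/tracking stage (assumed)."""
    return "pedestrian"  # assumed result, for the example only


def extract_face_regions(frame: np.ndarray, point_cloud: np.ndarray):
    # Only trust camera detections when the lidar pipeline agrees the
    # object is a pedestrian; otherwise the results do not match.
    if classify_cluster(point_cloud) != "pedestrian":
        return []
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    boxes = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return [frame[y:y + h, x:x + w] for (x, y, w, h) in boxes]
```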
Step 103, the control unit calculates a first matching degree between the face features and each face picture in the face picture set in the first database.
The first database is a database in the control unit; it stores face pictures of abnormal personnel, such as missing persons. The first database may, for example, be downloaded from the server, and the server in turn may obtain it from the third-party server.
This is a face comparison process: a first matching degree is computed between the face feature and a number of pictures.
Step 104, the control unit determines that a face picture whose first matching degree is greater than the preset first matching degree threshold is a first face picture.
For example, when the matching degree with a certain picture is greater than the first matching degree threshold, say 85%, that picture is determined to be a first face picture.
It is understood that there may be more than one first face picture at this point.
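The patent does not specify how the matching degree is computed. By way of illustration only, the sketch below assumes the face features are fixed-length embeddings and uses cosine similarity, rescaled to [0, 1], as the matching degree, then applies the threshold of steps 103 and 104; both choices are assumptions.

```python
# Illustrative sketch of steps 103-104: cosine similarity as the matching
# degree (an assumption), followed by the first-threshold selection.
import numpy as np


def matching_degree(feature: np.ndarray, gallery: np.ndarray) -> np.ndarray:
    """Cosine similarity between one feature (d,) and a gallery (M, d), mapped to [0, 1]."""
    feature = feature / np.linalg.norm(feature)
    gallery = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
    return (gallery @ feature + 1.0) / 2.0   # map [-1, 1] to [0, 1]


def first_face_pictures(feature: np.ndarray, gallery: np.ndarray,
                        threshold: float = 0.85) -> np.ndarray:
    scores = matching_degree(feature, gallery)
    return np.flatnonzero(scores > threshold)   # may contain more than one index
```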
Step 105, the control unit sends the face features, the first face picture and the position information to the server.
The server can be a server of the unmanned vehicle; performing this second recognition improves the accuracy of identification.
The server may communicate with the control unit through the fourth-generation mobile communication system (4G), the fifth-generation mobile communication system (5G), Wireless Fidelity (Wi-Fi), Bluetooth, or the like, which is not limited in this application.
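By way of illustration and not limitation, the upload in step 105 might be carried over any of those links as a simple HTTP request. The endpoint URL and payload fields below are hypothetical; the patent does not define a transport protocol or message format.

```python
# Hedged sketch of step 105: one possible upload of the first-match result.
# The URL and JSON field names are assumptions, not part of the patent.
import base64

import numpy as np
import requests


def report_first_match(feature: np.ndarray,
                       first_face_picture_jpeg: bytes,
                       position: tuple) -> None:
    payload = {
        "face_feature": feature.tolist(),
        "first_face_picture": base64.b64encode(first_face_picture_jpeg).decode(),
        "position": {"lat": position[0], "lon": position[1]},
    }
    # Hypothetical server endpoint for the second-stage match.
    resp = requests.post("https://example.com/api/v1/face-match",
                         json=payload, timeout=10)
    resp.raise_for_status()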
Step 106, the server calculates a second matching degree between the face features and each face picture in the face picture set in the second database.
The second database stores a larger number of pictures than the first database.
Step 107, the server determines that the face picture with the second matching degree greater than the preset second matching degree threshold is the second face picture.
Step 108, the server sends the face features, the second face picture and the position information to the third-party server.
The third-party server may be a server of some organization, such as a management organization for missing persons. Thus, while the unmanned vehicle is running, its environment perception data can be put to use: when both matches succeed, the position information recorded when the abnormal data was collected, the face features and the second face picture are sent to the third-party server, which can conveniently use this information for security work. This not only saves urban security costs but also makes use of the data generated by unmanned equipment.
Further, step 108 is preceded by:
when the second matching degree is larger than a preset second matching degree threshold value, generating alarm information;
and sending the alarm information to a third-party server.
Specifically, when the face features, the second face picture and the position information are sent, alarm information can be sent to the third-party server as well, providing an alarm reminder so that rapid positioning can be carried out as soon as the information is received.
Further, step 108 is followed by:
the server sends the second face picture to the control unit;
and the control unit updates the first database according to the second face picture.
In this way, the first database can be updated and its coverage expanded.
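By way of illustration and not limitation, a minimal version of this update could append the returned second face picture to the control unit's local gallery. The on-disk layout (a directory of JPEG files) is an assumption; the patent does not describe how the first database is stored.

```python
# Hedged sketch of the first-database update: append the second face
# picture returned by the server to a local directory of JPEGs (assumed
# storage layout, not specified by the patent).
from pathlib import Path


def update_first_database(second_face_picture_jpeg: bytes,
                          db_dir: Path = Path("first_database")) -> Path:
    db_dir.mkdir(exist_ok=True)
    index = len(list(db_dir.glob("*.jpg")))   # naive sequential naming
    target = db_dir / f"face_{index:06d}.jpg"
    target.write_bytes(second_face_picture_jpeg)
    return target
```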
Further, the method further comprises:
when the second matching degree is not greater than a preset second matching degree threshold value, generating recording information; the recording information includes a recording time.
Therefore, when the first matching succeeds but the second matching fails, only a record is made, and no position information, face features, second face picture or alarm information is sent to the third-party server. The two-stage matching thus reduces the frequency of false alarms and improves the accuracy of the whole mobile security system.
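By way of illustration and not limitation, the server-side decision of steps 106 to 108 together with the alarm and recording embodiments can be sketched as follows, reusing the matching_degree() helper from the earlier sketch. The forward_to_third_party callable and the record format are assumptions.

```python
# Sketch of the server-side flow (steps 106-108) plus the alarm/record
# embodiments. matching_degree() is the helper defined in the earlier
# sketch; forward_to_third_party and the record format are assumptions.
from datetime import datetime, timezone

import numpy as np


def handle_second_match(feature: np.ndarray, gallery: np.ndarray,
                        pictures: list, position: tuple,
                        threshold: float = 0.85,
                        forward_to_third_party=print):
    scores = matching_degree(feature, gallery)          # step 106
    hit_idx = np.flatnonzero(scores > threshold)        # step 107
    if hit_idx.size:                                    # step 108 plus alarm
        forward_to_third_party({
            "alarm": True,
            "face_feature": feature.tolist(),
            "second_face_pictures": [pictures[i] for i in hit_idx],
            "position": position,
        })
        return None
    # No second-stage hit: only keep a timestamped record; nothing is sent
    # to the third-party server, which reduces false alarms.
    return {"recorded_at": datetime.now(timezone.utc).isoformat()}
```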
The data processing method matches the environment perception data generated by unmanned equipment twice; after both matches succeed, it alarms the third-party server and simultaneously sends the position information recorded when the environment perception data was collected, so that the third-party server can conveniently work according to the position information. The data of the unmanned equipment is thus utilized, the utilization rate of the environment perception data is improved, and urban security costs can be saved.
Those of skill would further appreciate that the various illustrative components and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both, and that the components and steps of the various examples have been described above generally in terms of their functionality in order to clearly illustrate this interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied in hardware, a software module executed by a processor, or a combination of the two. A software module may reside in random access memory (RAM), flash memory, read-only memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
The above embodiments are provided to further explain the objects, technical solutions and advantages of the present invention in detail, it should be understood that the above embodiments are merely exemplary embodiments of the present invention and are not intended to limit the scope of the present invention, and any modifications, equivalents, improvements and the like made within the spirit and principle of the present invention should be included in the scope of the present invention.

Claims (5)

1. A data processing method, characterized in that the data processing method comprises:
the control unit acquires environment perception data around the vehicle collected by an acquisition device; the environment perception data includes location information of the vehicle;
the control unit processes the environment perception data and obtains face features according to a processing result;
the control unit calculates a first matching degree of the face features and each face picture in a face picture set in a first database;
the control unit determines that the face picture with the first matching degree larger than a preset first matching degree threshold value is a first face picture;
the control unit sends the face features, the first face picture and the position information to a server;
the server calculates a second matching degree of the face features and each face picture in a face picture set in a second database;
the server determines that the face picture with the second matching degree larger than a preset second matching degree threshold value is a second face picture;
the server sends the face features, the second face picture and the position information to a third-party server;
wherein the environment perception data comprises laser point cloud data and video data; and the control unit processing the environment perception data and obtaining the face features according to the processing result specifically comprises:
segmenting and tracking the laser point cloud data to obtain a point cloud segmentation result;
processing the point cloud segmentation result to obtain a first face feature;
identifying a face region in the video data through a face detection algorithm;
extracting a second face feature from the face region through face feature extraction;
correcting, on a time axis, the second face features with the first face features to obtain the face features;
wherein the correction judges whether the object identified by point cloud segmentation and tracking matches the object identified by feature recognition, and, when the two match, the first face feature is used to enhance or supplement the second face feature.
2. The data processing method of claim 1, wherein before the server sends the facial features, the second facial picture and the location information to a third-party server, the method further comprises:
when the second matching degree is larger than a preset second matching degree threshold value, generating alarm information;
and sending the alarm information to a third-party server.
3. The data processing method of claim 1, further comprising, after the server sends the face features, the second face picture and the position information to the third-party server:
the server sends the second face picture to the control unit;
and the control unit updates the first database according to the second face picture.
4. The data processing method of claim 1, wherein the method further comprises:
when the second matching degree is not larger than a preset second matching degree threshold value, generating recording information; the recording information includes a recording time.
5. The data processing method of claim 1, wherein the control unit reads position information obtained by a global positioning system on the vehicle.
CN201811172598.7A (filed 2018-10-09, priority 2018-10-09) Data processing method, granted as CN109344776B, status Active

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN201811172598.7A (CN109344776B) | 2018-10-09 | 2018-10-09 | Data processing method

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CN201811172598.7A (CN109344776B) | 2018-10-09 | 2018-10-09 | Data processing method

Publications (2)

Publication Number | Publication Date
CN109344776A | 2019-02-15
CN109344776B | 2022-11-11

Family

Family ID: 65308585

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN201811172598.7A (granted as CN109344776B, Active) | Data processing method | 2018-10-09 | 2018-10-09

Country Status (1)

Country | Documents
CN | CN109344776B

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN114140838A * | 2020-08-14 | 2022-03-04 | 华为技术有限公司 | Image management method, device, terminal equipment and system
CN113591544A * | 2021-06-10 | 2021-11-02 | 东风汽车集团股份有限公司 | Method, system and device for tracking user through vehicle and electronic equipment

Citations (4)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN105354710A * | 2015-12-22 | 2016-02-24 | 重庆智韬信息技术中心 | Auxiliary identity authentication method for face identification payment
CN106056075A * | 2016-05-27 | 2016-10-26 | 广东亿迅科技有限公司 | Important person identification and tracking system in community meshing based on unmanned aerial vehicle
CN107247916A * | 2017-04-19 | 2017-10-13 | 广东工业大学 | Three-dimensional face recognition method based on Kinect
CN108345868A * | 2018-03-09 | 2018-07-31 | 广东万峯信息科技有限公司 | Public transport fugitive-pursuit system based on face recognition technology and control method thereof

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
US9946734B2 * | 2015-09-16 | 2018-04-17 | Ekin Teknoloji Sanayi Ve Ticaret Anonim Sirketi | Portable vehicle monitoring system

Also Published As

Publication number | Publication date
CN109344776A | 2019-02-15

Similar Documents

Publication | Title
CN112417967B (en) Obstacle detection method, obstacle detection device, computer device, and storage medium
CN108571974B (en) Vehicle positioning using a camera
CN108345822B (en) Point cloud data processing method and device
CN110045729B (en) Automatic vehicle driving method and device
CN107563419B (en) Train positioning method combining image matching and two-dimensional code
CN109686031B (en) Identification following method based on security
CN111695546B (en) Traffic signal lamp identification method and device for unmanned vehicle
CN110046640B (en) Distributed representation learning for correlating observations from multiple vehicles
US10480949B2 (en) Apparatus for identifying position of own vehicle and method for identifying position of own vehicle
WO2018153211A1 (en) Method and apparatus for obtaining traffic condition information, and computer storage medium
CN110753953A (en) Method and system for object-centric stereo vision in autonomous vehicles via cross-modality verification
CN109682388B (en) Method for determining following path
CN110264495B (en) Target tracking method and device
CN109740461B (en) Object and subsequent processing method
CN108491782A (en) A kind of vehicle identification method based on driving Image Acquisition
KR101326943B1 (en) Overtaking vehicle warning system and overtaking vehicle warning method
CN110942038A (en) Traffic scene recognition method, device, medium and electronic equipment based on vision
CN111213153A (en) Target object motion state detection method, device and storage medium
US11837084B2 (en) Traffic flow estimation apparatus, traffic flow estimation method, traffic flow estimation program, and storage medium storing traffic flow estimation program
CN111881322B (en) Target searching method and device, electronic equipment and storage medium
US11508118B2 (en) Provisioning real-time three-dimensional maps for autonomous vehicles
CN112541416A (en) Cross-radar obstacle tracking method and device, electronic equipment and storage medium
CN109344776B (en) Data processing method
CN112434566A (en) Passenger flow statistical method and device, electronic equipment and storage medium
CN114333339B (en) Deep neural network functional module de-duplication method

Legal Events

Code | Event
PB01 | Publication
SE01 | Entry into force of request for substantive examination
CB02 | Change of applicant information
       Address after: B4-006, maker Plaza, 338 East Street, Huilongguan town, Changping District, Beijing 100096
       Applicant after: Beijing Idriverplus Technology Co.,Ltd.
       Address before: B4-006, maker Plaza, 338 East Street, Huilongguan town, Changping District, Beijing 100096
       Applicant before: Beijing Idriverplus Technology Co.,Ltd.
GR01 | Patent grant