CN111199184A - Portable three-dimensional imaging equipment for intelligent community patrol and use method - Google Patents

Portable three-dimensional imaging equipment for intelligent community patrol and use method

Info

Publication number
CN111199184A
Authority
CN
China
Prior art keywords
patrol
community
information
scene
scene object
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911144353.8A
Other languages
Chinese (zh)
Inventor
沈玺 (Shen Xi)
罗洪燕 (Luo Hongyan)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chongqing Terminus Technology Co Ltd
Original Assignee
Chongqing Terminus Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chongqing Terminus Technology Co Ltd filed Critical Chongqing Terminus Technology Co Ltd
Priority to CN201911144353.8A priority Critical patent/CN111199184A/en
Publication of CN111199184A publication Critical patent/CN111199184A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/52 Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/13 Edge detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/22 Image preprocessing by selection of a specific region containing or referencing a pattern; Locating or processing of specific regions to guide the detection or recognition
    • G06V10/225 Image preprocessing by selection of a specific region containing or referencing a pattern; Locating or processing of specific regions to guide the detection or recognition based on a marking or identifier characterising the area
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/40 Scenes; Scene-specific elements in video content
    • G06V20/41 Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00 Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/10 Character recognition
    • G06V30/14 Image acquisition
    • G06V30/148 Segmentation of character regions
    • G06V30/153 Segmentation of character regions using recognition of characters or words
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172 Classification, e.g. identification
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10016 Video; Image sequence

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Software Systems (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Traffic Control Systems (AREA)

Abstract

The embodiment of the application provides a portable stereoscopic imaging device for smart community patrol, comprising: AR stereoscopic imaging glasses, a central processing unit, a wireless communication unit and a community patrol cloud server. The AR stereoscopic imaging glasses are connected with the central processing unit and the wireless communication unit; the central processing unit is connected with the AR stereoscopic imaging glasses; the wireless communication unit is connected with the community patrol cloud server; and the community patrol cloud server is connected with the wireless communication unit. The device uses AR stereoscopic imaging to superimpose and display patrol information around the actual scene objects within the security guard's field of view, so that relevant information in the community can be known accurately and behaviors such as occupying another person's parking space can be managed, improving management and patrol efficiency.

Description

Portable three-dimensional imaging equipment for intelligent community patrol and use method
Technical Field
The application relates to the technical field of community patrol, and in particular to a portable stereoscopic imaging device for smart community patrol and a method of using it.
Background
In community management, security personnel patrol the entrances, exits and interior spaces of the community to verify the identities of people entering and leaving, spot suspicious persons in time, assist the elderly and children, and correct irregular behaviors such as disorderly parking, occupation of private parking spaces, and blocking of roads or fire-fighting access, thereby maintaining a safe environment and civilized order inside the community.
However, current community patrol relies on the manual inspection, observation and inquiry of security guards, and on their personal memory and working experience; as a result the error rate is high, many omissions occur, efficiency is low, and considerable time and manpower are consumed.
Therefore, how to patrol the areas within a community efficiently is a problem that urgently needs to be solved by those skilled in the art.
Disclosure of Invention
In view of this, the purpose of the present application is to provide a portable stereoscopic imaging device for smart community patrol, so as to solve the problems of low efficiency, high cost and high error rate of manual community patrol in the prior art.
In accordance with the above object, in a first aspect of the present application, there is provided a portable stereoscopic imaging device for smart community patrol, comprising: AR stereoscopic imaging glasses, a central processing unit, a wireless communication unit and a community patrol cloud server;
the AR stereo imaging glasses are connected with the central processing unit and the wireless communication unit and are used for shooting video pictures in a visual area, sending the video pictures to the central processing unit and displaying patrol record data sent by the wireless communication unit;
the central processing unit is connected with the AR stereo imaging glasses and is used for receiving video pictures of the AR stereo imaging glasses and extracting relevant information from the video pictures;
the wireless communication unit is connected with the community patrol cloud server and is used for uploading relevant information of the central processing unit to the community patrol cloud server and sending patrol record data received from the community patrol cloud server to the AR three-dimensional imaging glasses for displaying;
the community patrol cloud server is connected with the wireless communication unit and used for inquiring community patrol records stored by the cloud server according to the relevant information of the central processing unit, acquiring relevant patrol record data and sending the patrol record data to the wireless communication unit.
In some embodiments, the AR stereoscopic imaging glasses (serving as a mobile internet-of-things node device) comprise:
a scene camera and an AR display lens;
the scene camera is connected with the central processing unit and the AR display lens and is used for shooting video pictures in a visual area of the user and sending the video pictures to the central processing unit and the AR display lens;
the AR display lens is connected with the wireless communication unit and used for receiving and displaying the patrol record data sent by the wireless communication unit.
In some embodiments, the AR display lens comprises:
the patrol record data are superimposed and displayed, in the form of floating frames, around the scene objects in the actual scene picture.
In some embodiments, the central processor comprises:
the system comprises a scene object extraction module, a scene object classification module, a face recognition module, a number plate recognition module and an identification recognition module;
the scene object extraction module is connected with the scene camera and used for extracting independent scene objects from the video pictures sent by the scene camera;
the scene object classification module is connected with the scene object extraction module and is used for classifying the independent scene objects into a person type, a vehicle type, a parking space digital identification type and a traffic fire protection identification type;
the face recognition module is connected with the scene object classification module and is used for extracting face region characteristic information in the scene object of the character type;
the license plate identification module is connected with the scene object classification module and is used for positioning and extracting vehicle license plate information in the scene object of the vehicle type;
the identification recognition module is connected with the scene object classification module and is used for positioning and extracting parking space digital identification information in the scene object of the parking space digital identification type and positioning and extracting fire protection identification characters and pattern information in the scene object of the traffic fire protection identification type.
In some embodiments, the scene object extraction module comprises:
independent scene objects are extracted from the video picture by means of closed edge detection.
In some embodiments, the community patrol cloud server comprises:
a face registration database, a community vehicle number plate database and a community parking space database;
the face registration database is used for inquiring the identity record data of the person according to the face region characteristic information in the face recognition module;
the community vehicle number plate database is used for inquiring the relevant information of the vehicle according to the vehicle number plate information in the number plate identification module;
and the community parking space database is used for inquiring the registration information corresponding to the parking space according to the parking space digital identification information in the identification recognition module.
In accordance with the above object, in a second aspect of the present application, there is also provided a method for using a portable stereoscopic imaging device for smart community patrol, comprising:
the method comprises the steps that a scene camera shoots a video picture in a visual area and sends the video picture to a central processing unit;
the central processing unit extracts the relevant information of the scene object in the video picture;
the wireless communication unit sends the relevant information to a community patrol cloud server;
the community patrol cloud server inquires corresponding patrol record data according to the relevant information;
the wireless communication module sends the patrol record data to the AR stereo imaging glasses for display;
and performing corresponding patrol according to the display information of the AR stereo imaging glasses.
In some embodiments, the central processor extracts information about scene objects in the video pictures, including:
extracting independent scene objects in the video pictures through a scene object extraction module;
classifying the independent scene objects into a person type, a vehicle type, a parking space digital identification type and a traffic fire protection identification type through the scene object classification module;
extracting the characteristic information of a face region of a scene object of a character type through a face recognition module;
the scene object of the vehicle type is positioned and the vehicle license plate information is extracted through a license plate identification module;
positioning the scene object of the parking place digital identification type through an identification module and extracting parking place digital identification information;
the scene object of the traffic fire-fighting identification type is located by the identification recognition module, and the character and pattern information of the traffic or fire-fighting sign is extracted.
In some embodiments, the querying, by the community patrol cloud server, the corresponding patrol record data according to the relevant information includes:
inquiring the face registration database according to the face region characteristic information to obtain the identity record data of the corresponding person;
inquiring a community vehicle number plate database according to the vehicle number plate information to obtain relevant information of corresponding vehicles;
and inquiring the community parking space database according to the parking space digital identification information to acquire the corresponding parking space registration information, comparing the parking space registration information with the number plate information of the vehicle in that parking space, and judging whether another person's parking space is occupied.
In some embodiments, the parking space registration information includes:
the property of the parking space, the owner of the parking space and the vehicle number plate corresponding to the owner.
The embodiment of the application provides a portable stereoscopic imaging device for smart community patrol, comprising: AR stereoscopic imaging glasses, a central processing unit, a wireless communication unit and a community patrol cloud server. The AR stereoscopic imaging glasses are connected with the central processing unit and the wireless communication unit, and are used for shooting video pictures in the visual area, sending the video pictures to the central processing unit, and displaying the patrol record data sent by the wireless communication unit. The central processing unit is connected with the AR stereoscopic imaging glasses and is used for receiving the video pictures of the AR stereoscopic imaging glasses and extracting relevant information from them. The wireless communication unit is connected with the community patrol cloud server and is used for uploading the relevant information from the central processing unit to the community patrol cloud server and sending the patrol record data received from the community patrol cloud server to the AR stereoscopic imaging glasses for display. The community patrol cloud server is connected with the wireless communication unit and is used for querying the community patrol records stored by the cloud server according to the relevant information from the central processing unit, acquiring the relevant patrol record data, and sending the patrol record data to the wireless communication unit. The device and its method of use collect video pictures in the patrolling security guard's visual area, extract relevant information, and query the community patrol records stored on the cloud server according to that information, thereby helping the security guard obtain the community patrol records related to the scene objects in the visual range and complete the patrol of a designated area of the community with accurate data. By using AR stereoscopic imaging to superimpose the patrol information around the actual scene objects within the security guard's field of view, relevant information in the community can be known accurately, behaviors such as occupying another person's parking space can be managed, and management and patrol efficiency are improved.
Drawings
Other features, objects and advantages of the present application will become more apparent upon reading of the following detailed description of non-limiting embodiments thereof, made with reference to the accompanying drawings in which:
fig. 1 is a schematic structural diagram of a portable stereoscopic imaging device for smart community patrol according to an embodiment of the present application;
FIG. 2 is a diagram illustrating a central processing unit extracting scene objects in a video frame according to an embodiment of the present disclosure;
FIG. 3 is a schematic diagram of an AR display lens according to an embodiment of the present application;
fig. 4 is a flowchart of a method for using a portable stereo imaging device for smart community patrol according to an embodiment of the present application;
FIG. 5 is a flowchart of step S402 of an embodiment of the present application;
fig. 6 is a flowchart of step S404 in the embodiment of the present application.
Detailed Description
The present application will be described in further detail with reference to the following drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the relevant invention and not restrictive of the invention. It should be noted that, for convenience of description, only the portions related to the related invention are shown in the drawings.
It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict. The present application will be described in detail below with reference to the embodiments with reference to the attached drawings.
Specifically, as shown in fig. 1, the structure of a portable stereoscopic imaging device for smart community patrol according to an embodiment of the present application is schematically illustrated. A portable stereoscopic imaging device for smart community patrol of this embodiment can include:
the system comprises AR three-dimensional imaging glasses 1, a central processing unit 2, a wireless communication unit 3 and a community patrol cloud server 4;
the AR stereo imaging glasses 1 are connected with the central processing unit 2 and the wireless communication unit 3, and are used for shooting video pictures in a visual area, sending the video pictures to the central processing unit 2, and displaying patrol record data sent by the wireless communication unit 3; the AR stereo imaging glasses 1 are worn by security personnel;
the central processing unit 2 is connected with the AR stereo imaging glasses 1 and is used for receiving video pictures of the AR stereo imaging glasses 1 and extracting relevant information from the video pictures;
the wireless communication unit 3 is connected with the community patrol cloud server 4, and is used for uploading relevant information of the central processing unit 2 to the community patrol cloud server 4, and sending patrol record data received from the community patrol cloud server 4 to the AR stereoscopic imaging glasses 1 for display;
It should be noted that the central processing unit 2 and the wireless communication unit 3 may be integrated into the AR stereoscopic imaging glasses 1, or may be installed in a separate device.
The community patrol cloud server 4 is connected with the wireless communication unit 3 (the wireless communication unit 3 and the community patrol cloud server 4 are in two-way wireless communication), and is used for inquiring community patrol records stored by the cloud server according to relevant information of the central processing unit 2, acquiring relevant patrol record data and sending the patrol record data to the wireless communication unit 3.
In this embodiment, video pictures in the patrolling security guard's visual area are collected and relevant information is extracted, and the community patrol records stored on the cloud server are queried according to that information. This helps the security guard obtain the community patrol records related to the scene objects in the visual range and complete the patrol of the designated community area with accurate data. The AR stereoscopic imaging technology superimposes the patrol information around the actual scene objects within the security guard's field of view, so relevant information in the community can be known accurately, behaviors such as occupying parking spaces can be managed, and management and patrol efficiency are improved.
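To make the data flow concrete, the following is a minimal Python sketch of one capture-extract-query-display cycle between the four components described above. All class, method and field names (SceneObjectInfo, PatrolRecord, capture_frame, query and so on) are illustrative assumptions of this sketch, not interfaces defined by the application.

from dataclasses import dataclass

@dataclass
class SceneObjectInfo:
    kind: str     # "person", "vehicle", "parking_space_sign" or "traffic_fire_sign"
    value: str    # face feature key, number plate, parking space number or sign text
    bbox: tuple   # (x, y, w, h) of the object in the video picture

@dataclass
class PatrolRecord:
    obj: SceneObjectInfo
    text: str     # patrol record text to show in a floating frame beside the object

def patrol_cycle(glasses, cpu, radio, cloud):
    """One cycle: glasses capture -> CPU extraction -> cloud query -> AR display."""
    frame = glasses.capture_frame()                # scene camera shoots the visual area
    objects = cpu.extract_scene_objects(frame)     # central processing unit extracts relevant information
    records = [PatrolRecord(o, cloud.query(o)) for o in objects]  # cloud server lookup via the wireless unit
    radio.send_to_glasses(records)                 # wireless unit returns the patrol record data
    glasses.display_overlays(frame, records)       # floating frames around the scene objects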
In one embodiment, the AR stereoscopic imaging glasses 1 include:
a scene camera 5, an AR display lens 6;
the scene camera 5 is connected with the central processing unit 2 and is used for shooting video pictures in the visual area of the user and sending the video pictures to the central processing unit 2;
the AR display lens 6 is connected to the wireless communication unit 3, and is configured to receive and display the patrol record data sent by the wireless communication unit 3.
Specifically, the patrol record data are superimposed and displayed, in the form of floating frames, around the scene objects in the actual scene picture.
In this embodiment, the patrol information is superimposed and displayed around the actual scene objects within the security guard's field of view, so relevant information in the community can be known accurately, behaviors such as occupying parking spaces can be managed, and management and patrol efficiency are improved.
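As a rough illustration of how such a floating-frame overlay could be rendered, the sketch below draws a labelled box beside a detected scene object using OpenCV; the drawing style and the function name draw_floating_frame are assumptions of this example, not requirements of the application.

import cv2

def draw_floating_frame(frame, bbox, record_text):
    """Outline the scene object and place the patrol record text beside it."""
    x, y, w, h = bbox
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)   # frame the object
    origin = (x + w + 10, max(y + 15, 15))                         # text anchored to the right of the box
    cv2.putText(frame, record_text, origin, cv2.FONT_HERSHEY_SIMPLEX,
                0.5, (0, 255, 0), 1, cv2.LINE_AA)
    return frame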
In one embodiment, as shown in fig. 2, the central processing unit 2 includes:
a scene object extraction module 7, a scene object classification module 8, a face recognition module 9, a number plate recognition module 10 and an identification recognition module 11;
The scene object extraction module 7 is connected to the scene camera and is configured to extract independent scene objects from the video picture sent by the scene camera; each square area in fig. 2 is an extracted independent object.
Each person (a, b), vehicle (c, d), parking space number mark (e, f) and traffic or fire-protection sign (g) appearing in the video picture can be treated as an independent scene object;
specifically, the independent scene object is extracted from the video picture by means of closed edge detection, and the specific algorithm may adopt the following method:
① Edge detection using the Sobel operator; the specific steps are as follows:
A. Move each of the two directional templates across the image pixel by pixel, so that the central pixel of the template coincides with a pixel position in the image;
B. multiply the coefficients in the template by the corresponding image pixel values;
C. add all the products;
D. assign the larger of the two convolution results to the pixel at the centre position of the template as the new gray value of that pixel;
E. take an appropriate threshold TH; if the new gray value of a pixel is greater than or equal to TH, that pixel is judged to be an edge point.
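A compact Python sketch of steps A-E (two directional templates, per-pixel maximum of the two responses, then thresholding) is given below; it uses NumPy/SciPy convolution for brevity, and the threshold TH remains a free parameter exactly as in the text.

import numpy as np
from scipy.ndimage import convolve

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)  # horizontal template
SOBEL_Y = SOBEL_X.T                                                    # vertical template

def sobel_edges(gray, th):
    gx = convolve(gray.astype(float), SOBEL_X)      # steps A-C for the first template
    gy = convolve(gray.astype(float), SOBEL_Y)      # steps A-C for the second template
    new_gray = np.maximum(np.abs(gx), np.abs(gy))   # step D: maximum of the two convolutions
    return new_gray >= th                           # step E: edge points where new value >= TH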
② Edge detection using the LOG (Laplacian of Gaussian) operator (described as an optimal filter for edge detection in terms of the image signal-to-noise ratio); the method comprises the following steps:
A. The image is first low-pass smoothed with a Gaussian function:
G(x, y, σ) = 1/(2πσ²) · exp(−(x² + y²)/(2σ²))
where σ is the standard deviation of the Gaussian filter and reflects the degree of smoothing; low-pass filtering the image f(x, y) yields f(x, y) * G(x, y, σ);
B. High-pass filtering is then performed with the Laplacian operator:
∇² = ∂²/∂x² + ∂²/∂y²
where ∇²G(x, y, σ) denotes the LOG operator; carrying out the differentiation gives:
∇²G(x, y, σ) = ((x² + y² − 2σ²)/(2πσ⁶)) · exp(−(x² + y²)/(2σ²))
In the above formula, x, y = −I, …, −1, 0, 1, …, I with I ∈ [1, ∞), and σ represents the scale-space constant of the Gaussian distribution, which also determines the size of the LOG operator window.
C. Edges of the image are detected using second derivative zero crossings.
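The following sketch mirrors steps A-C of the LOG method: Gaussian smoothing and Laplacian filtering are combined in SciPy's gaussian_laplace, and edges are taken at zero crossings of the result detected by a simple sign-change test; the choice of σ is left as a parameter.

import numpy as np
from scipy.ndimage import gaussian_laplace

def log_edges(gray, sigma=2.0):
    response = gaussian_laplace(gray.astype(float), sigma)   # steps A-B: Laplacian of the Gaussian-smoothed image
    edges = np.zeros_like(response, dtype=bool)
    # step C: zero crossings, i.e. sign changes between horizontal or vertical neighbours
    edges[:, :-1] |= np.signbit(response[:, :-1]) != np.signbit(response[:, 1:])
    edges[:-1, :] |= np.signbit(response[:-1, :]) != np.signbit(response[1:, :])
    return edges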
③ Edge detection using the Canny operator; the specific steps are as follows:
A. filtering the image f (x, y) with a gaussian function to obtain a smoothed data array, as shown in the following equation:
S(x,y)=f(x,y)*G(x,y,σ)
wherein σ represents a dispersion parameter of the gaussian formula, reflecting the degree of smoothing;
B. for gradient calculation, the gradient of the smoothed data array S (x, y) can be approximated by a 2 × 2 first order finite difference approximation to calculate two arrays P (x, y) and Q (x, y) of x and y partial derivatives, as shown below:
P(x,y)≈(S(x,y+1)-S(x,y)+S(x+1,y+1)-S(x+1,y))/2
Q(x,y)≈(S(x,y)-S(x+1,y)+S(x,y+1)-S(x+1,y+1))/2
C. At each point, the x and y partial-derivative estimates are converted to the gradient magnitude and direction, given respectively by:
M(x, y) = sqrt(P(x, y)² + Q(x, y)²)
θ(x,y)=arctan(Q(x,y)/P(x,y))
D. Non-maximum suppression is then carried out: for each pixel of the gradient magnitude image M(x, y), the gradient magnitudes of the two neighbouring pixels along the gradient direction are compared with that of the current pixel; if the magnitude of the current pixel is not less than both, it may be an edge point, otherwise it is a non-edge pixel. This thins the image edges to a width of one pixel, and the image NMS(x, y) is obtained from the gradient magnitude image M(x, y) by non-maximum suppression;
E. Double-threshold detection and edge linking are performed: edges are extracted using a high threshold Th and a low threshold Tl. From each pixel of NMS(x, y), strong edge points and weak edge points are obtained with the high and low thresholds respectively. The edges are tracked within the strong edge points; when an edge reaches an end point, edge points are searched for at the corresponding positions among the weak edge points to bridge the break, and the search and tracking continue until the breaks in the edges of the high-threshold (strong edge) image are connected.
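Since the non-maximum suppression and double-threshold linking of steps D-E are implemented in standard libraries, a minimal sketch can delegate them to OpenCV's Canny; the smoothing scale σ and the two thresholds Th and Tl stay configurable, as in the text.

import cv2

def canny_edges(gray, sigma=1.4, t_low=50, t_high=150):
    smoothed = cv2.GaussianBlur(gray, (5, 5), sigma)   # step A: Gaussian smoothing of f(x, y)
    return cv2.Canny(smoothed, t_low, t_high)          # steps B-E: gradient, NMS, double-threshold edge linking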
The scene object classification module 8 is connected with the scene object extraction module 7 and is used for classifying the independent scene objects into a human object type, a vehicle type, a parking space digital identification type and a traffic fire protection identification type;
the face recognition module 9 is connected to the scene object classification module 8, and is configured to extract face region feature information in the scene object of the character type; a and b in fig. 2 represent face region feature information;
the license plate identification module 10 is connected with the scene object classification module 8 and is used for positioning and extracting vehicle license plate information in the scene object of the vehicle type; c, d in fig. 2 represent vehicle number plate information;
the identification recognition module 11 is connected with the scene object classification module 8 and is used for locating and extracting parking space digital identification information in scene objects of the parking space digital identification type, and for locating and extracting fire protection identification characters and pattern information in scene objects of the traffic fire protection identification type; e and f in fig. 2 represent parking space digital identification information, and g represents a speed-limit-40 sign.
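To illustrate how the classification result could steer each independent scene object to the matching recognition module, here is a small routing sketch; the classifier and the three recognizer objects, along with their method names, are assumptions of this example rather than components specified above.

def process_scene_object(obj, classifier, face_module, plate_module, sign_module):
    """Route one extracted scene object to the recognition module for its type."""
    kind = classifier.classify(obj)   # person / vehicle / parking_space_sign / traffic_fire_sign
    if kind == "person":
        return kind, face_module.extract_face_features(obj)        # face region feature information
    if kind == "vehicle":
        return kind, plate_module.locate_and_read_plate(obj)       # vehicle number plate information
    if kind == "parking_space_sign":
        return kind, sign_module.read_space_number(obj)            # parking space digital identification
    if kind == "traffic_fire_sign":
        return kind, sign_module.read_text_and_pattern(obj)        # e.g. a speed-limit-40 sign
    return kind, None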
In one embodiment, the community patrol cloud server 4 includes:
a face registration database 12, a community vehicle number plate database 13 and a community parking space database 14;
the face registration database 12 is used for querying the identity record data of the person according to the face region feature information in the face recognition module 9.
For example, if a person in the video picture of the visual area is an owner, the owner's name, address, telephone number and special attributes (for example, that the owner is an elderly person with dementia) are obtained; if the person is a non-owner, it is determined whether the person has corresponding identity record data, such as a record of posting small advertisements in the community, or even of being a wanted person. As shown in fig. 3, the label h displayed on the AR display lens 6 indicates Zhang San, an owner, Building No. 5, and the label i indicates Li Si, who has a record of posting small advertisements in the community.
The community vehicle number plate database 13 is used for inquiring the relevant information of the vehicle according to the vehicle number plate information in the number plate identification module 10.
For example, the name, address and telephone number of the owner to whom the vehicle belongs are obtained, or it is determined that the vehicle is not registered by a community owner; information such as the vehicle's record of illegal parking in the community is also obtained. As shown in fig. 3, the label j displayed by the AR display lens 6 indicates the owner He Qi, registered vehicle 777, telephone 186××××××××, and the label k indicates Zhou Ba, non-owner, two parking violations.
The community parking space database 14 is used for inquiring the registration information corresponding to the parking space according to the parking space digital identification information in the identification module 11.
For example, the nature of the parking space (fixed or temporary), its owner and the owner's registered vehicle number plate are obtained; by comparing these with the number plate of the vehicle actually parked there, it can be judged whether another person's fixed parking space is being occupied. As shown in fig. 3, the registration information found for parking space 55 shows the owner Wang Wu and the registered plate number 99999; comparing the registered plate 99999 with the plate 80881 of the vehicle in the space gives the result that the space is occupied, and the label m displayed by the AR display lens 6 shows parking space 55, plate number 80881, space occupied.
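The occupancy check described above reduces to a lookup followed by a plate comparison; the sketch below shows that logic against an assumed dict-like parking space database, using the example values from fig. 3 (space 55, registered plate 99999, observed plate 80881).

def check_parking_space(space_db, space_number, observed_plate):
    """Return a patrol note for the given parking space and the plate seen in it."""
    reg = space_db.get(space_number)   # e.g. {"nature": "fixed", "owner": "Wang Wu", "plate": "99999"}
    if reg is None:
        return "space %s: no registration record" % space_number
    if reg["nature"] == "fixed" and observed_plate != reg["plate"]:
        return "space %s occupied: plate %s differs from registered %s" % (
            space_number, observed_plate, reg["plate"])
    return "space %s: ok" % space_number

# Example matching fig. 3: space 55 registered to plate 99999, occupied by plate 80881.
print(check_parking_space({"55": {"nature": "fixed", "owner": "Wang Wu", "plate": "99999"}},
                          "55", "80881"))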
In this embodiment, the relevant information of persons, vehicles and parking spaces is queried and displayed on the AR stereoscopic imaging glasses, so every independent object can be clearly understood and patrol efficiency is improved; since the relevant data are obtained by comparison against the databases, the accuracy is higher, which benefits the security personnel's management of the community.
Fig. 4 is a flowchart illustrating a method for using a portable stereoscopic imaging device for smart community patrol according to an embodiment of the present application. The use method of the portable three-dimensional imaging device for intelligent community patrol of the embodiment can comprise the following steps:
S401: the scene camera shoots a video picture in the visual area and sends the video picture to the central processing unit; the AR stereoscopic imaging glasses are worn by security personnel;
S402: the central processing unit extracts the relevant information of the scene objects in the video picture;
S403: the wireless communication unit sends the relevant information to the community patrol cloud server;
S404: the community patrol cloud server queries the corresponding patrol record data according to the relevant information;
S405: the wireless communication unit sends the patrol record data to the AR stereoscopic imaging glasses for display; the patrol record data are displayed as floating frames around the scene objects in the video picture;
S406: the corresponding patrol is performed according to the information displayed by the AR stereoscopic imaging glasses.
In one embodiment, as shown in fig. 5, step S402 includes:
S4021, extracting the independent scene objects in the video picture through the scene object extraction module; specifically, the independent scene objects are extracted from the video picture by means of closed edge detection;
S4022, classifying the independent scene objects into a person type, a vehicle type, a parking space digital identification type and a traffic fire protection identification type through the scene object classification module;
S4023, extracting face region feature information from scene objects of the person type through the face recognition module;
S4024, locating scene objects of the vehicle type through the license plate identification module and extracting the vehicle number plate information;
S4025, locating scene objects of the parking space digital identification type through the identification recognition module and extracting the parking space digital identification information;
S4026, locating scene objects of the traffic fire protection identification type through the identification recognition module and extracting the character and pattern information of the traffic or fire protection sign.
In one embodiment, as shown in fig. 6, step S404 includes:
S4041, inquiring the face registration database according to the face region feature information to obtain the identity record data of the corresponding person.
For example, if the person in the video picture of the visual area is an owner, the owner's name, address, telephone number and special attributes (for example, that the owner is an elderly person with dementia) are obtained; if the person is a non-owner, it is determined whether the person has corresponding identity record data, such as a record of posting small advertisements in the community, or even of being a wanted person.
S4042, inquiring a community vehicle number plate database according to the vehicle number plate information to obtain relevant information of corresponding vehicles.
For example, the name, address and telephone number of the owner to whom the vehicle belongs are obtained, or it is determined that the vehicle is not registered by a community owner; information such as the vehicle's record of illegal parking in the community is also obtained.
S4043, inquiring the community parking space database according to the parking space digital identification information, acquiring the corresponding parking space registration information, comparing the parking space registration information with the number plate of the vehicle in that parking space, and judging whether another person's parking space is occupied.
For example, the nature of the parking space (fixed or temporary), its owner and the owner's registered vehicle number plate are obtained; by comparing these with the number plate information of the vehicle actually parked there, it can be judged whether another person's fixed parking space is being occupied.
The embodiment of the application provides a method of using a portable stereoscopic imaging device for smart community patrol: when security personnel wear the AR display lens while patrolling, the corresponding patrol record data are displayed immediately for every person, vehicle and parking space appearing within their field of view and can be read at a glance, which greatly facilitates the patrol work.
The above description is only a preferred embodiment of the application and is illustrative of the principles of the technology employed. It will be appreciated by those skilled in the art that the scope of the invention herein disclosed is not limited to the particular combination of features described above, but also encompasses other arrangements formed by any combination of the above features or their equivalents without departing from the spirit of the invention. For example, the above features may be replaced with (but not limited to) features having similar functions disclosed in the present application.

Claims (10)

1. A portable stereoscopic imaging device for smart community patrol, comprising:
the system comprises AR three-dimensional imaging glasses, a central processing unit, a wireless communication unit and a community patrol cloud server;
the AR stereo imaging glasses are connected with the central processing unit and the wireless communication unit and are used for shooting video pictures in a visual area, sending the video pictures to the central processing unit and displaying patrol record data sent by the wireless communication unit;
the central processing unit is connected with the AR stereo imaging glasses and is used for receiving video pictures of the AR stereo imaging glasses and extracting relevant information from the video pictures;
the wireless communication unit is connected with the community patrol cloud server and is used for uploading relevant information of the central processing unit to the community patrol cloud server and sending patrol record data received from the community patrol cloud server to the AR three-dimensional imaging glasses for displaying;
the community patrol cloud server is connected with the wireless communication unit and used for inquiring community patrol records stored by the cloud server according to the relevant information of the central processing unit, acquiring relevant patrol record data and sending the patrol record data to the wireless communication unit.
2. The portable stereoscopic imaging device for intelligent community patrol as recited in claim 1, wherein said AR stereoscopic imaging glasses comprise:
a scene camera, an AR display lens;
the scene camera is connected with the central processing unit and the AR display lens and is used for shooting video pictures in a visual area of a user and sending the video pictures to the central processing unit and the AR display lens;
the AR display lens is connected with the wireless communication unit and used for receiving and displaying the patrol record data sent by the wireless communication unit.
3. The portable stereoscopic imaging device for intelligent community patrol as recited in claim 2, wherein the AR display lens comprises:
and the patrol record data is displayed around the scene object in the video picture in a floating frame mode in an overlapping mode.
4. The portable stereoscopic imaging device for intelligent community patrol as recited in claim 1, wherein the central processor comprises:
the system comprises a scene object extraction module, a scene object classification module, a face recognition module, a number plate recognition module and an identification recognition module;
the scene object extraction module is connected with the scene camera and used for extracting independent scene objects from the video pictures sent by the scene camera;
the scene object classification module is connected with the scene object extraction module and is used for classifying the independent scene objects into a person type, a vehicle type, a parking space digital identification type and a traffic fire protection identification type;
the face recognition module is connected with the scene object classification module and is used for extracting face region characteristic information in the scene object of the character type;
the license plate identification module is connected with the scene object classification module and is used for positioning and extracting vehicle license plate information in the scene object of the vehicle type;
the identification recognition module is connected with the scene object classification module and is used for positioning and extracting parking space digital identification information in the scene object of the parking space digital identification type and positioning and extracting fire protection identification characters and pattern information in the scene object of the traffic fire protection identification type.
5. The portable stereoscopic imaging device for intelligent community patrol as claimed in claim 4, wherein the scene object extraction module comprises:
and extracting independent scene objects from the video picture by means of closed edge detection.
6. The portable stereoscopic imaging device for smart community patrol as recited in claim 1, wherein the community patrol cloud server comprises:
a face registration database, a community vehicle number plate database and a community parking space database;
the face registration database is used for inquiring the identity record data of the person according to the face region characteristic information in the face recognition module;
the community vehicle number plate database is used for inquiring the relevant information of the vehicle according to the vehicle number plate information in the number plate identification module;
and the community parking space database is used for inquiring the registration information corresponding to the parking space according to the parking space digital identification information in the identification recognition module.
7. A use method of a portable stereoscopic imaging device for smart community patrol is characterized by comprising the following steps:
the method comprises the steps that a scene camera shoots a video picture in a visual area and sends the video picture to a central processing unit;
the central processing unit extracts the relevant information of the scene object in the video picture;
the wireless communication unit sends the relevant information to a community patrol cloud server;
the community patrol cloud server inquires corresponding patrol record data according to the relevant information;
the wireless communication module sends the patrol record data to the AR stereo imaging glasses for display;
and performing corresponding patrol according to the display information of the AR stereo imaging glasses.
8. The method as claimed in claim 7, wherein said cpu extracts information related to scene objects in said video frames, and comprises:
extracting independent scene objects in the video pictures through a scene object extraction module;
classifying the independent scene objects into a person type, a vehicle type, a parking space digital identification type and a traffic fire protection identification type through a scene object classification module;
extracting the characteristic information of a face region of a scene object of a character type through a face recognition module;
the scene object of the vehicle type is positioned and the vehicle license plate information is extracted through a license plate identification module;
positioning the scene object of the parking place digital identification type through an identification module and extracting parking place digital identification information;
the scene object of the traffic fire-fighting identification type is located by the identification recognition module, and the character and pattern information of the traffic or fire-fighting sign is extracted.
9. The use method of the portable stereoscopic imaging device for smart community patrol as claimed in claim 7, wherein the querying the corresponding patrol record data by the community patrol cloud server according to the relevant information comprises:
inquiring a face registration database according to the face region characteristic information to obtain the identity record data of the corresponding person;
inquiring a community vehicle number plate database according to the vehicle number plate information to obtain relevant information of corresponding vehicles;
and inquiring a community parking space database according to the parking space digital identification information to acquire corresponding parking space registration information, comparing the parking space registration information with the number plate information of the vehicle in that parking space, and judging whether another person's parking space is occupied.
10. The method of claim 9, wherein the parking space registration information comprises:
the property of the parking space, the owner of the parking space and the vehicle number plate corresponding to the owner.
CN201911144353.8A 2019-11-20 2019-11-20 Portable three-dimensional imaging equipment for intelligent community patrol and use method Pending CN111199184A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911144353.8A CN111199184A (en) 2019-11-20 2019-11-20 Portable three-dimensional imaging equipment for intelligent community patrol and use method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911144353.8A CN111199184A (en) 2019-11-20 2019-11-20 Portable three-dimensional imaging equipment for intelligent community patrol and use method

Publications (1)

Publication Number Publication Date
CN111199184A true CN111199184A (en) 2020-05-26

Family

ID=70746660

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911144353.8A Pending CN111199184A (en) 2019-11-20 2019-11-20 Portable three-dimensional imaging equipment for intelligent community patrol and use method

Country Status (1)

Country Link
CN (1) CN111199184A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112770087A (en) * 2020-12-25 2021-05-07 山东爱城市网信息技术有限公司 AR technology community 3D large-screen interaction method
CN113823112A (en) * 2021-07-31 2021-12-21 浙江慧享信息科技有限公司 Park parking space reservation auxiliary system and auxiliary method based on 3D projection

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105427504A (en) * 2015-12-24 2016-03-23 重庆甲虫网络科技有限公司 Wireless intelligent augmented reality firefighting monitoring system
CN106651715A (en) * 2016-09-22 2017-05-10 山东华旗新能源科技有限公司 Smart safe community management system and method
CN106780188A (en) * 2016-12-16 2017-05-31 巫溪县致恒科技有限公司 A kind of wisdom cell system integrated platform
CN108259827A (en) * 2018-01-10 2018-07-06 中科创达软件股份有限公司 A kind of method, apparatus for realizing security protection, AR equipment and system
CN108717679A (en) * 2018-08-16 2018-10-30 江西省创海科技有限公司 A kind of community management platform and system
CN108924262A (en) * 2018-08-17 2018-11-30 嘉善力通信息科技股份有限公司 A kind of intelligence peace cell managing and control system based on cloud platform
CN109086726A (en) * 2018-08-10 2018-12-25 陈涛 A kind of topography's recognition methods and system based on AR intelligent glasses
CN109165597A (en) * 2018-08-24 2019-01-08 福建铁工机智能机器人有限公司 A kind of visiting method based on wisdom rural area AI system
CN109325605A (en) * 2018-11-06 2019-02-12 国网河南省电力公司驻马店供电公司 Electric power based on augmented reality AR technology believes logical computer room inspection platform and method for inspecting

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105427504A (en) * 2015-12-24 2016-03-23 重庆甲虫网络科技有限公司 Wireless intelligent augmented reality firefighting monitoring system
CN106651715A (en) * 2016-09-22 2017-05-10 山东华旗新能源科技有限公司 Smart safe community management system and method
CN106780188A (en) * 2016-12-16 2017-05-31 巫溪县致恒科技有限公司 A kind of wisdom cell system integrated platform
CN108259827A (en) * 2018-01-10 2018-07-06 中科创达软件股份有限公司 A kind of method, apparatus for realizing security protection, AR equipment and system
CN109086726A (en) * 2018-08-10 2018-12-25 陈涛 A kind of topography's recognition methods and system based on AR intelligent glasses
CN108717679A (en) * 2018-08-16 2018-10-30 江西省创海科技有限公司 A kind of community management platform and system
CN108924262A (en) * 2018-08-17 2018-11-30 嘉善力通信息科技股份有限公司 A kind of intelligence peace cell managing and control system based on cloud platform
CN109165597A (en) * 2018-08-24 2019-01-08 福建铁工机智能机器人有限公司 A kind of visiting method based on wisdom rural area AI system
CN109325605A (en) * 2018-11-06 2019-02-12 国网河南省电力公司驻马店供电公司 Electric power based on augmented reality AR technology believes logical computer room inspection platform and method for inspecting

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
崔涛 et al.: "Smart community monitoring *** based on video processing" ("基于视频处理的智慧社区监控***"), 《信息记录材料》 (Information Recording Materials) *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112770087A (en) * 2020-12-25 2021-05-07 山东爱城市网信息技术有限公司 AR technology community 3D large-screen interaction method
CN113823112A (en) * 2021-07-31 2021-12-21 浙江慧享信息科技有限公司 Park parking space reservation auxiliary system and auxiliary method based on 3D projection
CN113823112B (en) * 2021-07-31 2023-01-03 浙江慧享信息科技有限公司 Park parking space reservation auxiliary system and auxiliary method based on 3D projection

Similar Documents

Publication Publication Date Title
US10043097B2 (en) Image abstraction system
JP4970195B2 (en) Person tracking system, person tracking apparatus, and person tracking program
CN100565555C (en) Peccancy parking detector based on computer vision
CN109271554A (en) A kind of intelligent video identifying system and its application
WO2004042673A2 (en) Automatic, real time and complete identification of vehicles
WO2012090200A1 (en) Calibration device and method for use in a surveillance system for event detection
CN206322194U (en) A kind of anti-fraud face identification system based on 3-D scanning
CN104504904B (en) A kind of means of transportation mobile collection method
CN105976392B (en) Vehicle tyre detection method and device based on maximum output probability
KR101874498B1 (en) System and Method for Aerial Photogrammetry of Ground Control Point for Space Information Acquisition based on Unmanned Aerial Vehicle System
CN109740444A (en) Flow of the people information displaying method and Related product
CN112102409A (en) Target detection method, device, equipment and storage medium
CN111199184A (en) Portable three-dimensional imaging equipment for intelligent community patrol and use method
CN113627339A (en) Privacy protection method, device and equipment
Saini et al. DroneRTEF: development of a novel adaptive framework for railroad track extraction in drone images
CN109889773A (en) Method, apparatus, equipment and the medium of the monitoring of assessment of bids room personnel
CN110324589A (en) A kind of monitoring system and method for tourist attraction
US20180089500A1 (en) Portable identification and data display device and system and method of using same
CN103591953B (en) A kind of personnel positioning method based on single camera
CN112115737B (en) Vehicle orientation determining method and device and vehicle-mounted terminal
CN111708907A (en) Target person query method, device, equipment and storage medium
US20230334675A1 (en) Object tracking integration method and integrating apparatus
CN113743380B (en) Active tracking method based on video image dynamic monitoring
CN105157681B (en) Indoor orientation method, device and video camera and server
CN110991316B (en) Method for automatically acquiring shape and identity information applied to open environment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200526