CN116824726A - Campus environment intelligent inspection method and system - Google Patents


Info

Publication number: CN116824726A
Application number: CN202310429084.XA
Authority: CN (China)
Prior art keywords: campus, inspection, video, information, interval
Legal status: Pending
Other languages: Chinese (zh)
Inventors: 唐宏宇, 黄华林
Current Assignee: Huaqiao University
Original Assignee: Huaqiao University
Priority date / Filing date: 2023-04-20
Publication date: 2023-09-29
Application filed by: Huaqiao University

Abstract

The invention relates to the technical field of campus inspection, and particularly discloses an intelligent campus environment inspection method and system. The method comprises: acquiring the positions of all inspection ends at regular intervals, and locating a problem interval in the campus map according to the positions; acquiring the collected information of each inspection end based on the problem interval; and generating a detection report according to the collected information; during operation, each inspection end acquires video information at a preset frequency and adjusts its movement speed in real time according to the video information. In the invention, the inspection ends acquire and recognize video, adjusting their movement speed in real time during recognition, while the backend acquires position information in real time, calculates the movement speed, infers the problem area back from the movement speed, reads the video acquired by the inspection ends according to the positioning result, and carries out subsequent supervision. The invention has a clear supervision target, high utilization of computing resources and high flexibility, can be quickly adapted to different areas, and has excellent portability.

Description

Campus environment intelligent inspection method and system
Technical Field
The invention relates to the technical field of campus inspection, in particular to an intelligent campus environment inspection method and system.
Background
Students are the main body of a campus. During their studies, urging from parents or teachers, play among classmates, and troubling learning problems can put students under extremely high pressure, so mentally less mature students are easily affected emotionally and may exhibit deviant behaviors. Therefore, the campus environment needs to be supervised so that teachers can discover student problems in time.
However, most existing supervision methods rely on fixed cameras. Although this approach ensures the comprehensiveness of information, the recognition process is very difficult, the computing resources required are very high, and the flexibility is low. How to provide a more flexible supervision architecture is the technical problem to be solved by the technical solution of the present invention.
Disclosure of Invention
The invention aims to provide an intelligent campus environment inspection method and system to solve the problems raised in the background art.
In order to achieve the above purpose, the present invention provides the following technical solutions:
an intelligent campus environment inspection method, the method comprising:
acquiring a campus map, and determining the number of inspection ends and the movement path of the inspection ends according to the campus map;
the method comprises the steps of acquiring the positions of all inspection ends at regular time, and positioning a problem section in a campus map according to the positions;
acquiring acquisition information of each inspection end based on the problem interval;
generating a detection report according to the acquired information;
in the running process of the inspection end, video information is obtained according to preset frequency, and the movement speed is adjusted in real time according to the video information.
As a further scheme of the invention: the step of acquiring the campus map and determining the number of inspection ends and the movement path thereof according to the campus map comprises the following steps:
acquiring campus images containing temperature information at fixed time according to a preset aerial track, identifying the campus images, and determining the positions of personnel;
marking a passing area in the campus map by taking the position of the personnel as the center;
dividing the campus map according to the passing area to obtain subareas;
determining a motion path based on the passing area, and determining acquisition parameters according to the subareas and the passing area; the acquisition parameters include definition and video acquisition angle.
As a further scheme of the invention: the step of regularly acquiring the positions of all the inspection terminals and positioning the problem section in the campus map according to the positions comprises the following steps:
the position of each inspection end is obtained at regular time, and the movement speed of each inspection end is calculated;
comparing the movement speed with a preset speed value, and calculating the relative position of the inspection end in the movement path when the movement speed is smaller than the preset speed value;
inquiring a video acquisition angle according to the relative position, and marking a suspicious interval in a campus map according to the video acquisition angle and the position of the inspection end;
counting all suspicious intervals and the marking times thereof, and marking the corresponding suspicious interval as a problem interval when the marking times reach a preset condition;
wherein, in the step of counting all the suspicious intervals and the marking times thereof, the suspicious intervals with intersections are regarded as the same interval and merged.
As a further scheme of the invention: the step of acquiring the acquired information of each inspection terminal based on the problem interval comprises the following steps:
when any inspection end marks a suspicious interval in the campus map, reading the relative position of the inspection end, and sending the relative position to the other inspection ends to serve as a suspicious position;
when any inspection end moves to a suspicious position, inserting a position tag into the acquired information;
acquiring the position of a problem interval, traversing and matching position labels in the acquired information of each acquisition end according to the position of the problem interval, and extracting the acquired information according to the traversing and matching result;
and when the distance between the position of the problem section and the position label is smaller than a preset distance threshold value, extracting the acquired information.
As a further scheme of the invention: in the running process of the inspection terminal, acquiring video information according to preset frequency, and adjusting the movement speed in real time according to the video information comprises the following steps:
acquiring video information according to a preset frequency, and converting the video information into a gray video;
carrying out normalization processing on the gray video, and carrying out regional segmentation on the gray video according to a normalization processing result;
comparing the region segmentation results of the adjacent frame images in real time, and calculating the difference rate;
when the difference rate reaches a preset threshold value, extracting an audio segment from the video information centered on the time point of that image;
and identifying the audio segment, and adjusting the movement speed in real time according to the identification result.
As a further scheme of the invention: the calculation formula for converting the video information into the gray video is as follows:
the formula for carrying out normalization processing on the gray video is as follows:
wherein I_t(x, y) is the gray value of the point (x, y) at time t; R_t(x, y), G_t(x, y) and B_t(x, y) are respectively the R, G and B values of the point (x, y) at time t; and J_t(x, y) is the normalized value of the point (x, y) at time t.
The technical scheme of the invention also provides an intelligent campus environment inspection system, which comprises:
the patrol terminal setting module is used for acquiring a campus map, and determining the number of patrol terminals and the movement path of the patrol terminals according to the campus map;
the problem interval positioning module is used for acquiring the position of each inspection end at regular time and positioning a problem interval in the campus map according to the position;
the acquisition information acquisition module is used for acquiring the acquisition information of each inspection end based on the problem interval;
the report generation module is used for generating a detection report according to the acquired information;
in the running process of the inspection end, video information is obtained according to preset frequency, and the movement speed is adjusted in real time according to the video information.
As a further scheme of the invention: the inspection terminal setting module comprises:
the personnel position determining unit is used for acquiring campus images containing temperature information at fixed time according to a preset aerial track, identifying the campus images and determining personnel positions;
the passing area marking unit is used for marking a passing area in the campus map by taking the position of the person as the center;
the map segmentation unit is used for segmenting the campus map according to the passing area to obtain subareas;
the acquisition parameter determining unit is used for determining a motion path based on the passing area and determining acquisition parameters according to the subarea and the passing area; the acquisition parameters include definition and video acquisition angle.
As a further scheme of the invention: the problem interval positioning module comprises:
the speed calculation unit is used for acquiring the position of each inspection end at regular time and calculating the movement speed of each inspection end;
the relative position calculating unit is used for comparing the movement speed with a preset speed value, and calculating the relative position of the inspection end in the movement path when the movement speed is smaller than the preset speed value;
the interval marking unit is used for inquiring a video acquisition angle according to the relative position, and marking a suspicious interval in the campus map according to the video acquisition angle and the position of the inspection end;
the interval counting unit is used for counting all suspicious intervals and the marking times thereof, and marking the corresponding suspicious interval as a problem interval when the marking times reach a preset condition;
wherein, in the step of counting all the suspicious intervals and the marking times thereof, the suspicious intervals with intersections are regarded as the same interval and merged.
As a further scheme of the invention: the acquisition information acquisition module comprises:
the traversal matching unit is used for acquiring the position of the problem interval, traversing and matching position labels in the acquired information of each acquisition end according to the position of the problem interval, and extracting the acquired information according to the traversal matching result;
when any inspection end marks a suspicious interval in the campus map, reading the relative position of the inspection end, and sending the relative position to the other inspection ends to serve as a suspicious position;
when any inspection end moves to a suspicious position, inserting a position tag into the acquired information;
and when the distance between the position of the problem section and the position label is smaller than a preset distance threshold value, extracting the acquired information.
Compared with the prior art, the invention has the following beneficial effects: the inspection ends acquire and recognize video, adjusting their movement speed in real time during recognition, while the backend acquires position information in real time, calculates the movement speed, infers the problem area back from the movement speed, reads the video acquired by the inspection ends according to the positioning result, and carries out subsequent supervision. The invention has a clear supervision target, high utilization of computing resources and high flexibility, can be quickly adapted to different areas, and has excellent portability.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the following description will briefly introduce the drawings that are needed in the embodiments or the description of the prior art, and it is obvious that the drawings in the following description are only some embodiments of the present invention.
Fig. 1 is a flow chart of a campus environment intelligent patrol method.
Fig. 2 is a first sub-flowchart block diagram of a campus environment intelligent patrol method.
Fig. 3 is a second sub-flowchart block diagram of the intelligent tour inspection method for campus environment.
Fig. 4 is a block diagram of the composition structure of the intelligent patrol system in the campus environment.
Detailed Description
In order to make the technical problems to be solved, the technical solutions, and the beneficial effects of the present invention clearer, the invention is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
Example 1
Fig. 1 is a flow chart of an intelligent campus environment inspection method, in an embodiment of the invention, the method includes:
step S100: acquiring a campus map, and determining the number of inspection ends and the movement path of the inspection ends according to the campus map;
the campus map is default stored data, a top view in the campus building model is read, the campus map can be obtained, and the number of inspection ends and the movement path of the inspection ends are determined according to the region situation in the campus map; the movement paths of the plurality of inspection ends are the same; when the inspection end moves to finish a movement path, videos of most areas in the campus can be obtained; the number of the inspection terminals is determined by the staff according to the situation, and the more the number of the inspection terminals is, the more the acquired videos are, the better the inspection effect is.
Step S200: the method comprises the steps of acquiring the positions of all inspection ends at regular time, and positioning a problem section in a campus map according to the positions;
While moving, each inspection end acquires video information at a preset frequency and adjusts its movement speed in real time according to that video. By acquiring the positions of all inspection ends at regular intervals, the speed of each inspection end can be calculated from the positions and the corresponding times (the data processing end computes the speed of the inspection end from the reported positions). In general, if an inspection end detects an abnormality, it slows down so that a longer video of the location is acquired; therefore, a section traversed at low speed is a section where a problem occurred, namely the problem section.
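As an illustrative sketch (not part of the claimed method), the backend calculation described above could look as follows; the class, the field names and the 0.5 m/s threshold are assumptions.

```python
from dataclasses import dataclass

@dataclass
class PositionReport:
    t: float  # timestamp in seconds
    x: float  # map coordinate in meters
    y: float

def slow_sections(reports, speed_threshold=0.5):
    """Return (start, end) pairs of consecutive position reports between which
    the average speed falls below the preset speed value; these stretches are
    the candidate problem sections inferred by the data processing end."""
    sections = []
    for prev, curr in zip(reports, reports[1:]):
        dt = curr.t - prev.t
        if dt <= 0:
            continue  # skip duplicate or out-of-order reports
        dist = ((curr.x - prev.x) ** 2 + (curr.y - prev.y) ** 2) ** 0.5
        if dist / dt < speed_threshold:
            sections.append((prev, curr))
    return sections
```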
Step S300: acquiring acquisition information of each inspection end based on the problem interval;
The information collected by each inspection end is sent to the data processing end at regular intervals. Most of this data is normal data whose analysis value is very low, so its transmission value is naturally not high either; in other words, only part of the data in each inspection end is useful data (the data corresponding to problem sections), and only such data needs to be acquired and analyzed.
Step S400: generating a detection report according to the acquired information;
the process of generating the detection report according to the acquired information is easy, and the process can be completed by means of the artificial end with the identification algorithm, the acquired information is sent to the artificial end, and the detection report fed back by the artificial end is received.
Fig. 2 is a first sub-flowchart of a campus environment intelligent patrol method, where the step of obtaining a campus map and determining the number of patrol terminals and the movement path thereof according to the campus map includes:
step S101: acquiring campus images containing temperature information at fixed time according to a preset aerial track, identifying the campus images, and determining the positions of personnel;
The roads on a campus map are preset, but students sometimes take informal "shortcuts", and these are revealed by the positions of people. In a preprocessing stage before the method is used, campus images containing temperature information are acquired several times along a preset aerial track (this process can also be completed by cameras installed on campus); the campus images are recognized according to the temperature information to determine the positions of people.
step S102: marking a passing area in the campus map by taking the position of the personnel as the center;
The areas where people are located are looked up in the campus map, and the areas that can be traversed are marked as passable areas.
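A minimal sketch of steps S101 and S102 as described above: people are detected from a temperature image and passable cells are marked around them on a campus map grid. The grid representation, the body-temperature threshold and the marking radius are assumptions for illustration, not values given in the patent.

```python
import numpy as np

def mark_passable(temp_map, body_temp_min=30.0, radius=3):
    """temp_map: 2-D array of temperatures aligned with a campus map grid.
    Cells at or above body_temp_min are treated as person positions, and a
    square neighbourhood of `radius` cells around each person is marked
    as a passable area."""
    passable = np.zeros(temp_map.shape, dtype=bool)
    people = np.argwhere(temp_map >= body_temp_min)  # person positions
    h, w = temp_map.shape
    for r, c in people:
        r0, r1 = max(0, r - radius), min(h, r + radius + 1)
        c0, c1 = max(0, c - radius), min(w, c + radius + 1)
        passable[r0:r1, c0:c1] = True  # area centered on the person
    return passable
```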
step S103: dividing the campus map according to the passing area to obtain subareas;
Dividing the campus map is not complex: once the passable area is marked, the remaining areas belong to the non-passable area. Of course, the segmentation result of the campus map is dynamic; if parts of the non-passable area are later traversed, the corresponding sub-areas are also marked as passable areas.
step S104: determining a motion path based on the passing area, and determining acquisition parameters according to the subareas and the passing area; the acquisition parameters comprise definition and video acquisition angles;
The inspection ends move within the passable area, and the movement path is determined within it; while an inspection end moves along the movement path, video is shot continuously. The acquisition parameters at each position need to be preset, and they are set so that the field of view is as large as possible, giving the inspection end the widest possible video coverage.
Fig. 3 is a second sub-flowchart of the intelligent tour inspection method for campus environment, where the step of periodically obtaining the position of each tour inspection terminal and locating the problem section in the campus map according to the position includes:
step S201: the position of each inspection end is obtained at regular time, and the movement speed of each inspection end is calculated;
The data processing end receives the position signals sent by the inspection ends at regular intervals, and the movement speed of each inspection end can be calculated from the position signals together with their timestamps.
step S202: comparing the movement speed with a preset speed value, and calculating the relative position of the inspection end in the movement path when the movement speed is smaller than the preset speed value;
according to the above, the inspection terminal can recognize the acquired video in the operation process, and if the acquired video is found to have problems, the movement speed can be regulated, so that whether the inspection terminal has problems can be judged remotely according to the movement speed.
Step S203: inquiring a video acquisition angle according to the relative position, and marking a suspicious interval in a campus map according to the video acquisition angle and the position of the inspection end;
The video acquisition angle is one of the acquisition parameters, and the acquisition parameters are tied to position; the corresponding video acquisition angle can therefore be queried from the relative position of the inspection end on the movement path, and the corresponding area is marked in the campus map as a suspicious interval.
Step S204: counting all suspicious intervals and the marking times thereof, and marking the corresponding suspicious interval as a problem interval when the marking times reach a preset condition;
If, after one inspection end marks a certain area as a suspicious interval, a later inspection end moving to the same position marks that area as a suspicious interval again, the probability that the suspicious interval indeed contains an abnormality is very high; therefore, the number of times a suspicious interval is marked is an important parameter.
Further, the preset condition is set by the administrator. If the administrator wants stricter supervision of the campus, the condition can be set to a single marking; in that case, as soon as any one inspection end marks an area, that area enters the next level of supervision.
It should be noted that, in the step of counting all suspicious intervals and their marking times, suspicious intervals that intersect are regarded as the same interval and merged: even if the areas marked by two inspection ends only partially overlap, they are considered the same area and are combined, as sketched below.
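A minimal sketch of this counting-and-merging rule, assuming each suspicious interval is represented as an axis-aligned rectangle on the campus map; the representation and the mark limit of 2 are assumptions.

```python
def intervals_intersect(a, b):
    """a, b: suspicious intervals as (xmin, ymin, xmax, ymax) rectangles."""
    return not (a[2] < b[0] or b[2] < a[0] or a[3] < b[1] or b[3] < a[1])

def problem_intervals(marks, mark_limit=2):
    """marks: one rectangle per marking event reported by an inspection end.
    Intersecting marks are treated as the same interval and merged (single-pass
    greedy merge, enough for illustration); an interval whose mark count reaches
    mark_limit is returned as a problem interval."""
    merged = []  # each entry: [bounding rectangle, mark count]
    for rect in marks:
        for entry in merged:
            if intervals_intersect(entry[0], rect):
                r = entry[0]
                entry[0] = (min(r[0], rect[0]), min(r[1], rect[1]),
                            max(r[2], rect[2]), max(r[3], rect[3]))
                entry[1] += 1
                break
        else:
            merged.append([rect, 1])
    return [rect for rect, count in merged if count >= mark_limit]
```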
As a preferred embodiment of the present invention, the step of acquiring the acquired information of each inspection end based on the problem interval includes:
when any inspection end marks a suspicious interval in the campus map, reading the relative position of the inspection end, and sending the relative position to the other inspection ends to serve as a suspicious position;
when any inspection end moves to a suspicious position, inserting a position tag into the acquired information;
The above defines how the acquired information is stored and tagged: the area-marking result of each inspection end is synchronized to the other inspection ends, and when an inspection end moves into an area marked by another inspection end, it tags the video acquired during the corresponding period.
Acquiring the position of a problem interval, traversing and matching position labels in the acquired information of each acquisition end according to the position of the problem interval, and extracting the acquired information according to the traversing and matching result;
When an administrator wants to query the video at a certain position, the tagged data is retrieved from each inspection end.
In the method, the position of an interval is represented by its center point. In the step of traversing and matching the position labels in the information acquired by each inspection end according to the position of the problem interval, the acquired information is extracted when the distance between the position of the problem interval and a position label is smaller than a preset distance threshold. The purpose of this condition is to gather as much video (from different time periods) as possible for a given location, which facilitates the subsequent recognition process.
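A minimal sketch of the traversal matching described above, assuming interval positions and position tags are 2-D map coordinates in the same units; the function name and the threshold value are illustrative.

```python
def extract_for_problem_interval(interval_center, tagged_clips, dist_threshold=10.0):
    """interval_center: (x, y) center point representing the problem interval.
    tagged_clips: list of ((x, y) position tag, clip) pairs taken from the
    information acquired by each inspection end.
    Returns the clips whose position tag lies within the distance threshold."""
    cx, cy = interval_center
    matched = []
    for (lx, ly), clip in tagged_clips:
        if ((lx - cx) ** 2 + (ly - cy) ** 2) ** 0.5 < dist_threshold:
            matched.append(clip)  # collect video from different time periods
    return matched
```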
In a preferred embodiment of the present invention, in the running process of the inspection end, the step of acquiring video information according to a preset frequency and adjusting the movement speed in real time according to the video information includes:
acquiring video information according to a preset frequency, and converting the video information into a gray video;
carrying out normalization processing on the gray video, and carrying out regional segmentation on the gray video according to a normalization processing result;
comparing the region segmentation results of the adjacent frame images in real time, and calculating the difference rate;
when the difference rate reaches a preset threshold value, extracting an audio segment from the video information centered on the time point of that image;
and identifying the audio segment, and adjusting the movement speed in real time according to the identification result.
The above defines the working process of the inspection end, and adjusting the movement speed of the inspection end is an essential technical feature of the technical solution of the invention. A video recognition algorithm is built into the inspection end: video information is acquired at a preset frequency and converted into a gray video according to a preset image conversion formula. Compared with a color value, a gray value is one-dimensional, but its range [0, 255] is still inconvenient to process, so the gray video is further processed, namely normalized, to adjust the value range of each point; the gray video can then be segmented into regions according to the normalized result. Processing a video in practice means processing the images in the video in sequence. Region segmentation is a form of contour recognition: pixel points with small differences are clustered into one class to obtain sub-regions.
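A minimal sketch of this per-frame processing chain (gray conversion, normalization, coarse region segmentation); since the patent publishes its formulas only as images, the luminance weights and the division by 255 are assumptions, and the bin-based segmentation merely illustrates grouping pixels with small differences.

```python
import numpy as np

def to_gray(frame_rgb):
    """frame_rgb: H x W x 3 uint8 frame. The 0.299/0.587/0.114 weights are the
    common luminance formula, used here as an assumption; the patent's own
    conversion formula is published only as an image."""
    r, g, b = frame_rgb[..., 0], frame_rgb[..., 1], frame_rgb[..., 2]
    return 0.299 * r + 0.587 * g + 0.114 * b

def normalize(gray):
    """Map gray values from [0, 255] to [0, 1] (assumed normalization)."""
    return gray.astype(np.float32) / 255.0

def segment(normalized, n_bins=8):
    """Coarse region segmentation: pixels with small value differences fall into
    the same quantization bin, approximating clustering of similar pixels."""
    labels = (normalized * n_bins).astype(np.int32)
    return np.minimum(labels, n_bins - 1)
```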
By comparing images at adjacent times in real time, the difference between two images can be calculated; if a difference exists, the content of the video has changed. At that moment the audio information needs to be extracted and recognized, for example by converting it to text, after which the presence of sensitive words is judged; the movement speed is then adjusted in real time according to whether sensitive words are present.
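A minimal sketch of the trigger logic: compute the difference rate between adjacent segmented frames, and when it is large, recognize the surrounding audio and slow down if sensitive words appear. The thresholds, the word list, the speeds and the transcribe_audio callable are all assumptions.

```python
def difference_rate(seg_prev, seg_curr):
    """Fraction of pixels whose segmentation label changed between adjacent frames."""
    return float((seg_prev != seg_curr).mean())

def adjust_speed(seg_prev, seg_curr, transcribe_audio, current_speed,
                 diff_threshold=0.2, sensitive_words=("help", "fight"),
                 slow_speed=0.2):
    """transcribe_audio: callable returning the recognized text of the audio
    segment centered on the current frame (speech-to-text is assumed).
    Returns the new movement speed: slow down when the scene changes and a
    sensitive word is heard, otherwise keep the current speed."""
    if difference_rate(seg_prev, seg_curr) >= diff_threshold:
        text = transcribe_audio()
        if any(word in text for word in sensitive_words):
            return slow_speed  # slow down to capture a longer video of the spot
    return current_speed
```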
In an example of the technical solution of the present invention, the calculation formula for converting the video information into the gray video is:
the formula for carrying out normalization processing on the gray video is as follows:
wherein I_t(x, y) is the gray value of the point (x, y) at time t; R_t(x, y), G_t(x, y) and B_t(x, y) are respectively the R, G and B values of the point (x, y) at time t; and J_t(x, y) is the normalized value of the point (x, y) at time t.
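The two formulas themselves are published only as images. A reconstruction consistent with the variable definitions above, using the common luminance weights and range normalization as assumptions, is:

```latex
% Grayscale conversion (weights assumed; standard luminance formula)
I_t(x,y) = 0.299\,R_t(x,y) + 0.587\,G_t(x,y) + 0.114\,B_t(x,y)

% Normalization of the gray video to [0,1] (assumed)
J_t(x,y) = \frac{I_t(x,y)}{255}
```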
Example 2
Fig. 4 is a block diagram of a composition structure of an intelligent inspection system for a campus environment, in an embodiment of the present invention, the system 10 includes:
the patrol terminal setting module 11 is used for acquiring a campus map, and determining the number of patrol terminals and the movement path of the patrol terminals according to the campus map;
the problem interval positioning module 12 is configured to obtain the position of each inspection end at regular time, and position a problem interval in the campus map according to the position;
the acquired information acquisition module 13 is used for acquiring the acquired information of each inspection end based on the problem interval;
a report generation module 14 for generating a detection report according to the acquired information;
in the running process of the inspection end, video information is obtained according to preset frequency, and the movement speed is adjusted in real time according to the video information.
The inspection terminal setting module 11 includes:
the personnel position determining unit is used for acquiring campus images containing temperature information at fixed time according to a preset aerial track, identifying the campus images and determining personnel positions;
the passing area marking unit is used for marking a passing area in the campus map by taking the position of the person as the center;
the map segmentation unit is used for segmenting the campus map according to the passing area to obtain subareas;
the acquisition parameter determining unit is used for determining a motion path based on the passing area and determining acquisition parameters according to the subarea and the passing area; the acquisition parameters include definition and video acquisition angle.
The problem area locating module 12 includes:
the speed calculation unit is used for acquiring the position of each inspection end at regular time and calculating the movement speed of each inspection end;
the relative position calculating unit is used for comparing the movement speed with a preset speed value, and calculating the relative position of the inspection end in the movement path when the movement speed is smaller than the preset speed value;
the interval marking unit is used for inquiring a video acquisition angle according to the relative position, and marking a suspicious interval in the campus map according to the video acquisition angle and the position of the inspection end;
the interval counting unit is used for counting all suspicious intervals and the marking times thereof, and marking the corresponding suspicious interval as a problem interval when the marking times reach a preset condition;
wherein, in the step of counting all the suspicious intervals and the marking times thereof, the suspicious intervals with intersections are regarded as the same interval and merged.
The acquired information acquisition module 13 includes:
the traversal matching unit is used for acquiring the position of the problem interval, traversing and matching position labels in the acquired information of each acquisition end according to the position of the problem interval, and extracting the acquired information according to the traversal matching result;
when any inspection end marks a suspicious interval in the campus map, reading the relative position of the inspection end, and sending the relative position to the other inspection ends to serve as a suspicious position;
when any inspection end moves to a suspicious position, inserting a position tag into the acquired information;
and when the distance between the position of the problem section and the position label is smaller than a preset distance threshold value, extracting the acquired information.
The foregoing description of the preferred embodiments of the invention is not intended to be limiting, but rather is intended to cover all modifications, equivalents, and alternatives falling within the spirit and principles of the invention.

Claims (10)

1. The intelligent campus environment inspection method is characterized by comprising the following steps:
acquiring a campus map, and determining the number of inspection ends and the movement path of the inspection ends according to the campus map;
the method comprises the steps of acquiring the positions of all inspection ends at regular time, and positioning a problem section in a campus map according to the positions;
acquiring acquisition information of each inspection end based on the problem interval;
generating a detection report according to the acquired information;
in the running process of the inspection end, video information is obtained according to preset frequency, and the movement speed is adjusted in real time according to the video information.
2. The intelligent campus environment inspection method according to claim 1, wherein the step of obtaining a campus map and determining the number of inspection terminals and the movement path thereof according to the campus map comprises:
acquiring campus images containing temperature information at fixed time according to a preset aerial track, identifying the campus images, and determining the positions of personnel;
marking a passing area in the campus map by taking the position of the personnel as the center;
dividing the campus map according to the passing area to obtain subareas;
determining a motion path based on the passing area, and determining acquisition parameters according to the subareas and the passing area; the acquisition parameters include definition and video acquisition angle.
3. The intelligent campus environment inspection method according to claim 1, wherein the step of acquiring the position of each inspection terminal at regular time and locating the problem section in the campus map according to the position comprises:
the position of each inspection end is obtained at regular time, and the movement speed of each inspection end is calculated;
comparing the movement speed with a preset speed value, and calculating the relative position of the inspection end in the movement path when the movement speed is smaller than the preset speed value;
inquiring a video acquisition angle according to the relative position, and marking a suspicious interval in a campus map according to the video acquisition angle and the position of the inspection end;
counting all suspicious intervals and the marking times thereof, and marking the corresponding suspicious interval as a problem interval when the marking times reach a preset condition;
wherein, in the step of counting all the suspicious intervals and the marking times thereof, the suspicious intervals with intersections are regarded as the same interval and merged.
4. The campus environment intelligent patrol method according to claim 3, wherein the step of acquiring the collected information of each patrol terminal based on the problem section comprises:
when any inspection end marks a suspicious interval in the campus map, reading the relative position of the inspection end, and sending the relative position to the other inspection ends to serve as a suspicious position;
when any inspection end moves to a suspicious position, inserting a position tag into the acquired information;
acquiring the position of a problem interval, traversing and matching position labels in the acquired information of each acquisition end according to the position of the problem interval, and extracting the acquired information according to the traversing and matching result;
and when the distance between the position of the problem section and the position label is smaller than a preset distance threshold value, extracting the acquired information.
5. The intelligent campus environment inspection method according to claim 1, wherein during the operation of the inspection terminal, video information is obtained according to a preset frequency, and the step of adjusting the movement speed in real time according to the video information comprises the steps of:
acquiring video information according to a preset frequency, and converting the video information into a gray video;
carrying out normalization processing on the gray video, and carrying out regional segmentation on the gray video according to a normalization processing result;
comparing the region segmentation results of the adjacent frame images in real time, and calculating the difference rate;
when the difference rate reaches a preset threshold value, extracting an audio segment from the video information centered on the time point of that image;
and identifying the audio segment, and adjusting the movement speed in real time according to the identification result.
6. The intelligent campus environment inspection method according to claim 5, wherein the calculation formula for converting the video information into gray video is:
the formula for carrying out normalization processing on the gray video is as follows:
wherein I_t(x, y) is the gray value of the point (x, y) at time t; R_t(x, y), G_t(x, y) and B_t(x, y) are respectively the R, G and B values of the point (x, y) at time t; and J_t(x, y) is the normalized value of the point (x, y) at time t.
7. A campus environment intelligent patrol system, the system comprising:
the patrol terminal setting module is used for acquiring a campus map, and determining the number of patrol terminals and the movement path of the patrol terminals according to the campus map;
the problem interval positioning module is used for acquiring the position of each inspection end at regular time and positioning a problem interval in the campus map according to the position;
the acquisition information acquisition module is used for acquiring the acquisition information of each inspection end based on the problem interval;
the report generation module is used for generating a detection report according to the acquired information;
in the running process of the inspection end, video information is obtained according to preset frequency, and the movement speed is adjusted in real time according to the video information.
8. The campus environment intelligent patrol system according to claim 7, wherein the patrol terminal setting module comprises:
the personnel position determining unit is used for acquiring campus images containing temperature information at fixed time according to a preset aerial track, identifying the campus images and determining personnel positions;
the passing area marking unit is used for marking a passing area in the campus map by taking the position of the person as the center;
the map segmentation unit is used for segmenting the campus map according to the passing area to obtain subareas;
the acquisition parameter determining unit is used for determining a motion path based on the passing area and determining acquisition parameters according to the subarea and the passing area; the acquisition parameters include definition and video acquisition angle.
9. The campus environment intelligent patrol system according to claim 7, wherein the problem interval positioning module comprises:
the speed calculation unit is used for acquiring the position of each inspection end at regular time and calculating the movement speed of each inspection end;
the relative position calculating unit is used for comparing the movement speed with a preset speed value, and calculating the relative position of the inspection end in the movement path when the movement speed is smaller than the preset speed value;
the interval marking unit is used for inquiring a video acquisition angle according to the relative position, and marking a suspicious interval in the campus map according to the video acquisition angle and the position of the inspection end;
the interval counting unit is used for counting all suspicious intervals and the marking times thereof, and marking the corresponding suspicious interval as a problem interval when the marking times reach a preset condition;
wherein, in the step of counting all the suspicious intervals and the marking times thereof, the suspicious intervals with intersections are regarded as the same interval and merged.
10. The campus environment intelligent patrol system according to claim 9, wherein the collected information acquisition module comprises:
the traversal matching unit is used for acquiring the position of the problem interval, traversing and matching position labels in the acquired information of each acquisition end according to the position of the problem interval, and extracting the acquired information according to the traversal matching result;
when any inspection end marks a suspicious interval in the campus map, reading the relative position of the inspection end, and sending the relative position to the other inspection ends to serve as a suspicious position;
when any inspection end moves to a suspicious position, inserting a position tag into the acquired information;
and when the distance between the position of the problem section and the position label is smaller than a preset distance threshold value, extracting the acquired information.
CN202310429084.XA 2023-04-20 2023-04-20 Campus environment intelligent inspection method and system Pending CN116824726A (en)

Priority Applications (1)

Application Number: CN202310429084.XA
Publication: CN116824726A (en)
Priority Date / Filing Date: 2023-04-20
Title: Campus environment intelligent inspection method and system

Applications Claiming Priority (1)

Application Number: CN202310429084.XA
Publication: CN116824726A (en)
Priority Date / Filing Date: 2023-04-20
Title: Campus environment intelligent inspection method and system

Publications (1)

Publication Number: CN116824726A (en)
Publication Date: 2023-09-29

Family

ID: 88122985

Family Applications (1)

Application Number: CN202310429084.XA
Priority Date / Filing Date: 2023-04-20
Title: Campus environment intelligent inspection method and system

Country Status (1)

Country: CN; Publication: CN116824726A (en)


Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117292457A (en) * 2023-11-22 2023-12-26 成都易电云商工程服务有限公司 Intelligent inspection management system
CN117292457B (en) * 2023-11-22 2024-02-06 成都易电云商工程服务有限公司 Intelligent inspection management system
CN117314129A (en) * 2023-11-29 2023-12-29 北京爱可生信息技术股份有限公司 Park safety inspection system based on Internet of things
CN117314129B (en) * 2023-11-29 2024-02-02 北京爱可生信息技术股份有限公司 Park safety inspection system based on Internet of things

Legal Events

Code  Description
PB01  Publication
SE01  Entry into force of request for substantive examination