KR101788225B1 - Method and System for Recognition/Tracking Construction Equipment and Workers Using Construction-Site-Customized Image Processing - Google Patents


Info

Publication number
KR101788225B1
KR101788225B1
Authority
KR
South Korea
Prior art keywords
tracking
image
descriptor
learning
classifier
Prior art date
Application number
KR1020150034767A
Other languages
Korean (ko)
Other versions
KR20160109761A (en)
Inventor
지석호
김진우
서종원
Original Assignee
서울대학교산학협력단
한양대학교 산학협력단
Priority date
Filing date
Publication date
Application filed by 서울대학교산학협력단 and 한양대학교 산학협력단
Priority to KR1020150034767A
Publication of KR20160109761A
Application granted
Publication of KR101788225B1


Classifications

    • G06K9/00771
    • G06K9/627

Landscapes

  • Image Analysis (AREA)

Abstract

The present invention first learns the image characteristics of the heavy equipment and workers present at a construction site, taking the characteristics of various construction sites into account, and then recognizes and tracks the heavy equipment and workers in real-time images of the actual site to form data on their position coordinates.

Description

TECHNICAL FIELD [0001] The present invention relates to a method and system for recognizing and tracking heavy equipment and workers using construction-site-customized image analysis techniques.

More particularly, the present invention relates to a method and system that learns in advance the image characteristics of the heavy equipment and workers present at a construction site, then recognizes and tracks the heavy equipment and workers in real-time images of the actual construction site, and forms data on their position coordinates.

As construction sites become larger and more complex, demand is growing for "construction-site situation information" for more efficient construction project management (safety management at the site, productivity management in the construction process, etc.): the number, type, and location of the workers and heavy equipment deployed at the site (worker/heavy-equipment status information), as well as their actual working time, idling time, and travel routes.

Conventionally, such data was collected by a manager directly observing the construction site (direct observation). Because this conventional method is passive and non-real-time, and also makes data management inefficient, the need for real-time, automatic collection of construction-site situation information has greatly increased.

As a solution, it has been proposed to develop technology that recognizes and tracks heavy equipment and workers at the construction site using images acquired by CCTV installed in the field. In particular, Korean Patent Registration No. 10-1425170 discloses an example of a technique that photographs an object with a device such as a camera and then tracks the object using the photographed image data.

However, this related art is difficult to apply to recognizing and tracking workers and heavy equipment according to the characteristics of the many types of construction sites and of the workers and heavy equipment deployed there, and to providing position information on them. That is, the conventional technology is hard to apply to a construction site, and for this reason its utilization at construction sites is low.

Korean Patent Registration No. 10-1425170 (published 2014.08.04).

The object of the present invention is to provide a technology that automatically collects real-time construction-site situation information and that recognizes and tracks workers and heavy equipment in accordance with the varied characteristics of a construction site to form position information on them.

According to an aspect of the present invention, there is provided a worker and heavy-equipment recognition-tracking system for a construction site, comprising: a learning module that generates a classifier for distinguishing workers and heavy equipment from the background; a recognition module that recognizes heavy equipment and workers in a tracking image obtained by photographing an actual construction site in real time; and a tracking module that recognizes the real-time position of each recognized tracking object and forms position information on it, thereby recognizing and tracking the workers and heavy equipment at the construction site in real time and forming position information on them.

In the recognition-tracking system of the present invention, the learning module includes: an image capturing unit that photographs the construction site in real time to acquire a learning image and a tracking image; an image storage unit that stores the images acquired by the image capturing unit and builds a DB; a learning data set generating unit that, according to the administrator's designation, generates from the learning images stored in the DB a learning data set consisting of a positive sample, a collection of images extracted from only the regions containing the tracking objects, and a negative sample, a collection of images of the regions excluding the tracking objects; a first descriptor generating unit that generates, for each image of the learning data set, descriptors for recognizing only the tracking objects; and a classifier generating unit that, from the learning data set, sets a criterion capable of distinguishing the tracking objects from the rest of the background and generates a classifier that determines, according to the set criterion, whether an object in a photographed image is a tracking object.
Further, the recognition module may include: an image input unit that receives a tracking image of the construction site; a second descriptor generating unit that calculates descriptors for regions extracted from every frame of the tracking image received through the image input unit, computing the descriptor of each extracted region while varying the region's size; an object recognizing unit that determines, using the classifier created by the classifier generating unit, whether the descriptor transmitted from the second descriptor generating unit corresponds to a tracking object, and thereby recognizes the tracking objects; and a recognition data storage unit that receives and stores data on the type, position coordinates, size, and descriptor of each recognized tracking object and builds a DB.

In particular, in the recognition-tracking system of the present invention, the tracking module may include: a similarity calculating unit that calculates the similarity between the recognized tracking objects in consecutive frames of the tracking image captured by the image capturing unit; an object tracking unit that, when the similarity calculated by the similarity calculating unit is equal to or greater than a predetermined similarity reference value, regards the objects recognized in the consecutive frames as the same object and regards the object's real-time position coordinates as the position information of the tracking object; and a tracking data storage unit that stores the position information of the tracking object transmitted from the object tracking unit, builds a DB, and provides the information to the manager. Here, the descriptors are a Histogram of Oriented Gradients, in which the direction vectors at the points on the contour of an object of interest in the photographed image are integrated into a representative vector of the object, and a Local Binary Pattern, in which the brightness differences between each pixel of the image and its neighboring pixel values are formed into a histogram and expressed as a vector.

To achieve the above object, the present invention also provides a method of recognizing and tracking workers and heavy equipment at a construction site, comprising: a classifier generating step in which, as preliminary preparation, the construction site is photographed to acquire a learning image, a learning process using the acquired learning image forms a learning data set for the construction site, and a classifier for distinguishing the workers and heavy equipment to be tracked from the background is formed using that data set; a recognition step in which the tracking objects are recognized in real-time images of the actual construction site; and a tracking step for the recognized tracking objects, thereby recognizing and tracking the workers and heavy equipment at the construction site in real time and forming position information on them.

In the method of the present invention, the classifier generating step may include: collecting, by the image capturing unit photographing the construction site in real time, a learning image containing the tracking objects; designating, by the manager and for each frame of the captured learning image, a tracking-object region containing a tracking object; generating a learning data set consisting of a positive sample, a collection of images extracted from only the regions containing the tracking objects, and a negative sample, a collection of images of the regions excluding the tracking objects, by separating the images of the tracking-object regions from the images of the non-tracking-object regions; a descriptor calculating step of forming, for each image of the learning data set, descriptors for discriminating only the tracking objects; and generating, from the learning data set, a classifier that sets a criterion capable of distinguishing the tracking objects from the rest of the background and determines, according to the set criterion, whether an object in a photographed image is a tracking object. Further, the recognition step may include: capturing images in real time of the site where the tracking objects are to be tracked; and calculating descriptors for regions extracted from every frame of the tracking image, computing the descriptor of each extracted region while varying the region's size, and recognizing the tracking objects by determining with the classifier whether each descriptor corresponds to a tracking object.

Also, in the method according to the present invention, the tracking step for the recognized tracking objects may include: a similarity calculating step of calculating the similarity between the recognized tracking objects in consecutive frames of the tracking image; a tracking step in which, when the similarity is equal to or greater than a predetermined similarity reference value, the objects recognized in the consecutive frames are regarded as the same object and the object's real-time position coordinates are regarded as the position information of the tracking object; and providing the tracking-object position information to the manager and storing it.

According to the present invention, images obtained in real time from the same construction site are used as learning images before the recognition-tracking operation on the actual tracking objects is performed, a classifier is created from them, and the heavy equipment and workers are then recognized and tracked in real-time images of the construction site at the desired point in time using that classifier. As a result, both the detection rate and the accuracy of the tracking objects' position coordinates are greatly improved.

In particular, according to the present invention, the locations of materials, heavy equipment, and workers at a construction site can be accurately tracked in real time, improving safety and productivity at the site.

Further, according to the present invention, work-pattern analysis, idling, waiting time, redundant work, and the like can be grasped through accurate monitoring of the heavy equipment even at a large and complex earthwork site, leading to optimal movement and operation of the heavy equipment, so that productivity can be improved by efficiently guiding the collaboration of multiple pieces of heavy equipment.

Furthermore, according to the present invention, collision-accident-prevention notifications between heavy equipment in operation and workers at the construction site, erroneous-operation notifications, and the like can be realized in real time, greatly enhancing site safety.

Fig. 1 is a schematic diagram of a system for recognizing and tracking workers and heavy equipment at a construction site according to the present invention.
Fig. 2 is a schematic configuration diagram of the learning module.
Fig. 3 is a picture showing an example of a positive sample and an example of a negative sample of a learning data set.
Fig. 4 is a schematic configuration diagram of the recognition module.
Fig. 5 is a schematic configuration diagram of the tracking module.
Fig. 6 is a schematic flowchart of a method for recognizing and tracking workers and heavy equipment at a construction site according to the present invention.
Fig. 7 is a schematic flowchart of the classifier generation step by learning.
Fig. 8 is a schematic flowchart of the recognition step.
Fig. 9 is a schematic flowchart of the tracking step.

Hereinafter, preferred embodiments of the present invention will be described with reference to the accompanying drawings. Although the present invention is described with reference to the embodiments shown in the drawings, the technical idea of the present invention and its essential structure and operation are not limited thereto.

Fig. 1 shows a schematic diagram of the worker and heavy-equipment recognition and tracking system at a construction site according to the present invention (hereinafter abbreviated as "recognition-tracking system"). As shown in Fig. 1, the recognition-tracking system according to the present invention comprises a learning module (1), a recognition module (2), and a tracking module (3).

The learning module (1) collects learning images by photographing the construction site in real time and, based on them, generates a classifier for recognizing the workers and heavy equipment to be tracked in tracking images subsequently obtained from the construction site in real time.

In other words, in the present invention, as preparatory work for recognizing and tracking the actual workers and heavy equipment, images photographed in real time at the construction site are first collected as learning images and used to establish judgment criteria that distinguish the workers and heavy equipment from the background. The learning module (1) collects the learning images of the construction site in real time and, based on the collected learning images, generates a classifier that recognizes the workers and heavy equipment against the background according to those judgment criteria.

Fig. 2 shows a schematic configuration diagram of the learning module (1). As shown in the figure, the learning module (1) includes an image capturing unit (11), an image storage unit (12), a learning data set generating unit (13), a first descriptor generating unit (14), and a classifier generating unit (15).

The image capturing unit (11) consists of a camera, CCTV, or the like and photographs the construction site in real time. The image storage unit (12) stores the images captured by the image capturing unit (11) and builds a DB.

According to the present invention, a learning process must be performed in advance in order to recognize and track workers and heavy equipment in real-time images of a construction site. Accordingly, the image capturing unit (11) acquires images of the construction site for the learning process, i.e., learning images, and the image storage unit (12) stores them. The learning data set generating unit (13) then generates a "learning data set" from the learning images stored in the image storage unit (12) and built into the DB, according to the administrator's designation.

Specifically, for each image frame at predetermined time intervals of the learning image stored in the image storage unit (12), the administrator designates the region of the captured image containing the tracking objects (the heavy equipment and workers to be recognized and tracked), and the learning data set generating unit (13) separates out only the images of the regions designated by the administrator. For convenience, the region designated by the administrator as containing a tracking object is called the "tracking-object region", and the remainder is called the "non-tracking-object region". When the images of the tracking-object regions are separated from those of the non-tracking-object regions, the collection of tracking-object-region images is called the "positive sample", and the collection of non-tracking-object-region images is called the "negative sample". Fig. 3 is a photograph illustrating an example of a positive sample and an example of a negative sample.

Since the photographed images contain a great deal of data, the collections of images created by the learning data set generating unit (13) and classified into positive and negative samples constitute the "learning data set". Because these separated images are used to form the classifier, the term "learning" is used.

As described above, the learning data set generating unit (13) generates, from the images photographed for learning, a collection of images extracted from only the regions containing the heavy equipment and workers (the positive sample of the learning data set) and a collection of images of the remaining regions (the negative sample of the learning data set).
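The separation into positive and negative samples described above can be sketched as follows. This is a minimal illustration under assumptions not stated in the patent: the function name `build_training_set`, the `(x, y, w, h)` box format for the administrator-designated region, and the random sampling of background crops are all choices made for the example.

```python
import numpy as np

def build_training_set(frames, target_boxes, n_negatives_per_frame=2, seed=0):
    """Split frames into positive crops (administrator-designated tracking-object
    regions) and negative crops (background regions of the same size).

    frames       : list of HxW grayscale images (numpy arrays)
    target_boxes : list of (x, y, w, h) boxes, one per frame, marking the target
    """
    rng = np.random.default_rng(seed)
    positives, negatives = [], []
    for frame, (x, y, w, h) in zip(frames, target_boxes):
        H, W = frame.shape
        positives.append(frame[y:y + h, x:x + w])      # tracking-object region
        for _ in range(n_negatives_per_frame):         # non-tracking-object regions
            nx = int(rng.integers(0, max(1, W - w)))
            ny = int(rng.integers(0, max(1, H - h)))
            # skip candidate crops that overlap the designated target box
            if nx < x + w and nx + w > x and ny < y + h and ny + h > y:
                continue
            negatives.append(frame[ny:ny + h, nx:nx + w])
    return positives, negatives
```

In the patented system the boxes would come from the administrator's per-frame designations rather than from code; this sketch only shows the cropping step that turns those designations into a learning data set.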

Generally, to distinguish a specific object or the background in a photographed image using image analysis techniques, a "descriptor" that mathematically expresses each object or background in the image is used. For example, if a specific object is distinguished from the background by the shape of the object, the descriptor is a mathematical expression by which the shape of that object can be distinguished from the background; if the object is distinguished by its color, a mathematical expression of the object's color becomes the descriptor.

In particular, in the field of image analysis, the descriptors "Histogram of Oriented Gradients" (hereinafter HOG) and "Local Binary Pattern" (hereinafter LBP) are well known. HOG is a descriptor that reflects the features of an object's external shape by integrating the direction vectors at the points on the contour of the object of interest in the photographed image and expressing the object as a vector; methods of calculating HOG are already known. LBP, on the other hand, is a descriptor used for recognizing human faces, pedestrians, and the like in photographed images: the brightness differences between each pixel of the image and its surrounding pixel values are formed into a histogram and expressed as a vector. LBP is a descriptor that reflects the features of the object's internal appearance; methods of calculating LBP are also already known.

Since HOG uses information about the object's contour, i.e., its outline, an object can still be recognized with HOG as the descriptor even when the object's size or the image's brightness changes. Likewise, since LBP uses information about the object's internal appearance, an object can still be recognized with LBP as the descriptor under such changes in size or brightness.
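The two descriptors can be illustrated with a simplified numpy sketch. This is not the patented computation: the full HOG of the literature uses cells and block normalization, and practical LBP variants use interpolated circular neighborhoods, both omitted here for brevity.

```python
import numpy as np

def lbp_histogram(img, bins=256):
    """Minimal 8-neighbour Local Binary Pattern: each interior pixel is coded
    by which of its 8 neighbours are at least as bright, and the codes are
    collected into a normalised histogram (the LBP descriptor vector)."""
    img = img.astype(np.float64)
    c = img[1:-1, 1:-1]                                     # interior pixels
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(shifts):
        nb = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        code |= ((nb >= c).astype(np.uint8) << bit)
    hist, _ = np.histogram(code, bins=bins, range=(0, bins))
    return hist / hist.sum()

def hog_histogram(img, orientations=9):
    """Simplified HOG: a single global histogram of unsigned gradient
    orientations weighted by gradient magnitude."""
    gy, gx = np.gradient(img.astype(np.float64))
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180              # unsigned orientation
    hist, _ = np.histogram(ang, bins=orientations, range=(0, 180), weights=mag)
    return hist / (hist.sum() + 1e-12)
```

Both functions map an image region to a fixed-length vector, which is what allows the classifier described below to compare regions of different content on a common footing.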

In the present invention, HOG and LBP are used as the descriptors for recognizing only the workers and heavy equipment in images taken at the construction site. Accordingly, the first descriptor generating unit (14) calculates HOG and LBP for each image of the learning data set generated from the learning images by the learning data set generating unit (13), that is, for the tracking-object-region images of the positive sample and the non-tracking-object-region images of the negative sample, and uses the calculated HOG and LBP as the descriptors of the respective images.

The classifier generating unit (15) generates a classifier for distinguishing the heavy equipment and workers of the construction site from the rest of the background in a photographed image. In other words, the classifier generating unit (15) finds the common features of the worker descriptors calculated from the learning data set and of the heavy-equipment descriptors calculated from it, sets a judgment criterion capable of distinguishing the workers and heavy equipment from the rest of the background, and generates a classifier that determines, according to the set criterion, whether an object in a photographed image is a worker, heavy equipment, or simply background.

Since the worker and heavy-equipment descriptors calculated from the learning data set are expressed mathematically as vectors, their common feature is the position coordinates pointed to by those vectors, that is, the common range (spatial domain) indicated by the descriptor vectors of the workers and heavy equipment. The classifier generating unit (15) therefore sets the "position coordinate range" of the descriptor vectors of the workers and heavy equipment as an element of the judgment criterion and judges whether a descriptor falls within the predetermined range. The classifier generated by the classifier generating unit (15) can thus determine whether an object in the photographed image is a worker, heavy equipment, or simply background according to whether the object's descriptor vector lies within the predetermined position coordinate range.

The "position coordinate range" of the descriptor vectors in the classifier generating unit (15) can be calculated, for example, through the following process. First, arbitrary vector elements are randomly selected from the vector, and an arbitrary range is set for the selected elements. The accuracy (correct-answer rate) of the set range is then obtained. If the calculated accuracy is equal to or higher than a preset accuracy set by the operator, this range is adopted as the judgment criterion; if it is below the preset value, the process returns to the step of selecting arbitrary vector elements and is repeated until the position coordinate range is finally created.
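The iterative range-search just described can be sketched as follows. The patent does not specify how the candidate range is proposed, so this example derives it from the min/max of a random subset of positive descriptors; that choice, the function name, and the accuracy formula over the pooled training descriptors are assumptions for illustration.

```python
import numpy as np

def learn_range_criterion(pos_desc, neg_desc, n_elems=3, target_accuracy=0.9,
                          max_iters=500, seed=0):
    """Randomly pick descriptor-vector elements and a value range for them,
    and keep the first range whose accuracy on the training descriptors
    reaches the operator's preset target.

    pos_desc / neg_desc : arrays of shape (n_samples, dim) of descriptor vectors.
    Returns (element_indices, lower_bounds, upper_bounds), or None if no
    range reached the target within max_iters.
    """
    rng = np.random.default_rng(seed)
    dim = pos_desc.shape[1]
    for _ in range(max_iters):
        idx = rng.choice(dim, size=n_elems, replace=False)   # arbitrary elements
        # candidate range: min/max of those elements over a random positive subset
        sample = pos_desc[rng.choice(len(pos_desc),
                                     size=max(2, len(pos_desc) // 2),
                                     replace=False)]
        lo, hi = sample[:, idx].min(axis=0), sample[:, idx].max(axis=0)
        inside = lambda d: np.all((d[:, idx] >= lo) & (d[:, idx] <= hi), axis=1)
        # accuracy: positives inside the range plus negatives outside it
        correct = inside(pos_desc).sum() + (~inside(neg_desc)).sum()
        accuracy = correct / (len(pos_desc) + len(neg_desc))
        if accuracy >= target_accuracy:                      # criterion accepted
            return idx, lo, hi
    return None
```

A descriptor of a new image region would then be classified as a tracking object when its selected elements all fall inside the accepted `[lo, hi]` range.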

As described above, since the present invention uses HOG and LBP as the descriptors for distinguishing the workers and heavy equipment from the background, the classifier generating unit (15) finds the common features of the HOG and LBP of the workers and heavy equipment extracted by the first descriptor generating unit (14) from the learning images of the construction site presented as the learning data set, and generates a classifier to serve as the criterion for discriminating the objects of a photographed image. Therefore, as described later, the classifier uses HOG and LBP to determine whether heavy equipment and workers are present in a photographed image and, if so, of what types and in what numbers.

Fig. 4 shows a schematic configuration diagram of the recognition module (2). The recognition module (2) performs the function of recognizing the heavy equipment and workers in the tracking images obtained by photographing the actual construction site after acquisition of the learning images is finished. As shown in the figure, it includes an image input unit (21), a second descriptor generating unit (22), an object recognizing unit (23), and a recognition data storage unit (24).

The image input unit (21) receives the tracking images of the construction site that the image capturing unit (11) photographs and transmits in real time, and passes them to the second descriptor generating unit (22). The second descriptor generating unit (22) extracts arbitrary regions of various sizes from every frame of the received tracking image, calculates HOG and LBP for each extracted region, and transmits the calculated HOG and LBP to the object recognizing unit (23) as descriptors. This series of processes, i.e., region extraction, calculation of HOG and LBP, and transmission to the object recognizing unit (23), is repeated in every frame of the tracking image while the size of the extracted region is varied.
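The repeated extraction of regions at varying sizes amounts to a multi-scale sliding window, which can be sketched as follows. The base window size, scale factors, and stride are illustrative assumptions; the patent only says the region size is varied.

```python
import numpy as np

def sliding_windows(frame, base=(64, 64), scales=(1.0, 1.5, 2.0), step=32):
    """Yield (x, y, w, h, crop) for windows of several sizes over one frame,
    mirroring how the second descriptor generating unit extracts regions
    before computing HOG and LBP for each."""
    H, W = frame.shape[:2]
    for s in scales:                     # vary the size of the extracted region
        w, h = int(base[0] * s), int(base[1] * s)
        for y in range(0, H - h + 1, step):
            for x in range(0, W - w + 1, step):
                yield x, y, w, h, frame[y:y + h, x:x + w]
```

Each yielded crop would be passed to the descriptor functions and then to the classifier; smaller strides and more scales raise the detection rate at the cost of more descriptor computations per frame.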

Using the classifier created by the classifier generating unit (15), the object recognizing unit (23) determines whether the descriptors transmitted from the second descriptor generating unit (22), i.e., the HOG and LBP, correspond to a tracking object (the heavy equipment and workers to be recognized and tracked). That is, the object recognizing unit (23) operates the classifier to determine whether the HOG and LBP calculated and transmitted by the second descriptor generating unit (22) meet the predetermined criteria for heavy equipment and workers; if the criteria are met, the object in the image having that HOG and LBP is judged to be heavy equipment or a worker and is recognized as a tracking object.

When an object is determined to be heavy equipment or a worker, the object recognizing unit (23) forms data on it, i.e., the tracking object's type, position coordinates, size, and descriptor, and transmits the data to the recognition data storage unit (24), which stores the received data and builds a DB.

Fig. 5 shows a schematic configuration diagram of the tracking module (3). As shown in the figure, the tracking module (3) includes a similarity calculating unit (31), an object tracking unit (32), and a tracking data storage unit (33).

The similarity calculating unit (31) calculates the similarity between the tracking objects recognized in consecutive frames of the tracking image captured by the image capturing unit (11). Specifically, the similarity is calculated using the distance between the position coordinates of the tracking objects. For example, the distance between the position coordinates of the tracking objects recognized in two consecutive frames may be calculated, and the reciprocal of that value regarded as their similarity.

The object tracking unit (32) tracks the tracking objects using the similarity calculated by the similarity calculating unit (31). Specifically, when the similarity received from the similarity calculating unit (31) is equal to or greater than a predetermined similarity reference value, the objects recognized in the consecutive frames are regarded as the same object, and the object's real-time position coordinates are transmitted to the tracking data storage unit (33) as the position information of the tracking object. On the other hand, when the similarity received from the similarity calculating unit (31) is less than the preset similarity reference value, the object of the preceding image frame is regarded as having disappeared and its tracking is stopped.
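The reciprocal-distance similarity and the threshold decision above can be sketched together. The greedy one-to-one matching strategy and the specific threshold value are assumptions for the example; the patent only states that objects are linked when the similarity meets the reference value and dropped otherwise.

```python
import numpy as np

def similarity(coord_prev, coord_curr):
    """Similarity between detections in consecutive frames, taken here as the
    reciprocal of the Euclidean distance between their position coordinates."""
    d = np.hypot(coord_prev[0] - coord_curr[0], coord_prev[1] - coord_curr[1])
    return 1.0 / d if d > 0 else float("inf")

def associate(prev_objects, curr_objects, sim_threshold=0.05):
    """Greedy frame-to-frame association: a current detection continues the
    track of a previous object if it is the most similar candidate and its
    similarity clears the preset reference value; previous objects with no
    such candidate are considered to have disappeared.

    prev_objects / curr_objects : dicts mapping object id -> (x, y) coordinates.
    Returns a dict mapping previous ids to matched current ids.
    """
    matches = {}
    for pid, pc in prev_objects.items():
        best, best_sim = None, sim_threshold
        for cid, cc in curr_objects.items():
            s = similarity(pc, cc)
            if s >= best_sim:
                best, best_sim = cid, s
        if best is not None:
            matches[pid] = best          # same object across the two frames
    return matches
```

With a reciprocal-distance similarity, the reference value acts as a maximum allowed displacement between frames (here, 1/0.05 = 20 coordinate units).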

The tracking data storage unit (33) provides the transmitted position information of the tracking objects to the manager and simultaneously stores it to build a DB.

Next, the specific method of recognizing and tracking, in real time, the heavy equipment and workers deployed at the construction site through the recognition-tracking system described above (hereinafter the "recognition-tracking method") is described. Fig. 6 shows a schematic flowchart of the recognition-tracking method according to the present invention.

The recognition-tracking method of the present invention includes a classifier generation step using a learning data set for the construction site (step 1), a recognition step in which the tracking objects are recognized in real-time images of the actual construction site (step 2), and a tracking step for the recognized tracking objects (step 3).

In the classifier generation step (step 1), as preliminary preparation, the construction site is photographed and learning images are acquired; by performing the learning process with the acquired learning images, a classifier for distinguishing the tracking objects from the background is formed.

Fig. 7 shows a schematic flowchart of the classifier generation step by learning. As shown in Fig. 7, first the construction site is photographed in real time by the image capturing unit (11) (step 1-1: real-time collection of learning images containing the tracking objects).

For each image frame of the captured learning image, the manager designates the region containing the tracking object (the tracking-object region) (step 1-2), and the learning data set generating unit (13) generates a learning data set by separating the images of the tracking-object regions from those of the non-tracking-object regions (step 1-3).

For the generated learning data set, the first descriptor generating unit (14) calculates HOG and LBP (step 1-4). In other words, the HOG and LBP of the workers and heavy equipment are extracted by the first descriptor generating unit (14) from the learning images of the construction site presented as the learning data set.

Subsequently, the classifier generating unit (15) finds the common features of the HOG and LBP of the workers and heavy equipment extracted by the first descriptor generating unit (14), and generates a classifier that judges the objects of a photographed image according to the resulting criterion (step 1-5).

When classifier generation is completed through this learning process, real-time images of the actual workers and heavy equipment are acquired, and the step of recognizing the workers and heavy equipment, i.e., the tracking targets, in the acquired images is performed (step 2). FIG. 8 shows a schematic flowchart of the recognition step. As shown in the figure, the image capturing unit 11 photographs, in real time, the construction site in which the workers and heavy equipment are to be tracked, thereby collecting tracking images that contain the tracking targets (step 2-1).

By the operation of the recognition module 2, extracted images are generated from the tracking images, HOG and LBP descriptors are calculated for each extracted image, and the calculated descriptors are judged by the classifier to recognize the workers and heavy equipment (step 2-2: recognition of tracking targets by judgment and classification using the classifier).
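The variable-size region extraction performed by the recognition module can be sketched as a multi-scale sliding window; the window sizes and stride below are illustrative assumptions, not values from the patent:

```python
import numpy as np

def sliding_windows(frame, sizes=(64, 96, 128), stride=16):
    """Yield (x, y, crop) for windows of several sizes, mirroring the
    variable-size region extraction of the recognition module."""
    h, w = frame.shape[:2]
    for s in sizes:
        for y in range(0, h - s + 1, stride):
            for x in range(0, w - s + 1, stride):
                yield x, y, frame[y:y + s, x:x + s]

frame = np.zeros((128, 128))
crops = list(sliding_windows(frame))
# each crop would then be described (HOG/LBP) and judged by the classifier
```

Scanning several window sizes lets the same fixed-size descriptor pipeline detect targets that appear at different scales in the frame.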

In the tracking step (step 3), the recognized workers and heavy equipment are continuously tracked by the tracking module 3 in the captured tracking images, their position coordinates are generated, and the coordinates are provided to the manager.

Specifically, FIG. 9 shows a schematic flowchart of the tracking step. As shown in the figure, a similarity calculation step is first performed in which the similarity calculation unit 31 calculates the similarity between the tracking targets recognized in successive frames of the tracking images captured by the image capturing unit 11 (step 3-1). Subsequently, by the operation of the object tracking unit 32, when the similarity calculated by the similarity calculation unit 31 is equal to or greater than a preset similarity reference value, the objects recognized in the successive frames are regarded as the same object, and the real-time position coordinates of that object are taken as the position information of the tracking target (step 3-2). Finally, the position information transmitted from the object tracking unit 32 is provided to the manager and stored (step 3-3: provision and storage of tracking data).
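Steps 3-1 and 3-2 above (similarity defined as the reciprocal of the inter-frame distance, and identity assignment above a reference value) can be sketched as follows; the greedy nearest-match strategy and the threshold value are our own simplifying assumptions:

```python
import numpy as np

def similarity(p, q, eps=1e-6):
    """Similarity of two detections = reciprocal of the distance between
    their position coordinates (step 3-1)."""
    return 1.0 / (np.linalg.norm(np.asarray(p, float) - np.asarray(q, float)) + eps)

def link_detections(prev_frame, curr_frame, sim_threshold=0.1):
    """Match each current detection to its most similar previous detection;
    matches above the reference value are treated as the same object (step 3-2)."""
    links = []
    for i, c in enumerate(curr_frame):
        sims = [similarity(p, c) for p in prev_frame]
        j = int(np.argmax(sims))
        if sims[j] >= sim_threshold:
            links.append((j, i))        # previous index j -> current index i
    return links

prev_frame = [(0.0, 0.0), (100.0, 100.0)]   # recognized positions, frame t-1
curr_frame = [(2.0, 0.0), (98.0, 101.0)]    # recognized positions, frame t
links = link_detections(prev_frame, curr_frame)
```

Each retained link extends an existing track, so the matched object's real-time coordinates become the position information reported for that tracking target.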

According to the recognition-tracking system and method of the present invention, real-time images of the same construction site are used as learning images to construct a classifier before the actual tracking operation, and the heavy equipment and workers are then tracked in real-time images of the construction site at the desired time using the classifier thus constructed. Therefore, the detection rate of the position coordinates of the tracking targets is greatly improved compared with the prior art, and the tracking accuracy is also greatly improved. In particular, since the positions of the heavy equipment and workers can be accurately tracked in real time at the construction site, safety and productivity at the site can be improved.

Further, according to the recognition-tracking system and method of the present invention, even at large and complex earthwork sites, work patterns, idling, waiting times, redundant operations, and the like can be grasped through precise monitoring of the heavy equipment, so that productivity can be improved by efficiently guiding the cooperative work of multiple pieces of heavy equipment.

Further, according to the recognition-tracking system and method of the present invention, collision-accident warnings between heavy equipment in operation and workers, as well as erroneous-operation notifications, can be provided in real time on the construction site.

1: Learning module
2: Recognition module
3: Tracking module
11: Image capturing unit
12: Image storage unit
13: Learning data set generation unit
14: First descriptor generation unit
15: Classifier generation unit
21: Image input unit
22: Second descriptor generation unit
23: Object recognition unit
24: Recognition data storage unit
31: Similarity calculation unit
32: Object tracking unit
33: Tracking data storage unit

Claims (9)

a learning module (1) for collecting learning images by directly photographing the construction site in real time, and for generating, based on the collected images, a classifier for recognizing workers and heavy equipment against the background;
a recognition module (2) for recognizing workers and heavy equipment in tracking images obtained by photographing the actual construction site in real time; and
a tracking module (3) for recognizing the real-time positions of the recognized tracking targets and forming position information on the objects to be tracked;
The learning module (1)
an image capturing unit (11) for capturing learning images and tracking images by photographing the construction site in real time;
an image storage unit (12) for storing the images acquired by the image capturing unit 11 and constructing a DB from them;
a learning data set generation unit (13) for generating, according to the manager's designation, a learning data set consisting of a positive sample, which is a collection of images obtained by extracting only the regions containing the tracking targets from the learning images stored as a DB in the image storage unit 12, and a negative sample, which is a collection of images of the regions excluding the tracking targets;
a first descriptor generation unit (14) for forming, for each image of the learning data set, a descriptor for discriminating and recognizing only the tracking target; and
a classifier generation unit (15) for finding the common ground of the tracking-target descriptors calculated from the learning data set, setting a judgment criterion capable of distinguishing the tracking targets from the background, and generating a classifier that judges, according to the set criterion, whether an object in a captured image is a tracking target;
The descriptor includes a histogram of oriented gradients (HOG), in which the direction vectors of the points on the outline of the tracking target in the captured image are represented by a representative vector for the object, and a local binary pattern (LBP), expressed as a vector constituted by histograms of the brightness differences between each pixel of the captured image and its neighboring pixels;
The classifier generated by the classifier generation unit 15 judges that an object in the captured image is a tracking target when the coordinate range of the object's descriptor, mathematically represented as a vector, falls within a predetermined range, and judges the object to be mere background when it falls outside the set range;
The recognition module (2)
An image input unit 21 for receiving a tracking image of a construction site that is photographed and transmitted in real time by the image capturing unit 11;
a second descriptor generation unit (22) for calculating descriptors for the regions extracted from all image frames of the tracking images received through the image input unit 21, the descriptors being calculated while the size of the extracted region is varied;
an object recognition unit (23) that uses the classifier generated by the classifier generation unit 15 to determine, on the basis of the descriptors transmitted from the second descriptor generation unit 22, whether each object corresponds to a tracking target, and that forms data on the type, position coordinates, size, and descriptor of the recognized object; and
a recognition data storage unit (24) for receiving the type, position coordinates, size, and descriptor data of the tracking targets from the object recognition unit 23 and storing the data to form a DB;
The tracking module (3)
a similarity calculation unit (31) for calculating the distance between the position coordinates of the tracking targets recognized in successive image frames of the tracking images captured by the image capturing unit 11, and for calculating the similarity between the tracking targets by taking the reciprocal of the calculated distance as the similarity;
an object tracking unit (32) that, when the similarity calculated by the similarity calculation unit 31 is equal to or greater than a preset similarity reference value, regards the objects recognized in successive frames of the captured images as the same object and transmits the real-time position coordinates of that object as the position information of the tracking target; and
a tracking data storage unit (33) for providing the position information of the tracking targets transmitted from the object tracking unit 32 to the manager, and for storing it and constructing a DB;
A system for recognizing and tracking workers and heavy equipment in a construction site in real time and forming position information thereon.
delete delete delete delete A classifier generation step (step 1) in which, as a preliminary preparation, a construction site is photographed to acquire learning images, a learning process is performed using the acquired images to form a learning data set for the construction site, and a classifier that distinguishes tracking targets from the background is formed using this data set;
a recognition step (step 2) of recognizing the tracking targets in real-time images of the actual construction site; and
a tracking step (step 3) of tracking the recognized tracking targets;
The classifier generation step (step 1)
A step (1-1) of collecting a learning image including a tracking object by photographing the construction site in real time by the image capturing unit (11);
The manager designates the tracking target area including the tracking target in the video for each video frame of the training video (step 1-2);
a step (1-3) of generating a learning data set consisting of a positive sample, which is a collection of images obtained by extracting only the regions containing the tracking targets, and a negative sample, which is a collection of images of the regions excluding the tracking targets, by separating the images of the tracking-target areas from the images of the non-target areas;
a step (1-4) of forming, for each image of the learning data set, a descriptor for discriminating and recognizing only the tracking target, the descriptor including a histogram of oriented gradients (HOG), in which the direction vectors of the points on the contour of the tracking target in the captured image are represented by a representative vector for the object, and a local binary pattern (LBP), expressed as a vector constituted by histograms of the brightness differences between each pixel of the captured image and its neighboring pixels; and
a classifier generation step (step 1-5) of finding the common ground of the tracking-target descriptors calculated from the learning data set, setting a judgment criterion capable of distinguishing the tracking targets from the background, and generating a classifier that judges, according to the set criterion, whether an object in a captured image is a tracking target;
The classifier generated in the classifier generation step (step 1-5) judges that an object in the captured image is a tracking target when the coordinate range of the object's descriptor, mathematically represented as a vector, falls within a predetermined range, and judges the object to be mere background when it falls outside the set range;
The recognition step of the tracking object (step 2)
an image collection step (step 2-1) of photographing, in real time, the construction site in which the tracking targets are to be tracked, thereby collecting tracking images; and
a step (2-2) of recognizing the tracking targets by calculating descriptors for the regions extracted from all frames of the tracking images, the descriptors being calculated while the size of the extracted region is varied, and by determining with the classifier whether each extracted region corresponds to a tracking target;
The tracking step of the recognized tracked object (step 3)
a similarity calculation step (step 3-1) of calculating the distance between the position coordinates of the tracking targets recognized in successive image frames of the tracking images, and calculating the similarity between the tracking targets by taking the reciprocal of the calculated distance as the similarity;
a tracking-target tracking step (step 3-2) in which, when the calculated similarity is equal to or greater than a preset similarity reference value, the objects recognized in successive frames of the captured images are regarded as the same object, and the real-time position coordinates of that object are taken as the position information of the tracking target; and
a step (step 3-3) of providing the real-time position coordinates of the object to the manager as the position information of the tracking target and storing them;
A method for recognizing and tracking workers and heavy equipment in a construction site, characterized by recognizing and tracking the workers and heavy equipment in real time and forming position information thereon.
delete delete delete
KR1020150034767A 2015-03-13 2015-03-13 Method and System for Recognition/Tracking Construction Equipment and Workers Using Construction-Site-Customized Image Processing KR101788225B1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
KR1020150034767A KR101788225B1 (en) 2015-03-13 2015-03-13 Method and System for Recognition/Tracking Construction Equipment and Workers Using Construction-Site-Customized Image Processing


Publications (2)

Publication Number Publication Date
KR20160109761A KR20160109761A (en) 2016-09-21
KR101788225B1 true KR101788225B1 (en) 2017-10-19

Family

ID=57079930

Family Applications (1)

Application Number Title Priority Date Filing Date
KR1020150034767A KR101788225B1 (en) 2015-03-13 2015-03-13 Method and System for Recognition/Tracking Construction Equipment and Workers Using Construction-Site-Customized Image Processing

Country Status (1)

Country Link
KR (1) KR101788225B1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101972511B1 (en) 2017-11-23 2019-04-25 헤넷시스 주식회사 Communication method of master node and slave node
KR20190111585A (en) 2018-03-23 2019-10-02 연세대학교 산학협력단 Situational recognition system for construction site based vision and method, and method for productivity analysis of earthwork using it

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101967049B1 (en) * 2017-06-28 2019-04-09 경성대학교 산학협력단 Apparatus for providing object information for heavy machinery using lidar and camera
KR102063184B1 (en) 2017-12-29 2020-01-07 부산대학교산학협력단 Earthwork construction voice information providing system for construction equipment guidance, and method for the same
KR102089013B1 (en) * 2019-08-22 2020-03-13 최광수 System and method for providing supply chain management based heavy equipment management service assigning optimized resource at construction site
KR102315371B1 (en) * 2019-12-30 2021-10-21 한국남부발전 주식회사 Smart cctv control and warning system
KR102301425B1 (en) * 2020-08-14 2021-09-14 방병주 heavy equipment control system through selective sensing of objects
KR102301426B1 (en) * 2020-09-07 2021-09-14 방병주 heavy equipment control system through analysis of object action patterns
KR102244978B1 (en) * 2020-12-23 2021-04-28 주식회사 케이씨씨건설 Method, apparatus and computer program for training artificial intelligence model that judges danger in work site
CN116630752B (en) * 2023-07-25 2023-11-17 广东南方电信规划咨询设计院有限公司 Construction site target object identification method and device based on AI algorithm

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101360349B1 (en) * 2013-10-18 2014-02-24 브이씨에이 테크놀러지 엘티디 Method and apparatus for object tracking based on feature of object

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101425170B1 (en) 2010-11-16 2014-08-04 한국전자통신연구원 Object tracking apparatus and method of camera and secret management system





Legal Events

Date Code Title Description
A201 Request for examination
E902 Notification of reason for refusal
E701 Decision to grant or registration of patent right