KR101958270B1 - Intelligent Image Analysis System using Image Separation Image Tracking - Google Patents

Intelligent Image Analysis System using Image Separation Image Tracking

Info

Publication number
KR101958270B1
Authority
KR
South Korea
Prior art keywords
data
image
tracking
unit
input image
Prior art date
Application number
KR1020150171562A
Other languages
Korean (ko)
Other versions
KR20170065301A (en)
Inventor
최명길
Original Assignee
홍진수
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 홍진수
Priority to KR1020150171562A
Publication of KR20170065301A
Application granted
Publication of KR101958270B1

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/18Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/18Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N7/183Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a single remote source

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The present invention relates to an intelligent image analysis system using image segmentation image tracking. The system receives a captured image, analyzes it, and three-dimensionally analyzes and tracks the objects present in the image.

Description

TECHNICAL FIELD The present invention relates to an intelligent image analysis system using image separation image tracking.

[0001] The present invention relates to an intelligent image analysis system using image segmentation image tracking, and more particularly, to an intelligent image analysis system using image segmentation image tracking that receives a captured image, analyzes it, and three-dimensionally analyzes and tracks the objects existing in the image.

The present invention relates to an intelligent image analysis system using image segmentation image tracking.

Generally, CCTV cameras are installed in many places in everyday life and provide great benefits in making life safer.

In particular, the spread of CCTV has been accelerated by the development of the Internet and IT infrastructure, and CCTV is demonstrating its value not only in preventing everyday crime but also in crime prevention at homes, schools, kindergartens, hospitals, and other facilities.

In recent years, it has become an important means of responding to terrorism that is taking place all over the world.

However, in spite of this usefulness, real-time monitoring is limited.

There is a problem in that not only is a human operator required to monitor the CCTV, but the operator's concentration also declines over time, so that situations in which an event such as an accident or a crime occurs cannot be detected in real time.

SUMMARY OF THE INVENTION The present invention has been made in an effort to solve the problems described above, and an object of the present invention is to provide an intelligent image analysis system using image segmentation image tracking that receives a captured image, analyzes it, and three-dimensionally analyzes and tracks the objects existing in the image.

Another object of the present invention is to provide an intelligent image analysis system using image segmentation image tracking that tracks an object existing in an image by three-dimensionally analyzing the received image and tracking the object in real time.

According to an aspect of the present invention, there is provided an image processing apparatus including an image input unit for receiving input image data and then pre-processing the image data by applying a Gaussian filter;
A background data detector for detecting the input image data received by the image input unit as background data when the image is divided into parts and the velocity vector in each part is smaller than a predetermined threshold value;
An object data detector for calculating the velocity vector at each position of each part of the input image data received by the image input unit, and for generating object data when the magnitude of the calculated velocity vector is greater than the preset threshold value;
A locus data generation unit for detecting the object data generated by the object data detector in the two-dimensional input image data, calculating the object size, moving distance, and moving direction, and generating three-dimensional object locus data; And
An object tracking module for tracking an object through object trajectory data generated by the trajectory data generator;
And a monitoring unit for comparing the object locus data generated by the locus data generator with the preset protection zone data.

In addition, the object data detecting unit excludes background data from the input image data and detects the object.

In addition, the locus data generation unit includes a noise removal module that removes noise of object data using an image filter when analyzing object data.

The locus data generation unit generates new object data when the moving distance of the object is greater than a predetermined threshold value.

In addition, the locus data generation unit analyzes the optical flow of the object to generate object locus data.

As described above, according to the present invention, it is possible to provide an intelligent image analysis system using image segmentation image tracking in which a captured image is received and analyzed, and the objects existing in the image are analyzed in three dimensions and tracked in real time.

In addition, according to the present invention, it is possible to provide an intelligent image analysis system using image segmentation image tracking that tracks an object existing in an image by analyzing the received image, analyzing the object in three dimensions, and tracking it in real time.

BRIEF DESCRIPTION OF THE DRAWINGS FIG. 1 is an exemplary diagram illustrating a three-dimensional analysis of a two-dimensional image according to an embodiment of the present invention.
FIG. 2 is an exemplary diagram illustrating the analysis of an object by analyzing the optical flow of image data according to an embodiment of the present invention.

The advantages and features of the present invention and the manner of achieving them will become apparent with reference to the embodiments described in detail below in conjunction with the accompanying drawings.

The present invention may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete and will fully convey the scope of the invention to those skilled in the art; the invention is defined only by the scope of the claims. Like reference numerals refer to like elements throughout the specification.

Hereinafter, an intelligent image analysis system using image segmentation image tracking according to embodiments of the present invention will be described with reference to the drawings.

An intelligent image analysis system using image segmentation image tracking according to the present invention includes an image input unit for receiving input image data; a background data detection unit for detecting background data in the input image data received by the image input unit; an object data detection unit for detecting object data in the input image data; a locus data generation unit for detecting the object data generated by the object data detection unit in the two-dimensional input image data, calculating the size, moving distance, and moving direction of each object, and generating three-dimensional object locus data; an object tracking module for tracking an object through the object locus data generated by the locus data generation unit; and a monitoring unit for comparing the object locus data generated by the locus data generation unit with preset protection zone data.
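
As a rough illustration only, the following Python sketch shows one way the units described above could be connected in code; the class name, method names, and return values are assumptions made for exposition and are not taken from the patent.

class IntelligentImageAnalyzer:
    """Chains the described units: image input -> background data detection
    -> object data detection -> locus (trajectory) data generation
    -> object tracking -> protection-zone monitoring."""

    def __init__(self, background_detector, object_detector,
                 locus_generator, tracker, monitor):
        self.background_detector = background_detector
        self.object_detector = object_detector
        self.locus_generator = locus_generator
        self.tracker = tracker
        self.monitor = monitor

    def process_frame(self, frame):
        background = self.background_detector.update(frame)       # background data
        objects = self.object_detector.detect(frame, background)  # object data
        loci = self.locus_generator.update(objects)                # 3-D object locus data
        tracks = self.tracker.update(loci)                         # object tracking
        alerts = self.monitor.check(loci)                          # compare with protection zones
        return tracks, alerts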

In the intelligent image analysis system using the image segmentation image tracking according to the present invention, the object data detection unit excludes background data from the input image data and detects an object.

In the intelligent image analysis system using image segmentation image tracking according to the present invention, the trajectory data generator includes a noise elimination module for removing noise of object data using an image filter when analyzing object data.

In the intelligent image analysis system using image segmentation image tracking according to the present invention, the trajectory data generator generates new object data when the moving distance of the object is greater than a preset threshold value.

In the intelligent image analysis system using the image segmentation image tracking according to the present invention, the locus data generation unit analyzes object optical flow to generate object locus data.

FIG. 1 is an exemplary diagram illustrating a three-dimensional analysis of a two-dimensional image according to an exemplary embodiment of the present invention, and FIG. 2 is an exemplary diagram illustrating the analysis of an object by analyzing the optical flow of image data according to an exemplary embodiment of the present invention.

Methods of detecting moving objects include image tracking in three-dimensional space and the background difference (background subtraction) method.

When a moving object is detected with either method alone, various defects arise, such as detection errors, problems caused by the movement of the object, and difficulty in determining the direction of movement.

The present invention therefore detects motion regions by applying the image tracking (optical flow) method and the background subtraction method in combination.

By applying these two methods together, objects can be extracted adaptively, with low sensitivity to changes in illumination and regardless of whether their movement is fast or slow.

In the preprocessing process, a Gaussian filter is applied to obtain a clean image.

The moving-object region is then detected by applying the proposed method to the noise-reduced image.

For background modeling, the background is modeled and updated with the Mixture of Gaussians (MOG) method so that it adapts to background changes in real time.
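
As a hedged illustration of these two steps (Gaussian preprocessing and MOG background modeling), the sketch below uses OpenCV's readily available MOG2 background subtractor; the video file name, kernel size, and subtractor parameters are assumptions.

import cv2

cap = cv2.VideoCapture("input.mp4")   # assumed input source
mog = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16,
                                         detectShadows=False)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Gaussian filter in the preprocessing step to obtain a cleaner image
    blurred = cv2.GaussianBlur(frame, (5, 5), 0)
    # The MOG background model is updated with every frame, so it adapts
    # to background changes in (near) real time
    fg_mask = mog.apply(blurred)
    cv2.imshow("foreground mask", fg_mask)
    if cv2.waitKey(1) == 27:          # press Esc to stop
        break

cap.release()
cv2.destroyAllWindows()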

The system also introduces an optical flow technique that enables fast detection of moving objects and, at the same time, determines their direction of movement.

Calculation of the Optical Flow Field (Velocity Field)

Optical flow is defined as the apparent motion of the image brightness pattern.

Let I(x, y, t) denote the brightness of the pixel at position (x, y) in the image frame at time t. The following two assumptions are made.

1. The brightness I(x, y, t) is smooth with respect to x, y, and t within large parts of the image (it is continuously differentiable).

2. The brightness of each point of a moving or stationary object does not change over time (it is constant with respect to time). If any point of an object located at position (x, y) at time t moves to position (x + Δx, y + Δy) at time t + Δt, its brightness therefore satisfies

I(x + Δx, y + Δy, t + Δt) = I(x, y, t).

The Taylor expansion of the left-hand side gives

I(x + Δx, y + Δy, t + Δt) = I(x, y, t) + (∂I/∂x)Δx + (∂I/∂y)Δy + (∂I/∂t)Δt + (higher-order terms).

Dividing by Δt, neglecting the higher-order terms, and defining u = Δx/Δt and v = Δy/Δt, the following Equation (3), called the optical flow constraint equation, is obtained:

I_x·u + I_y·v + I_t = 0     (3)

Here u and v are the x and y components of the optical flow field (velocity field), respectively, and I_x, I_y, and I_t are the partial derivatives of the brightness with respect to x, y, and t.

In general, Equation (3) is a single equation in the two unknowns u and v, so it has more than one solution; additional conditions are needed to solve it.

In this development, the well-known Lucas & Kanade method is applied to calculate the optical flow field.

The Lucas & Kanade method divides the image into small parts and assumes that the velocity vector (optical flow vector) within each part is constant.

The solution of the optical flow constraint equation then results in the solution of the following 2x2 linear system by applying the least squares method.

(Σ W²I_x²)·u + (Σ W²I_xI_y)·v = −Σ W²I_xI_t
(Σ W²I_xI_y)·u + (Σ W²I_y²)·v = −Σ W²I_yI_t     (5)

In Equation (5), W = W(x, y) is a Gaussian window (weighting) function, and the sums run over the pixels of the window.

The Gaussian window can be represented as the combination of two separable one-dimensional binomial kernels.
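
As a minimal sketch of the blockwise least-squares estimate in Equation (5), the NumPy code below solves the 2×2 system for a single window; the window size, Gaussian sigma, and derivative approximations are assumptions, and pyramids and border handling are omitted.

import numpy as np

def gaussian_window(size, sigma):
    ax = np.arange(size) - (size - 1) / 2.0
    g = np.exp(-(ax ** 2) / (2.0 * sigma ** 2))
    w = np.outer(g, g)                      # separable 2-D window
    return w / w.sum()

def lucas_kanade_block(prev, curr, y, x, size=15, sigma=2.0):
    """Estimate the velocity (u, v) of the window centered at (y, x).
    prev and curr are consecutive grayscale frames; the window must lie
    entirely inside the image."""
    h = size // 2
    p = prev[y - h:y + h + 1, x - h:x + h + 1].astype(np.float64)
    c = curr[y - h:y + h + 1, x - h:x + h + 1].astype(np.float64)
    Ix = np.gradient(p, axis=1)             # spatial derivative in x
    Iy = np.gradient(p, axis=0)             # spatial derivative in y
    It = c - p                              # temporal derivative
    W2 = gaussian_window(size, sigma) ** 2  # squared Gaussian weights
    A = np.array([[np.sum(W2 * Ix * Ix), np.sum(W2 * Ix * Iy)],
                  [np.sum(W2 * Ix * Iy), np.sum(W2 * Iy * Iy)]])
    b = -np.array([np.sum(W2 * Ix * It), np.sum(W2 * Iy * It)])
    u, v = np.linalg.lstsq(A, b, rcond=None)[0]
    return u, v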

Detection of the foreground / background using the Lucas & Kanade method

The velocity vector (u, v) at each position (x, y) of the image is calculated, and the moving object in the image is then detected.

The detection of the moving object is carried out from the optical flow field (velocity field): the magnitude of the velocity vector, |V| = √(u² + v²), is computed at each position to obtain a scalar field.

Next, a threshold value is adaptively derived from the average velocity magnitudes of the previous two frames and the current frame, and a binary matrix is then obtained from the velocity-magnitude scalar field.

Dilation and erosion are applied to the resulting binary matrix, followed by filtering, to obtain the foreground regions of the moving objects.

Finally, the moving velocity of each object is calculated from the velocity vectors within its foreground region.

As a result, the foreground regions and the moving velocity vectors of the moving objects are obtained.
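
The following sketch illustrates these foreground-extraction steps (adaptive threshold from recent average speeds, binarization, dilation, and erosion); the scaling factor k and the kernel size are assumptions.

import cv2
import numpy as np

def extract_foreground(flow, prev_mean_mags, k=1.5, kernel_size=5):
    """flow: HxWx2 array of (u, v) vectors; prev_mean_mags: mean speeds of
    the previous two frames."""
    mag = np.sqrt(flow[..., 0] ** 2 + flow[..., 1] ** 2)          # scalar speed field
    threshold = k * np.mean(list(prev_mean_mags) + [mag.mean()])  # adaptive threshold
    binary = (mag > threshold).astype(np.uint8) * 255             # two-valued matrix
    kernel = np.ones((kernel_size, kernel_size), np.uint8)
    binary = cv2.dilate(binary, kernel, iterations=1)             # close small gaps
    binary = cv2.erode(binary, kernel, iterations=1)              # remove speckle noise
    return binary, mag                                            # foreground mask and speeds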

As a result of the above, the motion target can be detected.

After the motion objects are detected, their motion is tracked.

For tracking, the sets of motion objects detected in the i-th frame and the (i+1)-th frame are expressed by Equations (6) and (7), respectively:

O_i = {o_i^1, o_i^2, …, o_i^{N_i}}     (6)

O_{i+1} = {o_{i+1}^1, o_{i+1}^2, …, o_{i+1}^{N_{i+1}}}     (7)

Here N_i and N_{i+1} are the numbers of motion objects detected in the i-th and (i+1)-th frames, respectively.

In addition, the velocity vector of each motion object in the i-th and (i+1)-th frames is expressed by Equations (8) and (9), respectively.

The problem is to determine, for each object of the (i+1)-th frame, whether it is a target already present in the i-th frame or a newly appearing object.

In the present invention, the positional relationship, the size, and the moving velocity vector of the objects are used for this determination.

This is based on the observation that, in a real situation, the moving direction of a moving object does not change suddenly.

There is also a certain limit to the movement speed of motion objects; in other words, the distance moved by an object between two neighboring frames cannot be larger than a certain value.

It is further assumed that the variation of the foreground area of the object being tracked is not very large.

In practice, each object of the (i+1)-th frame is matched against the targets of the i-th frame in the following sequence. First, the target of the i-th frame that is closest to the given object is found, together with the distance between them.

If this distance is greater than a predetermined threshold value, the object is treated as a newly appearing object and the comparison for it ends.

Otherwise, the foreground area of the object is compared with that of the closest target determined above; if the difference between the two areas is larger than a threshold value, the object is likewise treated as a newly appearing object and the comparison ends.

Finally, the moving direction of the object is compared with that of the target determined above. For this comparison, the angle between the velocity vectors of the two objects is used:

cos θ = (V_i · V_{i+1}) / (|V_i| |V_{i+1}|)

If this angle is greater than a threshold value, the object is treated as a newly appearing object; otherwise, the object of the (i+1)-th frame is regarded as the continuation of the corresponding target of the i-th frame.
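
The association test described above can be sketched as follows; the dictionary layout of an object (center, area, velocity) and all threshold values are assumptions made for illustration.

import math

def associate(obj, prev_targets, dist_thr=50.0, area_thr=400.0,
              angle_thr=math.pi / 3):
    """Return the matching target of the previous frame, or None if the
    object should be treated as a newly appearing object."""
    if not prev_targets:
        return None

    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    target = min(prev_targets, key=lambda t: dist(obj["center"], t["center"]))
    if dist(obj["center"], target["center"]) > dist_thr:
        return None                        # too far away: new object
    if abs(obj["area"] - target["area"]) > area_thr:
        return None                        # foreground area changed too much
    (u1, v1), (u2, v2) = obj["velocity"], target["velocity"]
    n1, n2 = math.hypot(u1, v1), math.hypot(u2, v2)
    if n1 > 0 and n2 > 0:
        cos_a = max(-1.0, min(1.0, (u1 * u2 + v1 * v2) / (n1 * n2)))
        if math.acos(cos_a) > angle_thr:
            return None                    # moving direction changed too much
    return target                          # continuation of this target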

Determination method of tracking object considering moving direction and moving distance.

First of all, the directionality and the moving distance are examined only for those entries in the temporary target list whose history count exceeds the predetermined threshold value. If the condition is satisfied, the entry is confirmed as a target of tracking.

The average moving distance, the average moving distance in the Y-axis direction, and the average moving-direction angle deviation are obtained as follows.

From the history information of the temporary object, the total sum of the distances between the first point and each subsequent point, the sum of the moving distances in the Y-axis direction, and the sum of the angular deviations are obtained.

Next, these sums are divided by the number of history entries considered to obtain the respective averages; the ordinary Euclidean distance is used as the moving distance.

That is, Equation (13) is obtained:

D_avg = (1/n) · Σ_k √((x_k − x_1)² + (y_k − y_1)²)     (13)

where (x_1, y_1) is the first point of the history and (x_k, y_k) are the subsequent points.

The moving distance in the Y-axis direction is obtained in the same way, as in Equation (14):

D_y = (1/n) · Σ_k (y_k − y_1)     (14)

Note that in Equation (14) absolute values are not used, because they would not reflect the actual movement.

The directional angle deviation is expressed as the angle between two vectors, and is obtained by Equation (15):

θ_k = arccos( (v_e · v_k) / (|v_e| |v_k|) )     (15)

In Equation (15), v_e is the vector between the first and last points of the history, and v_k is the vector between the first point and the k-th subsequent point; these angle deviations are summed and averaged as described above.

Mobility test with three features.

First, a block that is not moving in the Y-axis direction is judged to be moving when Equation (16) is satisfied.

In addition, a block moving in the Y-axis direction is regarded as a moving object even though its moving distance is small (it must still be larger than a certain threshold value), provided that its history count has increased significantly.

That is, Equation (17) is obtained.

In the case of the present invention, the above four threshold values are determined dynamically according to the number of temporary target history entries and the size of the input image.
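
A sketch of the three history features and of one possible mobility decision is given below; since the exact forms of Equations (16) and (17) are not reproducible from the text, the way the thresholds are combined, as well as their values, are assumptions.

import math

def history_features(points):
    """points: (x, y) positions of a temporary target, oldest first (>= 2 points)."""
    x0, y0 = points[0]
    xe, ye = points[-1]
    n = len(points) - 1
    dist_sum = y_sum = angle_sum = 0.0
    for xk, yk in points[1:]:
        dist_sum += math.hypot(xk - x0, yk - y0)          # Eq. (13): Euclidean distance
        y_sum += yk - y0                                  # Eq. (14): signed Y displacement
        dot = (xe - x0) * (xk - x0) + (ye - y0) * (yk - y0)
        norm = math.hypot(xe - x0, ye - y0) * math.hypot(xk - x0, yk - y0)
        if norm > 0:                                      # Eq. (15): angle deviation
            angle_sum += math.acos(max(-1.0, min(1.0, dot / norm)))
    return dist_sum / n, y_sum / n, angle_sum / n

def is_moving(points, d_thr=10.0, y_thr=5.0, a_thr=0.5, hist_thr=20):
    avg_d, avg_y, avg_a = history_features(points)
    if avg_d > d_thr and avg_a < a_thr:                   # general motion test
        return True
    # relaxed test for Y-axis motion backed by a long enough history
    return abs(avg_y) > y_thr and len(points) > hist_thr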

By separating the input image into foreground, background, and difference components frame by frame, and by dynamically discriminating the differences between frames, the objects in the image can be tracked.

Claims (5)

An image input unit for receiving input image data and then pre-processing the image data by applying a Gaussian filter;
A background data detector for detecting the input image data received by the image input unit as background data when the image is divided into parts and the velocity vector in each part is smaller than a predetermined threshold value;
An object data detector for calculating the velocity vector at each position of each part of the input image data received by the image input unit, and for generating object data when the magnitude of the calculated velocity vector is greater than the preset threshold value;
A locus data generation unit for detecting the object data generated by the object data detector in the two-dimensional input image data, calculating the object size, moving distance, and moving direction, and generating three-dimensional object locus data; And
An object tracking module for tracking an object through object trajectory data generated by the trajectory data generator;
And a monitoring unit for comparing the object locus data generated by the locus data generation unit with the protection zone data, which represents a predetermined protection zone.
◈ Claim 2 was abandoned upon payment of the registration fee.
2. The apparatus of claim 1, wherein the object data detector detects an object by excluding background data from the input image data.
◈ Claim 3 was abandoned upon payment of the registration fee.
3. The apparatus according to claim 1, wherein the locus data generation unit includes a noise removal module that removes noise from the object data using an image filter when analyzing the object data.
◈ Claim 4 was abandoned upon payment of the registration fee.
4. The apparatus according to claim 1, wherein the locus data generation unit generates new object data when the moving distance of the object is larger than a preset threshold value.
◈ Claim 5 was abandoned upon payment of the registration fee.
5. The apparatus according to claim 1, wherein the locus data generation unit analyzes the optical flow of the object to generate object trajectory data.

KR1020150171562A 2015-12-03 2015-12-03 Intelligent Image Analysis System using Image Separation Image Tracking KR101958270B1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
KR1020150171562A KR101958270B1 (en) 2015-12-03 2015-12-03 Intelligent Image Analysis System using Image Separation Image Tracking

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
KR1020150171562A KR101958270B1 (en) 2015-12-03 2015-12-03 Intelligent Image Analysis System using Image Separation Image Tracking

Publications (2)

Publication Number Publication Date
KR20170065301A KR20170065301A (en) 2017-06-13
KR101958270B1 true KR101958270B1 (en) 2019-03-14

Family

ID=59218982

Family Applications (1)

Application Number Title Priority Date Filing Date
KR1020150171562A KR101958270B1 (en) 2015-12-03 2015-12-03 Intelligent Image Analysis System using Image Separation Image Tracking

Country Status (1)

Country Link
KR (1) KR101958270B1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101878617B1 (en) * 2017-12-19 2018-07-13 부산대학교 산학협력단 Method and system for processing trajectory data

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100738522B1 (en) * 2004-12-21 2007-07-11 삼성전자주식회사 Apparatus and method for distinction between camera movement and object movement and extracting object in video surveillance system
KR100879266B1 (en) * 2008-04-25 2009-01-16 이상석 Object tracing and intrusion sensing system

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101048045B1 (en) * 2009-05-07 2011-07-13 윈스로드(주) Obstacle Image Detection Device and Its Control Method in Dangerous Area of Railroad Crossing Using Moving Trajectory of Object
KR101467352B1 (en) * 2013-04-10 2014-12-11 주식회사 휴먼시스템 location based integrated control system


Also Published As

Publication number Publication date
KR20170065301A (en) 2017-06-13

Similar Documents

Publication Publication Date Title
US10452931B2 (en) Processing method for distinguishing a three dimensional object from a two dimensional object using a vehicular system
US11195038B2 (en) Device and a method for extracting dynamic information on a scene using a convolutional neural network
Ahmed et al. A robust features-based person tracker for overhead views in industrial environment
Son et al. Integrated worker detection and tracking for the safe operation of construction machinery
CA2692424C (en) System and process for detecting, tracking and counting human objects of interest
Nieto et al. Road environment modeling using robust perspective analysis and recursive Bayesian segmentation
Kim et al. Real-time vision-based people counting system for the security door
CN110264495B (en) Target tracking method and device
Shafie et al. Motion detection techniques using optical flow
WO2014092552A2 (en) Method for non-static foreground feature extraction and classification
CN108596157B (en) Crowd disturbance scene detection method and system based on motion detection
KR101681104B1 (en) A multiple object tracking method with partial occlusion handling using salient feature points
Meshram et al. Traffic surveillance by counting and classification of vehicles from video using image processing
Uribe et al. Video based system for railroad collision warning
Makino et al. Moving-object detection method for moving cameras by merging background subtraction and optical flow methods
Dhulavvagol et al. Vehical tracking and speed estimation of moving vehicles for traffic surveillance applications
CN102013007A (en) Apparatus and method for detecting face
KR101958270B1 (en) Intelligent Image Analysis System using Image Separation Image Tracking
CN103093481A (en) Moving object detection method under static background based on watershed segmentation
JP4575315B2 (en) Object detection apparatus and method
KR101958247B1 (en) 3D Virtual Image Analysis System using Image Separation Image Tracking Technology
Dave et al. Statistical survey on object detection and tracking methodologies
Silar et al. Utilization of directional properties of optical flow for railway crossing occupancy monitoring
Vella et al. Improved detection for wami using background contextual information
Krishna et al. Automatic detection and tracking of moving objects in complex environments for video surveillance applications

Legal Events

Date Code Title Description
A201 Request for examination
E902 Notification of reason for refusal
N231 Notification of change of applicant
E701 Decision to grant or registration of patent right