WO2017078213A1 - Method for detecting moving object in photographed image, and boarding and alighting accident prevention system using same - Google Patents

Method for detecting moving object in photographed image, and boarding and alighting accident prevention system using same

Info

Publication number
WO2017078213A1
Authority
WO
WIPO (PCT)
Prior art keywords
optical flow
vehicle
region
frequency
area
Prior art date
Application number
PCT/KR2015/013119
Other languages
French (fr)
Korean (ko)
Inventor
양승한
박성령
Original Assignee
경북대학교 산학협력단
Priority date
Filing date
Publication date
Application filed by 경북대학교 산학협력단 (Kyungpook National University Industry-Academic Cooperation Foundation)
Publication of WO2017078213A1 publication Critical patent/WO2017078213A1/en

Classifications

    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B60: VEHICLES IN GENERAL
    • B60N: SEATS SPECIALLY ADAPTED FOR VEHICLES; VEHICLE PASSENGER ACCOMMODATION NOT OTHERWISE PROVIDED FOR
    • B60N 5/00: Arrangements or devices on vehicles for entrance or exit control of passengers, e.g. turnstiles
    • B60Q: ARRANGEMENT OF SIGNALLING OR LIGHTING DEVICES, THE MOUNTING OR SUPPORTING THEREOF OR CIRCUITS THEREFOR, FOR VEHICLES IN GENERAL
    • B60Q 5/00: Arrangement or adaptation of acoustic signal devices
    • B60R: VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R 21/00: Arrangements or fittings on vehicles for protecting or preventing injuries to occupants or pedestrians in case of accidents or other traffic risks
    • B60R 21/01: Electrical circuits for triggering passive safety arrangements, e.g. airbags, safety belt tighteners, in case of vehicle accidents or impending vehicle accidents
    • B60R 21/013: Electrical circuits for triggering passive safety arrangements including means for detecting collisions, impending collisions or roll-over
    • B60R 21/0134: Electrical circuits for triggering passive safety arrangements responsive to imminent contact with an obstacle, e.g. using radar systems
    • G: PHYSICS
    • G08: SIGNALLING
    • G08B: SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B 21/00: Alarms responsive to a single specified undesired or abnormal condition and not otherwise provided for
    • G08B 21/18: Status alarms

Definitions

  • The present invention relates to preventing accidents that occur while passengers are boarding or alighting from a vehicle. In particular, it compares successive images of the vehicle's surroundings and detects moving objects from the optical flow they produce, so that dangerous conditions around the vehicle can be indicated more accurately. The invention concerns a method for detecting a moving object in a captured image and a boarding and alighting accident prevention system using the same.
  • In general, the driver visually checks the vehicle's surroundings through the exterior side mirrors on the left and right of the vehicle and the interior rear-view mirror while driving or parking, in order to guard against accidents.
  • Accordingly, cameras are installed on vehicles so that the driver can easily monitor the area around the vehicle from inside, together with a safety device that performs functions such as generating an alarm sound inside the vehicle when a dangerous situation occurs, for example a person approaching the vehicle.
  • Such a safety device compares the previous captured image with the current captured image provided by a camera installed on the vehicle to judge the situation around the vehicle, in particular moving objects such as passing vehicles, motorcycles, and other means of transport.
  • However, because this safety device judges moving objects solely from changes between captured images, it can only warn of dangerous conditions in the surroundings while the vehicle is stopped.
  • The present invention was created in view of these circumstances. It compares images of the vehicle's surroundings, verifies that a changed object is the same object based on its optical flow, generates an optical flow map corresponding to the movement of the object's feature points, and determines moving objects by analyzing that map. A first technical object is therefore to provide a method for detecting a moving object in a captured image, and a boarding and alighting accident prevention system using the same, that recognize moving objects around the vehicle more accurately and thereby give accurate warnings of dangerous conditions.
  • Another technical object of the present invention is to provide a method for detecting a moving object in a captured image, and a boarding and alighting accident prevention system using the same, that determine the current state of the vehicle from the optical flow distribution of the optical flow map and determine moving objects corresponding to that state from the optical flow distribution of a preset region of interest, so that dangerous situations around the vehicle can be recognized regardless of the state of the vehicle.
  • According to one aspect of the present invention, a boarding and alighting accident prevention system comprises: a photographing device installed on the vehicle to capture the area around the outside of the vehicle; a notification device for outputting accident prevention information; and a control device that detects feature points of objects in the image frames provided by the photographing device, generates optical flow vectors corresponding to the movement of those feature points between the current and previous image frames, discretizes the optical flow vectors by magnitude and angle, determines that a moving object exists when the maximum frequency in a region other than a preset vehicle state determination region is equal to or greater than a reference frequency, and controls the notification device to output corresponding alarm information.
  • The control device may comprise: a feature point detection module that detects corner feature points of objects in the current image frame; an optical flow vector generation module that estimates the feature points of the previous or subsequent image frame corresponding to the corner feature points detected in the current frame, detects the optical flow corresponding to the movement direction from the positional relationship between the corner feature points of the two frames, and generates optical flow vectors whose magnitude corresponds to the object's displacement and whose angle corresponds to its movement direction; an optical flow map generation module that generates an optical flow map with a magnitude axis and an angle axis from the optical flow vectors provided by the optical flow vector generation module and calculates the optical flow frequency of preset regions of the map; and a moving object analysis module that determines that a moving object exists when the maximum optical flow frequency in a region other than the preset state determination region of the generated map is equal to or greater than the reference frequency.
  • The optical flow vector generation module may be configured to apply a top-view coordinate transformation to the image containing the optical flow information and to generate the optical flow vectors in the resulting top-view image.
  • The optical flow vector generation module may be configured to set as valid optical flow vectors only those whose magnitude is less than a preset effective reference pixel value.
  • The moving object analysis module may determine whether the vehicle is stopped, moving forward, or reversing based on the optical flow region with the maximum frequency in the optical flow map. In the map formed by the magnitude axis and the angle axis, the region where the optical flow magnitude is less than a minimum reference pixel value is set as the stopped-state region, the region with small angular change within the minimum reference angle range is set as the forward-state region, and the region with large angular change within the maximum reference angle range is set as the reverse-state region, and the movement state of the vehicle is determined accordingly.
  • The optical flow map generation module may generate a first optical flow map of optical flow frequencies per region from the optical flow vectors of the entire image frame, and a second optical flow map of optical flow frequencies per region from the optical flow vectors of a preset region of interest within the frame. The moving object analysis module then analyzes the first optical flow map to determine whether the vehicle is stopped, moving forward, or reversing, and analyzes the second optical flow map to determine whether a moving object exists.
  • When no moving object has been detected, the moving object analysis module may determine that a moving object exists if the highest-frequency region of the first optical flow map and the highest-frequency region of the second optical flow map differ from each other.
  • The moving object analysis module may determine that a caught object has occurred, for example an object caught in a door, when optical flow above a certain frequency is detected continuously in regions of the optical flow map other than the region whose angular change lies within the maximum reference angle range and the region whose angular change lies within the minimum reference angle range.
  • According to another aspect of the present invention, a boarding and alighting accident prevention system comprises: a photographing device installed on the vehicle to capture the area around the outside of the vehicle; vehicle state information providing means for providing the current state of the vehicle; a notification device for outputting accident prevention information; and a control device that detects feature points of objects within the region of interest of the captured images provided by the photographing device, generates optical flow vectors corresponding to the movement of those feature points between the regions of interest of the current and previous image frames, discretizes the optical flow vectors by magnitude and angle, determines that a moving object exists when the maximum frequency in a region other than the vehicle state region corresponding to the vehicle state provided by the vehicle state information providing means is equal to or greater than a reference frequency, and controls the notification device to output corresponding alarm information.
  • In this system, the control device may likewise comprise a feature point detection module, an optical flow vector generation module, and an optical flow map generation module configured as described above, together with a moving object analysis module that determines that a moving object exists when the maximum optical flow frequency in a region of the generated optical flow map other than the region corresponding to the current state of the vehicle is equal to or greater than the reference frequency.
  • According to yet another aspect of the present invention, a method for detecting a moving object in a captured image comprises: a first step of detecting feature points of changed objects within the region of interest of the captured image provided by the photographing device; a second step of estimating the feature points of the previous image frame from the feature points of the current image frame detected in the first step, obtaining the optical flow corresponding to the movement direction from the positional relationship between the feature points of the two frames, and generating from it optical flow vectors whose magnitude corresponds to the object's displacement and whose angle corresponds to its movement direction; a third step of generating an optical flow map with a magnitude axis and an angle axis from the optical flow vectors provided in the second step; a fourth step of calculating the optical flow frequency of preset regions of the optical flow map generated in the third step; and a fifth step of detecting a moving object based on the position of the region whose optical flow frequency is the maximum in the fourth step.
  • In the second step, a top-view coordinate transformation may be applied to the image containing the optical flow information, and top-view optical flow vectors may be generated from the top-view image.
  • In the second step, only the optical flow vectors whose magnitude is less than the preset effective reference pixel value may be set as valid optical flow vectors and used in the subsequent moving object detection.
  • In the third step, a first optical flow map may be generated from the optical flow vectors of the entire image frame and a second optical flow map from the optical flow vectors of the region of interest preset in the frame. In the fourth step, the optical flow frequencies of the preset regions are calculated for each of the first and second optical flow maps. In the fifth step, the stopped, forward, or reverse state of the vehicle is determined from the region whose optical flow frequency in the first optical flow map is the maximum, and a moving object is determined to exist when a region whose optical flow frequency in the second optical flow map is equal to or greater than the reference frequency exists outside the vehicle state region.
  • When no region whose optical flow frequency calculated from the second optical flow map is equal to or greater than the reference frequency exists outside the vehicle state region, the fifth step may instead determine whether a moving object exists within the moving object determination region corresponding to the vehicle state obtained from the first optical flow map.
  • In the fifth step, in the first optical flow map formed by the magnitude axis and the angle axis, the region where the optical flow magnitude is less than the reference pixel value may be set as the stopped-state region, the region with small angular change within the minimum reference angle range as the forward-state region, and the region with large angular change within the maximum reference angle range as the reverse-state region, so that the current state of the vehicle is determined.
  • In the fifth step, when no moving object has been detected, a moving object may be determined to exist if the highest-frequency region of the first optical flow map and the highest-frequency region of the second optical flow map differ from each other.
  • It may further be determined that a caught object has occurred when optical flow above a certain frequency continues to be detected in regions other than the region whose angular change lies within the maximum reference angle range and the region whose angular change lies within the minimum reference angle range.
  • According to a further aspect, a method for detecting a moving object in a captured image comprises: a step 51 of receiving the current state of the vehicle from the vehicle; a step 52 of detecting feature points of objects within the region of interest of the captured image provided by a photographing device installed on the vehicle; a step in which the region-of-interest feature points of the previous image frame are estimated from the region-of-interest feature points of the current frame detected in step 52 and the movement direction is obtained from the positional relationship between the region-of-interest feature points of the two frames; a step 55 in which the optical flow frequency of the preset regions is calculated from the optical flow map generated in step 54; and a step 56 of determining that a moving object exists when a region whose optical flow frequency is equal to or greater than the reference frequency exists outside the region corresponding to the vehicle state provided in step 51.
  • In this method, the stopped state is set to the region of the optical flow map where the optical flow magnitude is less than the reference pixel value, and the forward state is set to a region with small angular change within the preset minimum reference angle range.
  • As described above, the present invention compares images of the vehicle's surroundings and verifies that changed objects are the same object based on their optical flow before detecting moving objects, so that moving objects around the vehicle are recognized more accurately and the corresponding dangerous conditions can be indicated accurately.
  • In addition, the current vehicle state is determined from the optical flow distribution of the optical flow map, and moving objects corresponding to that state are determined from the optical flow distribution of the preset region of interest, so that dangerous situations around the vehicle can be recognized regardless of the state of the vehicle.
  • FIG. 1 shows the schematic configuration of a boarding and alighting accident prevention system according to the present invention.
  • FIG. 2 illustrates the functional internal structure of the control device 400 shown in FIG. 1.
  • FIG. 3 illustrates the feature point detection operation of the feature point detection module 410 shown in FIG. 2.
  • FIG. 4 illustrates the optical flow vector generation operation of the optical flow vector generation module 420 shown in FIG. 2.
  • FIG. 5 illustrates an optical flow map generated by the optical flow map generation module 430 shown in FIG. 2.
  • FIGS. 6 to 9 illustrate the motion state determination operation of the moving object analysis module 440 shown in FIG. 2.
  • FIGS. 10 and 11 are flowcharts illustrating a method of detecting a moving object in a captured image according to the present invention.
  • FIG. 1 shows the schematic configuration of a boarding and alighting accident prevention system according to the present invention.
  • As shown in FIG. 1, the boarding and alighting accident prevention system includes a photographing apparatus 100, an alarm device 200, a data memory 300, and a control device 400.
  • The photographing apparatus 100, the alarm device 200, the data memory 300, and the control device 400 are installed at predetermined positions on the vehicle.
  • The photographing apparatus 100 is installed on the vehicle, photographs the area around the outside of the vehicle, and may be implemented as a camera.
  • The photographing apparatus 100 is installed at a suitable position on the outside of the vehicle, for example on a side mirror or above a door, so that it can photograph the area beside the vehicle's door.
  • The photographing region of the photographing apparatus 100 may be set so as to include a region of interest (ROI), that is, a danger zone.
  • The danger zone may be set to a suitable area in consideration of the type of vehicle or the boarding and alighting environment, and a plurality of different zones may be set according to the installation position of the photographing apparatus 100.
  • For example, the danger zone may be set as an area that includes a door and the front and rear wheels of the vehicle.
  • The alarm device 200 outputs an alarm sound or guidance sound based on information provided by the control device 400 and may be implemented, for example, as a buzzer or a speaker.
  • The alarm device 200 may be installed inside or outside the vehicle. An alarm device installed in the cabin makes the driver and the passengers inside aware of the accident prevention notification state, while one installed outside makes the passengers boarding or alighting through the door aware of it.
  • The data memory 300 stores the pixel position information of the ROI in the captured image and various reference information used to analyze moving objects.
  • The reference information includes the effective reference size used to select the valid optical flow vectors applied to the motion analysis, the reference frequency used to determine moving objects, and the minimum reference angle range, maximum reference angle range, and minimum reference pixel range used to define the motion state regions. The reference angle ranges are used to determine the forward and reverse motion of the vehicle, and the reference pixel range is used to determine whether the vehicle is stopped.
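  • For illustration only, the reference information kept in the data memory 300 could be organized as a simple configuration record such as the sketch below; every key name and value is an assumption chosen for the example, not a figure fixed by the present description.

```python
# Hypothetical layout of the reference information held in the data memory 300;
# all names and numbers are illustrative assumptions.
REFERENCE_INFO = {
    "roi_polygon_px": [(120, 200), (420, 200), (420, 360), (120, 360)],  # ROI corner pixels
    "effective_reference_size_px": 30,            # max magnitude of a valid optical flow vector
    "reference_frequency": 15,                    # min peak frequency that signals a moving object
    "min_reference_angle_range_deg": (-10, 10),   # forward-state angle region
    "max_reference_angle_range_deg": ((-180, -170), (170, 180)),  # reverse-state angle regions
    "min_reference_pixel_range_px": 1.0,          # magnitude below which flow counts as stopped
}
```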
  • The control device 400 determines moving objects around the vehicle door, regardless of whether the vehicle is moving, based on the captured images provided by the photographing apparatus 100, and outputs alarm information corresponding to the determination result through the alarm device 200.
  • FIG. 2 illustrates the functional internal structure of the control device 400 shown in FIG. 1.
  • As shown in FIG. 2, the control device 400 comprises a feature point detection module 410, an optical flow vector generation module 420, an optical flow map generation module 430, and a moving object analysis module 440.
  • The feature point detection module 410 detects feature points in the current image frame provided by the photographing apparatus 100.
  • The feature point detection module 410 may detect corner feature points of all objects O in the frame, as shown in FIG. 3A, using the well-known Harris corner detector. In this case, feature point detection may be performed on a reduced image to improve the detection speed.
  • FIG. 3B shows a captured image in which the detected feature points are marked; each red circle represents a feature point.
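  • As a minimal sketch of this corner detection step, the following Python/OpenCV fragment detects Harris-based corner features on a reduced copy of the frame and maps them back to full-frame coordinates; the parameter values (scale factor, corner count, quality level) are assumptions for illustration and are not specified by the present description.

```python
import cv2

def detect_corner_features(frame_gray, scale=0.5, max_corners=500):
    """Detect Harris corner features on a reduced copy of the frame for speed,
    then map them back to full-resolution coordinates."""
    small = cv2.resize(frame_gray, None, fx=scale, fy=scale)
    corners = cv2.goodFeaturesToTrack(
        small, maxCorners=max_corners, qualityLevel=0.01, minDistance=5,
        useHarrisDetector=True, k=0.04)
    if corners is None:
        return []
    # Undo the reduction so the points refer to the original frame.
    return [(x / scale, y / scale) for x, y in corners.reshape(-1, 2)]
```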
  • The optical flow vector generation module 420 analyzes the positional relationship of the corner feature points of the objects detected by the feature point detection module 410 to obtain optical flow information corresponding to the movement direction, and generates optical flow vectors from it.
  • The optical flow vector generation module 420 estimates the point in the previous or subsequent image frame corresponding to each feature point of the current image frame using the pyramidal Lucas-Kanade method.
  • As shown in FIG. 4A, the optical flow vector generation module 420 detects the positional change of each pair of corresponding feature points of an object in two frames, for example the object O2 in the current frame and the object O1 in the previous frame, and obtains an optical flow whose direction corresponds to that positional change.
  • The optical flow of a stationary object is a zero vector.
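  • A hedged sketch of this tracking step is shown below: it uses OpenCV's pyramidal Lucas-Kanade tracker to find, for each corner point of the current frame, the corresponding point in the previous frame, and returns the (start, end) point pairs of each flow; the window size and pyramid depth are assumed values.

```python
import cv2
import numpy as np

def track_features(curr_gray, prev_gray, curr_points):
    """For each corner point of the current frame, estimate the corresponding
    point in the previous frame with the pyramidal Lucas-Kanade tracker and
    return (start, end) point pairs describing each feature's optical flow."""
    pts = np.float32(curr_points).reshape(-1, 1, 2)
    prev_pts, status, _err = cv2.calcOpticalFlowPyrLK(
        curr_gray, prev_gray, pts, None, winSize=(21, 21), maxLevel=3)
    flows = []
    for cur, prev, ok in zip(pts.reshape(-1, 2), prev_pts.reshape(-1, 2), status.ravel()):
        if ok:  # a feature on a stationary object yields an (almost) zero flow
            flows.append((tuple(prev), tuple(cur)))
    return flows
```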
  • The optical flow vector generation module 420 converts the obtained optical flow into the optical flow of a top-view image in order to remove the effects of radial and perspective distortion.
  • A commonly used fisheye-lens camera outputs captured images that contain perspective distortion due to the camera pose and radial distortion due to the lens characteristics, as shown at 1 in FIG. 4B. Accordingly, the optical flow vector generation module 420 may generate the optical flow after converting the captured image containing these distortion components (form 1) into a top-view image (form 2) using preset correction coefficients. Alternatively, only the optical flow information may be converted into the top-view image.
  • That is, the optical flow vector generation module 420 transforms the coordinates of the start and end points of each optical flow using a top-view transformation algorithm with preset correction parameters, thereby generating a new optical flow vector OV in a top-view image such as a bird's-eye view, as shown in FIG. 4B.
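  • The endpoint-only variant of this conversion might look like the following sketch, which maps flow start and end points into a top-view plane with a precomputed homography; obtaining that homography, and removing the fisheye lens distortion beforehand with a separate undistortion step, is assumed to have been done offline and is not shown.

```python
import cv2
import numpy as np

def flows_to_top_view(flows, homography):
    """Map the start and end point of each optical flow vector into a top-view
    (bird's-eye) plane using an assumed, precomputed 3x3 homography.
    Lens (radial) undistortion is assumed to have been applied beforehand."""
    if not flows:
        return []
    starts = np.float32([s for s, _ in flows]).reshape(-1, 1, 2)
    ends = np.float32([e for _, e in flows]).reshape(-1, 1, 2)
    starts_tv = cv2.perspectiveTransform(starts, homography).reshape(-1, 2)
    ends_tv = cv2.perspectiveTransform(ends, homography).reshape(-1, 2)
    return [(tuple(s), tuple(e)) for s, e in zip(starts_tv, ends_tv)]
```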
  • Erroneous optical flow information can arise in the optical flow calculation process, so the optical flow vector generation module 420 preferably limits the optical flow information to the range within which objects can physically move between frames.
  • For example, among the optical flow vectors generated for all feature points of the changed objects, the optical flow vector generation module 420 may set as valid optical flow vectors only those whose magnitude, that is, the pixel difference, is at most a preset effective reference size, for example "30 pixels".
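  • A straightforward way to apply this limit, sketched under the assumption that each flow is stored as a (start, end) point pair, is to drop any vector whose length reaches the effective reference size:

```python
import math

def filter_effective_flows(flows, effective_reference_px=30.0):
    """Keep only flow vectors shorter than the effective reference size,
    discarding flows that exceed the physically plausible per-frame motion."""
    effective = []
    for (x0, y0), (x1, y1) in flows:
        if math.hypot(x1 - x0, y1 - y0) < effective_reference_px:
            effective.append(((x0, y0), (x1, y1)))
    return effective
```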
  • The optical flow map generation module 430 generates the optical flow maps used to determine the state of the vehicle and the movement of objects.
  • The first optical flow map, used to determine the state of the vehicle, is generated from the optical flow vectors of the entire image frame, while the second optical flow map, used to determine whether an object is moving, is generated from the optical flow vectors of the preset ROI.
  • As shown in FIGS. 5A and 5B, the optical flow map generation module 430 discretizes the valid optical flow vectors from the optical flow vector generation module 420 by magnitude and angle and generates the first and second optical flow maps from the resulting optical flow information. For example, for the three optical flows at (1,0), (1,2) and (1,3), the optical flow map generation module 430 generates a table of magnitudes and angles for each coordinate as shown in FIG. 5A, and an optical flow distribution chart in graph form over magnitude and angle as shown in FIG. 5B. FIGS. 5A and 5B show data discretized with a magnitude step of "0.5" pixels and an angle step of "1°".
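  • In code, such a discretized map is simply a two-dimensional histogram over flow magnitude and flow angle; the sketch below builds one with the 0.5-pixel and 1-degree steps mentioned above (the 30-pixel upper magnitude bound is an assumption carried over from the effective reference size).

```python
import numpy as np

def build_flow_map(flows, mag_step=0.5, ang_step=1.0, mag_max=30.0):
    """Discretize flow vectors into 0.5-pixel magnitude bins and 1-degree angle
    bins and accumulate their frequencies into a 2-D histogram."""
    mags, angs = [], []
    for (x0, y0), (x1, y1) in flows:
        dx, dy = x1 - x0, y1 - y0
        mags.append(np.hypot(dx, dy))
        angs.append(np.degrees(np.arctan2(dy, dx)))  # -180 .. +180 degrees
    mag_edges = np.arange(0.0, mag_max + mag_step, mag_step)
    ang_edges = np.arange(-180.0, 180.0 + ang_step, ang_step)
    hist, _, _ = np.histogram2d(mags, angs, bins=[mag_edges, ang_edges])
    return hist, mag_edges, ang_edges
```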
  • The moving object analysis module 440 determines the motion state of the vehicle from the optical flow frequency of each motion state region in the first optical flow map generated by the optical flow map generation module 430, and determines the existence of moving objects from the second optical flow map.
  • The moving object analysis module 440 first analyzes the first optical flow map to determine the current state of the vehicle. That is, it calculates the optical flow frequency of each state region as shown in FIG. 6 and classifies the state as stopped (STOP), forward (FORWARD), or reverse (BACKWARD) according to the distribution region with the maximum frequency. In the stopped state, most feature point positions do not change, so the optical flow is a zero vector; the stopped-state region may therefore be set to a relatively small region, for example the region where the optical flow vector magnitude is less than "1" pixel.
  • The forward-state region may be set as a region with a relatively small change in angle, that is, a region within a preset minimum reference angle range, for example an angle range of "-10 to +10" degrees.
  • The reverse-state region may be set as a region with a relatively large change in angle, that is, a region within the maximum reference angle range, for example angle ranges of "-170 to -180" and "+170 to +180" degrees. Thus, the moving object analysis module 440 determines that the vehicle is currently stopped when the optical flow frequency of the stopped-state region is the highest in the current captured image.
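  • One possible reading of this region-based decision is sketched below: the bins of the magnitude/angle histogram are partitioned into stop, forward and reverse regions using the example thresholds above, and the state whose region collects the most flow is returned. The exact partitioning is an assumption made for illustration.

```python
import numpy as np

def classify_vehicle_state(hist, mag_edges, ang_edges,
                           stop_px=1.0, forward_deg=(-10.0, 10.0)):
    """Sum the flow frequencies inside each state region of the magnitude/angle
    map and return the state whose region has the highest total."""
    mag_centers = (mag_edges[:-1] + mag_edges[1:]) / 2.0
    ang_centers = (ang_edges[:-1] + ang_edges[1:]) / 2.0
    mag_grid, ang_grid = np.meshgrid(mag_centers, ang_centers, indexing="ij")

    stop_region = mag_grid < stop_px
    forward_region = ~stop_region & (ang_grid >= forward_deg[0]) & (ang_grid <= forward_deg[1])
    reverse_region = ~stop_region & (np.abs(ang_grid) >= 170.0)  # -180..-170 and +170..+180

    totals = {
        "stop": hist[stop_region].sum(),
        "forward": hist[forward_region].sum(),
        "backward": hist[reverse_region].sum(),
    }
    return max(totals, key=totals.get)
```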
  • The moving object analysis module 440 determines that a moving object exists when there is a region of the second optical flow map, outside the region corresponding to the current state of the vehicle, whose local maximum optical flow frequency is equal to or greater than a preset reference frequency. For example, when the vehicle is stopped, a moving object is determined to exist when a maximum frequency occurs in a region of the second optical flow map other than the stopped-state region.
  • The moving object analysis module 440 provides the corresponding accident prevention alarm information to the alarm device 200 based on the result of this moving object detection.
  • As shown in FIG. 7, the moving object analysis module 440 calculates a first frequency (Total) corresponding to the vehicle state from the first optical flow map and a second frequency (ROI) corresponding to moving objects from the second optical flow map, and determines that a moving object exists when a region having the highest second frequency exists outside the region having the highest first frequency. That is, in FIG. 6B, the vehicle state region is "X" and the moving object region is "Y", the region with the highest second frequency among the regions other than the vehicle state region.
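  • Expressed over the histogram sketched earlier, this check reduces to masking out the current vehicle-state region of the ROI map and comparing the remaining peak against the reference frequency; the boolean mask is assumed to have been precomputed from the state regions above.

```python
def moving_object_present(roi_hist, vehicle_state_region, reference_frequency):
    """roi_hist is the second map (flows inside the ROI only) and
    vehicle_state_region is a boolean mask over the same bins marking the
    current vehicle-state region. A moving object is reported when the peak
    frequency outside that region reaches the reference frequency."""
    outside = roi_hist.copy()
    outside[vehicle_state_region] = 0
    return outside.max() >= reference_frequency
```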
  • The moving object analysis module 440 may also be configured to determine that an object has become caught in the vehicle. An object caught in the vehicle moves together with it; taking this into account, the moving object analysis module 440 determines that a caught object has occurred when optical flow in the vertical direction, as shown in FIG. 8, continues to appear for more than a certain time.
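  • A simple way to express this persistence condition is a per-frame counter over the bins outside the forward and reverse angle regions, as in the sketch below; the frequency threshold and the number of consecutive frames standing in for the "certain time" are assumptions.

```python
class CaughtObjectDetector:
    """Flags a caught (pinched) object when flow keeps appearing, frame after
    frame, in bins outside the forward and reverse angle regions."""

    def __init__(self, frequency_threshold, frames_required):
        self.frequency_threshold = frequency_threshold
        self.frames_required = frames_required
        self._consecutive = 0

    def update(self, hist, non_drive_region):
        # non_drive_region: boolean mask of bins outside the forward/reverse ranges.
        if hist[non_drive_region].sum() >= self.frequency_threshold:
            self._consecutive += 1
        else:
            self._consecutive = 0
        return self._consecutive >= self.frames_required
```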
  • In the embodiment described above, the state of the vehicle is determined using the optical flow map of the entire image frame, but the state of the vehicle may instead be provided by equipment of the vehicle itself.
  • That is, the current state of the vehicle may be provided through vehicle state information providing means (not shown) installed in the vehicle. In that case, according to the present invention, the optical flow map may be generated only for the region of interest preset in the image frame in order to determine moving objects, and the process of generating and analyzing an optical flow map for determining the vehicle state may be omitted, which improves the processing speed on the same system.
  • In addition, since moving objects in the ROI around the vehicle are determined first through the second optical flow map, an alarm for a moving object can be issued without first determining the state of the vehicle, which can further improve the processing speed on the same system.
  • Alternatively, the system may be configured to determine the vehicle state through the first optical flow map first, and then to provide an alarm for a moving object after checking whether a moving object exists relative to the region corresponding to that vehicle state.
  • In this case too, the current state information of the vehicle may be provided through vehicle state information providing means (not shown) installed in the vehicle, and when no moving object exists for the current state of the vehicle, a caught object may be determined and an alarm provided.
  • The photographing apparatus 100 provides captured images to the control device 400 at a predetermined period.
  • The photographing apparatus 100 may be installed above a door of the vehicle, and in general is installed so that its view includes a danger zone around the vehicle.
  • The control device 400 detects feature points of objects in the current image frame using the Harris corner detector (ST10).
  • The control device 400 estimates the feature points in the previous image frame corresponding to the corner feature points of the current image frame and obtains optical flow information corresponding to the positional change between the corner feature points in the two frames (ST20). That is, as shown in FIG. 4A, the control device 400 generates optical flow information corresponding to the positional change between a corner feature point of an object in the current image frame and the corresponding corner feature point in the previous image frame.
  • The control device 400 converts the image containing the optical flow information into a top-view image and generates optical flow vectors corresponding to the optical flow information in the top-view image (ST30 and ST40). That is, the control device 400 generates, from the optical flow information of each object, an optical flow vector whose magnitude corresponds to the movement distance and whose angle corresponds to the movement direction. At this time, the control device 400 may set as valid optical flow vectors only those whose magnitude is less than a preset reference size, for example "30" pixels; optical flow vectors larger than the reference pixel size are removed from the top-view image. The top-view optical flow vectors may also be generated by transforming only the optical flow vector information into top-view coordinates.
  • The control device 400 then generates an optical flow map of the magnitude and angle of each valid optical flow vector (ST50). That is, the control device 400 generates an optical flow table as shown in FIG. 5A and a corresponding optical flow distribution chart in graph form as shown in FIG. 5B. In doing so, the control device 400 generates a first optical flow map of optical flow frequencies per region from the optical flow vectors of the entire image frame, and a second optical flow map of optical flow frequencies per region from the optical flow vectors of the region of interest preset in the frame.
  • The control device 400 analyzes the first optical flow map to calculate the optical flow frequency of each preset vehicle state region and determines the current state of the vehicle from the result (ST60). That is, the control device 400 determines whether the vehicle is stopped, moving forward, or reversing from the optical flow frequencies of the entire image, as shown in FIG. 6.
  • The control device 400 then determines whether a moving object exists by checking the optical flow frequencies of the second optical flow map for the preset ROI (ST70). That is, using only the optical flow vectors within the preset ROI, the control device 400 determines that a moving object exists when the maximum frequency in the regions other than the current vehicle state region is equal to or greater than the reference frequency.
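  • Putting the sketches above together, one frame of this processing sequence (ST10 through ST70) could be composed roughly as follows; the homography, the in_roi filter, the per-state bin masks and the reference frequency are all assumed inputs, and none of the identifiers come from the present description.

```python
def process_frame(prev_gray, curr_gray, homography, in_roi, state_regions, reference_frequency):
    """One pass of steps ST10 to ST70, composed from the sketches above.
    in_roi(flow) -> bool and state_regions[state] (boolean bin masks) are
    assumed helpers; none of these identifiers come from the description."""
    points = detect_corner_features(curr_gray)                        # ST10
    flows = track_features(curr_gray, prev_gray, points)              # ST20
    flows = flows_to_top_view(flows, homography)                      # ST30
    flows = filter_effective_flows(flows)                             # ST40
    hist_all, mag_edges, ang_edges = build_flow_map(flows)            # ST50: first map (whole frame)
    hist_roi, _, _ = build_flow_map([f for f in flows if in_roi(f)])  # ST50: second map (ROI only)
    state = classify_vehicle_state(hist_all, mag_edges, ang_edges)    # ST60
    alarm = moving_object_present(hist_roi, state_regions[state], reference_frequency)  # ST70
    return state, alarm
```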
  • The control device 400 may additionally determine whether a moving object is moving in the same way as the vehicle, or whether an object is caught in the vehicle.
  • Here, the moving object is determined using the second optical flow map.
  • When no region exists outside the vehicle state region in which the optical flow frequency calculated from the second optical flow map is equal to or greater than the reference frequency, it is also possible to determine whether a moving object exists within the moving object determination region corresponding to the vehicle state calculated from the first optical flow map.
  • Referring to FIG. 11, the control device 400 first uses the second optical flow map to determine whether a moving object exists in a region other than the vehicle state determination region shown in FIG. 9 (ST110).
  • If it is determined in step ST110 that a moving object exists, the control device 400 outputs accident prevention alarm information (ST120).
  • If it is determined in step ST110 that no moving object exists, the control device 400 analyzes the current state of the vehicle using the first optical flow map (ST130).
  • The control device 400 then re-checks whether a moving object exists, using the first and second optical flow maps in accordance with the vehicle state analyzed in step ST130.
  • When it is determined in step ST130 that the vehicle is currently stopped, the control device 400 outputs accident prevention alarm information if a moving object exists in a region of the second optical flow map other than the vehicle stop region (FIG. 6 (a)) (ST140).
  • When it is determined in step ST130 that the vehicle is currently moving forward, the control device 400 outputs accident prevention alarm information if a moving object exists in a region of the second optical flow map other than the vehicle forward region (FIG. 6 (b)) (ST150).
  • When it is determined in step ST130 that the vehicle is currently reversing, the control device 400 outputs accident prevention alarm information if a moving object exists in a region of the second optical flow map other than the vehicle reverse region (FIG. 6 (c)) (ST160).
  • When no moving object exists in the regions other than the vehicle forward region or the vehicle reverse region in steps ST150 and ST160, the control device 400 checks whether an object is caught in the vehicle and, if a caught object is confirmed, outputs the accident prevention alarm information (ST170). At this time, as shown in FIG. 8, the control device 400 determines that a caught object has occurred if the corresponding optical flow persists for a certain time.
  • In the embodiment described above, whether a moving object exists is detected through comparison of the captured images; by using the optical flow vectors obtained from the movement changes of moving objects in the captured images, moving objects around the vehicle can be detected more accurately regardless of the state of the vehicle.

Landscapes

  • Engineering & Computer Science (AREA)
  • Mechanical Engineering (AREA)
  • Physics & Mathematics (AREA)
  • Transportation (AREA)
  • General Physics & Mathematics (AREA)
  • Remote Sensing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Automation & Control Theory (AREA)
  • Acoustics & Sound (AREA)
  • Emergency Management (AREA)
  • Image Analysis (AREA)
  • Traffic Control Systems (AREA)
  • Business, Economics & Management (AREA)
  • Human Computer Interaction (AREA)
  • Electromagnetism (AREA)
  • Mathematical Physics (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)

Abstract

The present invention relates to a method for detecting a moving object in a photographed image, which is capable of giving notification of a dangerous situation surrounding a vehicle more accurately, and a boarding and alighting accident prevention system using the same. The boarding and alighting accident prevention system comprises: a photographing device, installed on a vehicle, for photographing the external surroundings of the vehicle; a notification device for outputting accident prevention alert information; and a control unit configured to detect a feature point of an object in an image frame provided from the photographing device, generate an optical flow vector corresponding to a change in the movement of the feature point between the current image frame and a previous image frame, discretize the optical flow vector with respect to magnitude and angle, determine that a moving object exists if the maximum frequency in a region other than a predetermined vehicle state determination region is equal to or greater than a reference frequency, and transmit alarm information corresponding thereto through the notification device.

Description

촬영영상에서의 움직임 객체 검출 방법 및 이를 이용한 차량 승하차 사고 예방 시스템 (Method for detecting a moving object in a captured image, and boarding and alighting accident prevention system using the same)
The present invention relates to preventing accidents that occur while passengers are boarding or alighting from a vehicle. In particular, it compares successive images of the vehicle's surroundings and detects moving objects from the optical flow they produce, so that dangerous conditions around the vehicle can be indicated more accurately. The invention concerns a method for detecting a moving object in a captured image and a boarding and alighting accident prevention system using the same.
In general, the driver visually checks the vehicle's surroundings through the exterior side mirrors on the left and right of the vehicle and the interior rear-view mirror while driving or parking, in order to guard against accidents.
However, the area the driver can see around the vehicle through the exterior side mirrors and the interior rear-view mirror is limited, and serious accidents caused by a moment's inattention to the situation around the vehicle have become a social issue.
In particular, the operation of school vehicles carrying children is spreading rapidly, and fatal accidents that occur while children are boarding or alighting have a very large social impact because the victims are children.
Accordingly, in recent years it has been proposed not only to tighten the regulations on school vehicles, but also to install cameras on vehicles so that the driver can easily monitor the area around the vehicle from inside, and to fit vehicles as standard with safety devices that perform functions such as generating an alarm sound inside the vehicle when a dangerous situation occurs, for example a person approaching the vehicle.
Such a safety device compares the previous captured image with the current captured image provided by a camera installed on the vehicle to judge the situation around the vehicle, in particular moving objects such as passing vehicles, motorcycles, and other means of transport.
However, when moving objects are judged solely from changes between captured images, the movement of the vehicle itself causes the captured images to change while the vehicle is moving, so a moving object may be judged to exist even though no moving object is present around the vehicle.
That is, because such a safety device judges moving objects solely from changes between captured images, it can only warn of dangerous conditions in the surroundings while the vehicle is stopped.
The present invention was created in view of these circumstances. It compares images of the vehicle's surroundings, verifies that a changed object is the same object based on its optical flow, generates an optical flow map corresponding to the movement of the object's feature points, and determines moving objects by analyzing that map. A first technical object is therefore to provide a method for detecting a moving object in a captured image, and a boarding and alighting accident prevention system using the same, that recognize moving objects around the vehicle more accurately and thereby give accurate warnings of dangerous conditions.
Another technical object of the present invention is to provide a method for detecting a moving object in a captured image, and a boarding and alighting accident prevention system using the same, that determine the current state of the vehicle from the optical flow distribution of the optical flow map and determine moving objects corresponding to that state from the optical flow distribution of a preset region of interest, so that dangerous situations around the vehicle can be recognized regardless of the state of the vehicle.
상기 목적을 달성하기 위한 본 발명의 일측면에 따르면, 차량에 설치되어 차량의 외측 주변을 촬영하는 촬영장치와, 사고예방 알림정보를 출력하는 알림장치 및, 상기 촬영장치로부터 제공되는 영상프레임에서 객체에 대한 특징점을 검출하고, 현재 영상 프레임과 이전 영상 프레임에서의 특징점 이동 변화에 대응되는 광흐름 벡터를 생성하며, 광흐름 벡터를 크기와 각도에 대해 이산화하여 기 설정된 차량 상태 판단 영역 이외 영역에서의 최대 빈도수가 기준 빈도수 이상인 경우 움직임 객체가 존재하는 것으로 판단하여 상기 알림장치를 통해 이에 대응되는 알람정보를 송출하도록 제어하는 제어장치를 포함하여 구성되는 것을 특징으로 하는 차량 승하차 사고 예방시스템이 제공된다.According to an aspect of the present invention for achieving the above object, a photographing apparatus installed in the vehicle for photographing the outer periphery of the vehicle, a notification device for outputting accident prevention notification information, and an object in the image frame provided from the photographing apparatus Detects the feature points for, generates a light flow vector corresponding to the movement of the feature points in the current image frame and the previous image frame, discretizes the light flow vector with respect to the size and angle, When the maximum frequency is greater than or equal to the reference frequency is provided a vehicle getting on and off accident prevention system comprising a control device for controlling to transmit the alarm information corresponding to determine that the moving object exists through the notification device.
또한, 상기 제어장치는 현재 영상 프레임에서 객체에 대한 코너 특징점을 검출하는 특징점 검출모듈과, 상기 특징점 검출모듈에서 검출된 현재 영상프레임에서의 코너 특징점에 대응되는 이전 또는 이후 영상프레임의 특징점을 추정하고, 두 영상 프레임의 코너 특징점간의 위치 관계를 근거로 이동방향에 대응되는 광흐름을 검출함과 더불어, 해당 객체에 대한 이동 변화 거리에 대응되는 크기 및 이동 방향에 대응되는 각도를 포함하는 광흐름 벡터를 생성하는 광흐름 벡터 생성모듈, 상기 광흐름 벡터 생성모듈로부터 제공되는 광흐름 벡터를 근거로 광흐름 크기축과 각도축으로 이루어지는 광흐름 맵을 생성하고, 광흐름 맵에서 기 설정된 영역에 대한 광흐름 빈도수를 산출하는 광흐름 맵 생성모듈 및, 상기 광흐름 생성모듈에서 생성된 광흐름 맵에서 기 설정된 상태판단 영역에 이외 영역에서의 최대 광흐름 빈도수가 기준 빈도수 이상인 경우 움직임 객체가 존재하는 것으로 판단하는 움직임 분석모듈을 포함하여 구성되는 것을 특징으로 하는 차량 승하차 사고 예방시스템이 제공된다.The control apparatus may further include a feature point detection module for detecting a corner feature point of an object in a current image frame, and a feature point of a previous or subsequent image frame corresponding to a corner feature point in the current image frame detected by the feature point detection module. In addition to detecting the light flow corresponding to the moving direction based on the positional relationship between the corner feature points of the two image frames, the light flow vector including the size corresponding to the moving change distance of the object and the angle corresponding to the moving direction. A light flow vector generation module for generating a light flow map and a light flow map having a light flow size axis and an angular axis based on the light flow vector provided from the light flow vector generation module, An optical flow map generation module for calculating a flow frequency and an optical flow map generated by the optical flow generation module The document group to get on and off the vehicle accident prevention system, characterized in that comprising: a motion analysis module for determining that a moving object, if the maximum frequency of the optical flow in the region other than the less than the reference frequency at the set-state determination area is provided.
또한, 상기 광흐름 벡터 생성모듈은 광흐름 정보가 포함된 영상을 상방시선 좌표변화처리하고, 이 상방시선 영상에서 광흐름 벡터를 생성하도록 구성되는 것을 특징으로 하는 차량 승하차 사고 예방 시스템이 제공된다.In addition, the light flow vector generation module is provided with a vehicle getting on and off accident prevention system, characterized in that configured to generate a light flow vector from the image of the upper line of view and the light flow coordinate change processing.
또한, 상기 광흐름 벡터 생성모듈은 상기 광흐름 벡터의 크기가 기 설정된 유효 기준 픽셀 미만인 광흐름 벡터만 유효 광흐름 벡터로 설정하도록 구성되는 것을 특징으로 하는 차량 승하차 사고 예방시스템이 제공된다.In addition, the light flow vector generation module is provided with a vehicle unloading accident prevention system, characterized in that configured to set only the light flow vector of the light flow vector of the size of the light flow vector less than a predetermined effective reference pixel as the effective light flow vector.
또한, 상기 움직임 분석모듈은 광흐름 맵에서 최대 빈도수의 광흐름 영역을 근거로 차량의 정차, 전진, 후진 상태를 판단하되, 크기축과 각도축으로 이루어지는 광흐름 맵에서 광흐름의 크기가 최소 기준 픽셀 미만인 영역을 정차상태 영역으로 설정하고, 최소 기준 각도 범위로서 각도의 변화가 적은 영역을 전진상태 영역으로 설정하며, 최대 기준 각도 범위로서 각도의 변화가 큰 영역을 후진상태 영역으로 설정하여, 차량의 움직임 상태를 판단하도록 구성되는 것을 특징으로 하는 차량 승하차 사고 예방시스템이 제공된다.In addition, the motion analysis module determines the stop, forward and backward state of the vehicle on the basis of the light frequency region of the maximum frequency in the light flow map, but the minimum size of the light flow in the light flow map consisting of the size axis and the angular axis Set the area less than the pixel as the stationary state area, set the area where the change of angle as the minimum reference angle range is small, and set the area where the change of angle as the maximum reference angle range as the reverse state area. Provided is a vehicle unloading accident prevention system, characterized in that configured to determine the movement state of the vehicle.
또한, 상기 광흐름 맵 생성모듈은 전체 영상프레임에 대한 광흐름 벡터를 근거로 영역별 광흐름 빈도수로 이루어지는 제1 광흐름 맵을 생성함과 더불어, 영상프레임에서 기 설정된 관심영역에 대한 광흐름 벡터를 근거로 영역별 광흐름 빈도수로 이루어지는 제2 광흐름 맵을 생성하도록 구성되고, 상기 움직임 분석모듈은 상기 제1 광흐름 맵을 분석하여 차량의 정차, 전진, 후진 상태를 판단하고, 상기 제2 광흐름 맵을 분석하여 움직임 객체의 존재 여부를 판단하도록 구성되는 것을 특징으로 하는 차량 승하차 사고 예방시스템이 제공된다. In addition, the light flow map generation module generates a first light flow map including light flow frequency for each region based on the light flow vector for the entire image frame, and the light flow vector for the region of interest preset in the image frame. Generate a second light flow map including light frequency by region based on the second light flow frequency; and the motion analysis module analyzes the first light flow map to determine a stop, forward, and reverse states of the vehicle, and Provided is a vehicle unloading accident prevention system, characterized in that the light flow map is analyzed to determine the existence of a moving object.
또한, 상기 움직임 분석모듈은 움직임 객체가 검출되지 않는 경우, 제1 광흐름 맵에서 최고 빈도수 영역과, 제2 광흐름 맵에서 최고 빈도수 영역이 서로 상이한 경우 움직임 객체가 존재하는 것으로 판단하도록 구성되는 것을 특징으로 하는 차량 승하차 사고 예방 시스템이 제공된다.The motion analysis module may be configured to determine that the motion object exists when the highest frequency region in the first light flow map and the highest frequency region in the second light flow map are different from each other when the motion object is not detected. A vehicle unloading accident prevention system is provided.
또한, 상기 움직임 분석모듈은 광흐름 맵에서 각도 변화가 최대 기준각도범위인 영역과 각도변화가 최소 기준각도 범위인 영역 이외의 영역에 대해 일정 빈도수 이상의 광흐름이 지속하여 검출되는 경우 끼임 객체가 발생한 것으로 판단하는 것을 특징으로 하는 차량 승하차 사고 예방 시스템이 제공된다.In addition, the motion analysis module may cause an object to be jammed if the light flow continues to be detected at a predetermined frequency or more in a region other than the region where the angle change is the maximum reference angle range and the region where the angle change is the minimum reference angle range in the light flow map. There is provided a vehicle unloading accident prevention system, characterized in that it is determined to be.
또한, 상기 목적을 달성하기 위한 본 발명의 또 다른 일측면에 따르면, 차량에 설치되어 차량의 외측 주변을 촬영하는 촬영장치와, 차량의 현재 상태를 제공하기 위한 차량 상태정보 제공수단, 사고예방 알림정보를 출력하는 알림장치 및, 상기 촬영장치로부터 제공되는 촬영영상의 관심영역에서 객체에 대한 특징점을 검출하고, 현재 영상 프레임과 이전 영상 프레임의 관심영역에서의 특징점 이동 변화에 대응되는 광흐름 벡터를 생성하며, 광흐름 벡터를 크기와 각도에 대해 이산화하여 상기 차량 상태정보 제공수단으로부터 제공되는 차량 상태에 대응되는 차량 상태 영역 이외의 영역에서의 최대 빈도수가 기준 빈도수 이상인 경우 움직임 객체가 존재하는 것으로 판단하여 상기 알림장치를 통해 이에 대응되는 알람정보를 송출하도록 제어하는 제어장치를 포함하여 구성되는 것을 특징으로 하는 차량 승하차 사고 예방시스템이 제공된다.In addition, according to another aspect of the present invention for achieving the above object, the photographing device is installed in the vehicle for photographing the outer periphery of the vehicle, vehicle state information providing means for providing the current state of the vehicle, accident prevention notification A notification device for outputting information, a feature point for the object in the region of interest of the captured image provided from the imaging apparatus, and a light flow vector corresponding to the change in the feature point movement in the region of interest of the current image frame and the previous image frame; And generating a discretized light flow vector with respect to size and angle, and determining that a moving object exists when the maximum frequency in an area other than the vehicle state area corresponding to the vehicle state provided from the vehicle state information providing means is equal to or greater than a reference frequency. Controlling to transmit alarm information corresponding thereto through the notification device The getting on and off the vehicle accident prevention system characterized in that comprises a device is provided.
또한, 상기 제어장치는 현재 영상 프레임에서 객체에 대한 코너 특징점을 검출하는 특징점 검출모듈과, 상기 특징점 검출모듈에서 검출된 현재 영상프레임에서의 코너 특징점에 대응되는 이전 또는 이후 영상프레임의 특징점을 추정하고, 두 영상 프레임의 코너 특징점간의 위치 관계를 근거로 이동방향에 대응되는 광흐름을 검출함과 더불어, 해당 객체에 대한 이동 변화 거리에 대응되는 크기 및 이동 방향에 대응되는 각도를 포함하는 광흐름 벡터를 생성하는 광흐름 벡터 생성모듈, 상기 광흐름 벡터 생성모듈로부터 제공되는 광흐름 벡터를 근거로 광흐름 크기축과 각도축으로 이루어지는 광흐름 맵을 생성하고, 광흐름 맵에서 기 설정된 영역에 대한 광흐름 빈도수를 산출하는 광흐름 맵 생성모듈 및, 상기 광흐름 생성모듈에서 생성된 광흐름 맵에서 차량의 현재 상태에 대응되는 영역 이외 영역에서의 최대 광흐름 빈도수가 기준 빈도수 이상인 경우 움직임 객체가 존재하는 것으로 판단하는 움직임 분석모듈을 포함하여 구성되는 것을 특징으로 하는 차량 승하차 사고 예방시스템이 제공된다.The control apparatus may further include a feature point detection module for detecting a corner feature point of an object in a current image frame, and a feature point of a previous or subsequent image frame corresponding to a corner feature point in the current image frame detected by the feature point detection module. In addition to detecting the light flow corresponding to the moving direction based on the positional relationship between the corner feature points of the two image frames, the light flow vector including the size corresponding to the moving change distance of the object and the angle corresponding to the moving direction. A light flow vector generation module for generating a light flow map and a light flow map having a light flow size axis and an angular axis based on the light flow vector provided from the light flow vector generation module, An optical flow map generation module for calculating a flow frequency and an optical flow map generated by the optical flow generation module There is provided a vehicle unloading accident prevention system comprising a motion analysis module for determining that a moving object exists when the maximum light flow frequency in a region other than the region corresponding to the current state of the vehicle is higher than the reference frequency. .
또한, 상기 목적을 달성하기 위한 본 발명의 또 다른 일측면에 따르면,상기 촬영장치로부터 제공되는 촬영영상의 관심영역에서 변화 발생 객체에 대한 특징점을 검출하는 제1 단계와, 상기 제1 단계에서 검출된 현재 영상프레임의 특징점을 근거로 이전 영상프레임의 특징점을 추정하고, 두 영상 프레임의 특징점간 위치 관계를 근거로 이동방향에 대응되는 광흐름을 획득함과 더불어, 획득된 광흐름을 근거로 해당 객체에 대한 이동 변화 거리에 대응되는 크기 및 이동 방향에 대응되는 각도를 포함하는 광흐름 벡터를 생성하는 제2 단계, 상기 제2 단계에서 제공되는 광흐름 벡터를 근거로 광흐름 크기축과 각도축으로 이루어지는 광흐름 맵을 생성하는 제3 단계, 상기 제3 단계에서 생성된 광흐름 맵에서 기 설정된 영역에 대한 광흐름 빈도수를 산출하는 제4 단계, 상기 제4 단계에서 광흐름 빈도수가 최대인 영역의 위치를 근거로 움직임 객체를 검출하는 제5 단계를 포함하여 구성되는 것을 특징으로 하는 촬영영상에서의 움직임 객체 검출 방법이 제공된다.In addition, according to another aspect of the present invention for achieving the above object, a first step of detecting a feature point for the change-occurring object in the region of interest of the captured image provided from the imaging device, and the detection in the first step Estimates the feature points of the previous video frame based on the feature points of the current video frame, obtains the light flow corresponding to the moving direction based on the positional relationship between the feature points of the two video frames, and A second step of generating a light flow vector including a magnitude corresponding to a movement change distance with respect to the object and an angle corresponding to the movement direction; and based on the light flow vector provided in the second step, the light flow magnitude and angle axes Comprising a third step of generating a light flow map consisting of the step of calculating the light flow frequency for a predetermined region in the light flow map generated in the third step A fourth step, a moving object is detected in the captured image characterized in that the configuration including the fifth step of detecting the moving object is an optical flow frequency in the fourth step on the basis of the position of the maximum in area is provided.
In addition, there is provided a method for detecting a moving object in a captured image, characterized in that, in the second step, a top-view coordinate transformation is performed on the image containing the optical flow information and a top-view optical flow vector is generated from the top-view image.
In addition, there is provided a method for detecting a moving object in a captured image, characterized in that, in the second step, only optical flow vectors whose magnitude is less than a preset effective reference pixel value are set as valid optical flow vectors and applied to the subsequent moving object detection.
In addition, there is provided a method for detecting a moving object in a captured image, characterized in that the third step generates a first optical flow map from the optical flow vectors of the entire image frame and generates a second optical flow map from the optical flow vectors of a preset region of interest in the image frame; the fourth step calculates the optical flow frequency for preset regions of each of the first and second optical flow maps; and the fifth step determines whether the vehicle is stopped, moving forward, or moving backward on the basis of the region with the greatest optical flow frequency calculated from the first optical flow map, and judges that a moving object exists when a region whose optical flow frequency calculated from the second optical flow map is equal to or greater than the reference frequency exists outside the vehicle state region.
In addition, there is provided a method for detecting a moving object in a captured image, characterized in that, in the fifth step, when no region whose optical flow frequency calculated from the second optical flow map is equal to or greater than the reference frequency exists outside the vehicle state region, whether a moving object exists is determined for the moving object determination region corresponding to the vehicle state obtained from the first optical flow map.
In addition, there is provided a method for detecting a moving object in a captured image, characterized in that, in the fifth step, in the first optical flow map having a magnitude axis and an angle axis, a region in which the optical flow magnitude is less than a reference pixel value is set as the stopped-state region, a region of little angular change is set as the forward-state region using a minimum reference angle range, and a region of large angular change is set as the backward-state region using a maximum reference angle range, whereby the current state of the vehicle is determined.
In addition, there is provided a method for detecting a moving object in a captured image, characterized in that, in the fifth step, when no moving object has been detected, a moving object is judged to exist if the highest-frequency region of the first optical flow map and the highest-frequency region of the second optical flow map differ from each other.
In addition, there is provided a method for detecting a moving object in a captured image, characterized in that, in the fifth step, a trapped object is judged to have occurred when optical flow of at least a certain frequency is continuously detected in regions of the optical flow map other than the region whose angle change lies in the maximum reference angle range and the region whose angle change lies in the minimum reference angle range.
In addition, according to another aspect of the present invention for achieving the above object, there is provided a method for detecting a moving object in a captured image, characterized by comprising: a step 51 of receiving the current state of the vehicle from the vehicle; a step 52 of detecting feature points of an object in a region of interest of a captured image provided by a photographing device installed on the vehicle; a step 53 of estimating the region-of-interest feature points of a previous image frame on the basis of the region-of-interest feature points of the current image frame detected in step 52, obtaining an optical flow corresponding to the direction of movement on the basis of the positional relationship between the region-of-interest feature points of the two image frames, and generating, from the obtained optical flow, an optical flow vector comprising a magnitude corresponding to the movement distance of the object and an angle corresponding to its direction of movement; a step 54 of generating an optical flow map having a magnitude axis and an angle axis from the optical flow vectors provided in step 53; a step 55 of calculating the optical flow frequency for preset regions of the optical flow map generated in step 54; and a step 56 of judging that a moving object exists when, in step 55, a region whose optical flow frequency is equal to or greater than the reference frequency exists outside the region corresponding to the vehicle state provided in step 51.
In addition, there is provided a method for detecting a moving object in a captured image, characterized in that, in step 56, the stopped state is set as the region of the optical flow map in which the optical flow magnitude is less than the reference pixel value, the forward state is set as the region of little angular change defined by a preset minimum reference angle range, and the backward state is set as the region of large angular change defined by a preset maximum reference angle range, whereby the region corresponding to the vehicle state is determined.
According to the present invention, surrounding images of the vehicle are compared and a moving object is detected by verifying the same object on the basis of the optical flow of a change-occurring object, so that moving objects around the vehicle are recognized more accurately and the corresponding dangerous state can be signalled correctly.
Furthermore, according to the present invention, the current state of the vehicle is determined from the optical flow distribution regions of an optical flow map, and a moving object corresponding to the state of the vehicle is determined from the optical flow distribution regions of a preset region of interest, so that moving objects around the vehicle can be recognized easily regardless of the state of the vehicle.
Fig. 1 is a view showing the schematic configuration of a vehicle boarding and alighting accident prevention system according to the present invention.
Fig. 2 is a view showing the internal configuration of the control device 400 of Fig. 1, separated by function.
Fig. 3 is a view for explaining the feature point detection operation of the feature point detection module 410 of Fig. 2.
Fig. 4 is a view for explaining the optical flow vector generation operation of the optical flow vector generation module 420 of Fig. 2.
Fig. 5 is a view illustrating an optical flow map generated by the optical flow map generation module 430 of Fig. 2.
Figs. 6 to 9 are views for explaining the motion state determination operation of the moving object analysis module 440 of Fig. 2.
Figs. 10 and 11 are flowcharts for explaining the method of detecting a moving object in a captured image according to the present invention.
The description of the present invention is merely illustrative of structural and functional embodiments, and the scope of the present invention should not be construed as being limited by the embodiments described herein. That is, since the embodiments may be modified in various ways and may take various forms, the scope of the present invention should be understood to include equivalents capable of realizing the technical idea. Moreover, the objects and effects presented herein do not mean that a specific embodiment must include all of them or only such effects, and the scope of the present invention should not be understood as being limited thereby.
Hereinafter, a method for detecting a moving object in a captured image and a vehicle boarding and alighting accident prevention system using the same according to embodiments of the present invention will be described in detail with reference to the accompanying drawings.
Fig. 1 is a view showing the schematic configuration of a vehicle boarding and alighting accident prevention system according to the present invention.
As shown in Fig. 1, the vehicle boarding and alighting accident prevention system according to the present invention comprises a photographing device 100, an alarm device 200, a data memory 300, and a control device 400. Although not shown, the photographing device 100, the alarm device 200, the data memory 300, and the control device 400 are installed at predetermined positions on the vehicle.
The photographing device 100 is installed on the vehicle to capture the outer surroundings of the vehicle and may be implemented as a camera. The photographing device 100 is installed at a suitable position on the outside of the vehicle, for example on a side mirror or above a door, so that it can capture the side of the vehicle door. The imaging area of the photographing device 100 is preferably set so as to include a preset region of interest (ROI), that is, a danger zone. The danger zone may be set to an appropriate area in consideration of the vehicle type, the boarding and alighting environment, and the like, and a plurality of different zones may be set depending on the installation position of the photographing device 100. For example, the danger zone may be set to an area including the vehicle door and the front and rear wheels.
The alarm device 200 outputs a warning sound or a voice announcement for accident prevention on the basis of the information provided by the control device 400, and may be implemented, for example, as a buzzer or a speaker. The alarm device 200 may be installed inside or outside the vehicle. Installation inside the vehicle makes the accident-prevention alert known to the driver and to passengers inside the vehicle, while installation outside the vehicle makes it known to passengers getting on or off through the vehicle door.
The data memory 300 stores pixel position information of the region of interest in the captured image and various reference information for moving object analysis. The reference information includes the effective reference magnitude used to select the valid optical flow vectors applied to motion analysis, the reference frequency used to judge a moving object, and the minimum reference angle range, the maximum reference angle range, and the minimum reference pixel range used to determine the motion state regions. Here, the reference angle ranges are used to determine whether the vehicle is moving forward or backward, and the reference pixel range is used to determine whether the vehicle is stopped.
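By way of illustration only, the reference information held in the data memory might be grouped as in the following Python sketch. The field names and the numerical values are assumptions taken from the examples given later in this description (30 pixels, 1 pixel, ±10°, 170°-180°); the reference frequency value is a placeholder and is not specified by the invention.

```python
# Hypothetical grouping of the reference information described above.
# All names and numbers are illustrative, not values defined by the invention.
REFERENCE_INFO = {
    "effective_reference_magnitude_px": 30.0,   # limit for valid optical flow vectors
    "reference_frequency": 50,                  # placeholder bin count for a moving object
    "min_reference_angle_range_deg": (-10.0, 10.0),                      # forward-state region
    "max_reference_angle_range_deg": ((-180.0, -170.0), (170.0, 180.0)), # backward-state region
    "min_reference_pixel_range_px": 1.0,        # stop-state region (magnitude below this)
}
```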
The control device 400 determines whether there is a moving object around the vehicle door, regardless of the movement of the vehicle, on the basis of the captured images provided by the photographing device 100, and outputs corresponding alarm information through the alarm device 200 according to the result of that determination.
Fig. 2 is a view showing the internal configuration of the control device 400 of Fig. 1, separated by function.
As shown in Fig. 2, the control device 400 comprises a feature point detection module 410, an optical flow vector generation module 420, an optical flow map generation module 430, and a moving object analysis module 440.
The feature point detection module 410 detects feature points in the current image frame provided by the photographing device 100. As shown in Fig. 3(A), the feature point detection module 410 may detect corner feature points of every object O in the image frame using a known Harris corner detector. To increase the detection speed, the image may be downscaled before the feature points are detected. Fig. 3(B) illustrates a captured image on which the feature points are marked; the red circles denote the feature points.
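A minimal sketch of this step in Python with OpenCV is given below. The patent only names the Harris corner detector and the downscaling for speed; the downscale factor, the detector parameters, and the function name are illustrative assumptions.

```python
import cv2
import numpy as np

def detect_corner_features(frame_bgr, scale=0.5, max_corners=500):
    """Detect corner feature points in a downscaled grayscale frame.

    The 0.5 downscale factor and the detector parameters are illustrative;
    the description only states that a Harris corner detector is used and
    that the image may be reduced to speed up detection.
    """
    small = cv2.resize(frame_bgr, None, fx=scale, fy=scale)
    gray = cv2.cvtColor(small, cv2.COLOR_BGR2GRAY)
    # goodFeaturesToTrack with useHarrisDetector=True applies the Harris response.
    corners = cv2.goodFeaturesToTrack(
        gray, maxCorners=max_corners, qualityLevel=0.01,
        minDistance=7, useHarrisDetector=True, k=0.04)
    if corners is None:
        return np.empty((0, 1, 2), dtype=np.float32)
    # Map the detected coordinates back to the original image scale.
    return corners / scale
```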
The optical flow vector generation module 420 analyzes the positional relationship of the corner feature points of the objects detected by the feature point detection module 410 to obtain optical flow information corresponding to the direction of movement, and generates optical flow vectors on that basis.
That is, the optical flow vector generation module 420 estimates, in a previous or subsequent image frame, the points corresponding to the feature points of the current image frame using the pyramidal Lucas-Kanade method.
In addition, as shown in Fig. 4(A), the optical flow vector generation module 420 detects, for an object appearing in two frames, for example the object O2 in the current frame and the object O1 in the previous frame, the change in position of each pair of corresponding feature points, and obtains an optical flow whose direction of movement corresponds to that change in position. The optical flow of a stationary object is the zero vector.
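Continuing the illustrative OpenCV-based sketch, the estimation between the two frames could look as follows. The use of cv2.calcOpticalFlowPyrLK follows the pyramidal Lucas-Kanade method named above; the window size and pyramid depth are assumptions.

```python
def compute_optical_flow(prev_gray, curr_gray, curr_points):
    """Track the current-frame corner points back into the previous frame
    with pyramidal Lucas-Kanade and return (start, end) point pairs.

    Window size and pyramid depth are illustrative parameters.
    """
    prev_points, status, _err = cv2.calcOpticalFlowPyrLK(
        curr_gray, prev_gray, curr_points.astype(np.float32), None,
        winSize=(21, 21), maxLevel=3)
    ok = status.ravel() == 1
    # The flow runs from the previous-frame position to the current one;
    # a stationary feature yields a (near-)zero vector.
    return prev_points[ok].reshape(-1, 2), curr_points[ok].reshape(-1, 2)
```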
In addition, the optical flow vector generation module 420 converts the obtained optical flow into optical flow in a top-view image in order to remove the effects of radial and perspective distortion. A commonly used fisheye-lens camera outputs a captured image containing perspective distortion due to the camera pose and radial distortion due to the lens characteristics, as shown at ① in Fig. 4(B). The optical flow vector generation module 420 may therefore generate the optical flow after converting the captured image containing the distortion components of form ① into a top-view image of form ② using preset correction coefficients. Alternatively, only the optical flow information may be converted into the top-view coordinates. That is, the optical flow vector generation module 420 coordinate-transforms the start and end points of each optical flow using a top-view transformation algorithm with preset correction parameters, thereby generating new optical flow vectors OV in a top-view image such as a bird's eye view. The optical flow vectors OV may, for example, be generated as shown in Fig. 4(B).
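A sketch of transforming only the flow endpoints into the top view is shown below. A precomputed 3x3 bird's-eye-view homography H stands in for the "preset correction parameters" of the description; how H is obtained (camera calibration, chosen ground plane) is an assumption and not specified here.

```python
def flow_to_top_view(start_pts, end_pts, H):
    """Warp the start and end points of each optical flow into the
    top-view (bird's eye view) plane using a 3x3 homography H.

    H is assumed to have been derived beforehand from the correction
    parameters mentioned in the description.
    """
    def warp(pts):
        pts = pts.reshape(-1, 1, 2).astype(np.float32)
        return cv2.perspectiveTransform(pts, H).reshape(-1, 2)
    return warp(start_pts), warp(end_pts)
```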
Since the optical flow computation may occasionally diverge, the optical flow vector generation module 420 preferably limits the optical flow information to the range in which physical movement can actually occur, in order to remove such values. For example, among the optical flow vectors generated for all feature points including the change-occurring objects, the optical flow vector generation module 420 may set as valid optical flow vectors only those whose magnitude, i.e. pixel displacement, is no greater than a preset effective reference magnitude of, for example, 30 pixels.
The optical flow map generation module 430 generates the optical flow maps used to determine the state of the vehicle and whether objects are moving. The first optical flow map, used to determine the state of the vehicle, is generated from the optical flow vectors of the entire image frame, and the second optical flow map, used to determine whether an object is moving, is generated from the optical flow vectors of the preset region of interest.
That is, the optical flow map generation module 430 discretizes the valid optical flow vectors from the optical flow vector generation module 420 with respect to magnitude and angle, and generates the first and second optical flow maps through the optical flow information calculation process shown in Fig. 5(A) or Fig. 5(B). For example, for the three optical flows at (1,0), (1,2), and (1,3), the optical flow map generation module 430 generates a table of magnitude and angle per optical flow position, as shown in Fig. 5(A), and from it generates a graph-type optical flow distribution over magnitude and angle, as shown in Fig. 5(B). In Figs. 5(A) and 5(B) the data are discretized to a magnitude step of 0.5 pixels and an angle step of 1°. That is, the actual magnitude 2.236 of the optical flow at (1,2) is discretized in 0.5-pixel steps and set to 4, and its actual angle 63.435° is discretized in 1° steps and set to 63.
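A sketch of building such a magnitude/angle frequency map from the top-view flow vectors follows. The 0.5-pixel and 1° bin sizes follow the Fig. 5 example and the 30-pixel limit follows the validity filter above; the map layout (magnitude rows, angle columns) is an assumption reused by the later sketches.

```python
def build_flow_map(start_pts, end_pts, max_valid_px=30.0,
                   mag_step=0.5, ang_step=1.0):
    """Discretize optical flow vectors into a 2-D frequency map
    (magnitude bins x angle bins), discarding vectors larger than
    the effective reference magnitude.
    """
    flow = end_pts - start_pts
    mag = np.hypot(flow[:, 0], flow[:, 1])
    ang = np.degrees(np.arctan2(flow[:, 1], flow[:, 0]))      # -180..180 deg
    valid = mag <= max_valid_px
    mag_bins = (mag[valid] / mag_step).astype(int)
    ang_bins = ((ang[valid] + 180.0) / ang_step).astype(int)  # shift to 0..360
    n_mag = int(max_valid_px / mag_step) + 1
    n_ang = int(360.0 / ang_step) + 1
    flow_map = np.zeros((n_mag, n_ang), dtype=int)
    np.add.at(flow_map, (mag_bins, ang_bins), 1)              # frequency count per bin
    return flow_map
```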
The moving object analysis module 440 determines the motion state of the vehicle on the basis of the optical flow frequency of each preset motion state region in the first optical flow map generated by the optical flow map generation module 430, and determines whether a moving object exists using the second optical flow map.
The motion analysis module 440 first analyzes the first optical flow map to determine the current motion state of the vehicle. That is, as shown in Fig. 6, the motion analysis module 440 calculates the optical flow frequency of each state region and classifies the state as stopped (STOP), forward (FORWARD), or backward (BACKWARD) according to the distribution region whose frequency is greatest. In the stopped state, most feature point positions do not change while the vehicle is stationary, so the optical flow is the zero vector; the stopped-state region can therefore be set as a region of comparatively little change, for example the region in which the magnitude of the optical flow vector is less than 1 pixel. The forward-state region is a region of comparatively little angular change, i.e. the preset minimum reference angle range, for example the angle range of -10° to +10°. The backward-state region is a region of comparatively large angular change, i.e. the maximum reference angle range, for example the angle ranges of -170° to -180° and +170° to +180°. Thus, when the optical flow frequency in the stopped-state region of the current captured image is the highest, the motion analysis module 440 judges that the vehicle is currently stopped.
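A sketch of this state classification over the whole-frame flow map built above is shown below. The 1-pixel and ±10° / 170°-180° boundaries follow the example values in the preceding paragraph, and the bin arithmetic assumes the map layout of the earlier sketch.

```python
def classify_vehicle_state(flow_map, mag_step=0.5, ang_step=1.0):
    """Classify STOP / FORWARD / BACKWARD from the whole-frame flow map
    by comparing the flow frequency accumulated in each state region.
    """
    def ang_idx(deg):                        # angle in [-180, 180] -> column index
        return int((deg + 180.0) / ang_step)

    stop_rows = int(1.0 / mag_step)          # magnitude < 1 px
    stop = flow_map[:stop_rows, :].sum()
    moving = flow_map[stop_rows:, :]         # exclude near-zero magnitudes
    forward = moving[:, ang_idx(-10):ang_idx(10) + 1].sum()            # -10..+10 deg
    backward = (moving[:, :ang_idx(-170) + 1].sum()                    # -180..-170 deg
                + moving[:, ang_idx(170):].sum())                      # +170..+180 deg
    return max((stop, "STOP"), (forward, "FORWARD"), (backward, "BACKWARD"))[1]
```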
In addition, the motion analysis module 440 judges that a moving object exists when, in the second optical flow map, there is a region outside the region corresponding to the current vehicle state whose local maximum optical flow frequency is equal to or greater than the preset reference frequency. For example, when the vehicle is stopped, a moving object is judged to exist if the maximum frequency of the second optical flow map lies outside the stopped-state region.
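A sketch of this check over the region-of-interest map follows; the masking indices reuse the layout assumptions of the earlier sketches. The helper accepts one or more vehicle-state regions to exclude, which also lets it express the combined check used in the Fig. 9 / Fig. 11 embodiment described later.

```python
def detect_moving_object(roi_flow_map, excluded_states, reference_frequency,
                         mag_step=0.5, ang_step=1.0):
    """Return True when the peak frequency of the ROI flow map, taken
    outside the region(s) associated with the given vehicle state(s),
    reaches the reference frequency.
    """
    def ang_idx(deg):
        return int((deg + 180.0) / ang_step)

    masked = roi_flow_map.copy()
    for state in excluded_states:
        if state == "STOP":
            masked[:int(1.0 / mag_step), :] = 0               # blank the stop region
        elif state == "FORWARD":
            masked[:, ang_idx(-10):ang_idx(10) + 1] = 0       # blank the forward region
        elif state == "BACKWARD":
            masked[:, :ang_idx(-170) + 1] = 0                 # blank the backward region
            masked[:, ang_idx(170):] = 0
    return masked.max() >= reference_frequency
```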
In addition, the motion analysis module 440 is configured to provide the alarm device 200 with accident-prevention alarm information corresponding to the result of the moving object detection described above.
Meanwhile, as another embodiment of the present invention, when no moving object is detected while the vehicle is judged to be moving forward or backward, the motion analysis module 440 may take into account the case in which the vehicle and an object move in the same direction, and may judge that a moving object exists when the highest-frequency region of the first optical flow map and the highest-frequency region of the second optical flow map differ from each other. For example, as shown in Fig. 7, the motion analysis module 440 calculates a first frequency (Total) corresponding to the vehicle state from the first optical flow map and a second frequency (ROI) corresponding to the moving object from the second optical flow map, and judges that a moving object exists when the region with the highest second frequency lies outside the region with the highest first frequency. That is, in Fig. 6(b) the vehicle state region is X, and the moving object region is Y, the region in which the second frequency is highest outside the vehicle state region.
In addition, as another embodiment of the present invention, the motion analysis module 440 may be configured to judge that an object has become caught in the vehicle. In this case, an object caught in the vehicle moves together with the vehicle as it travels. Taking this into account, the motion analysis module 440 judges that a trapped object has occurred when, as shown in Fig. 8, optical flow of at least a certain frequency appears continuously for at least a certain time in the perpendicular directions, that is, in regions other than the region whose angle change lies in the maximum reference angle range and the region whose angle change lies in the minimum reference angle range.
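A sketch of the persistence check follows. The frame counter, the frequency threshold, and the reuse of the forward/backward angle bands as the excluded regions are assumptions consistent with the paragraph above; the invention only states "a certain frequency" and "a certain time".

```python
class TrappedObjectDetector:
    """Flag a trapped object when flow outside the forward/backward angle
    bands keeps exceeding a frequency threshold for several consecutive frames.
    """
    def __init__(self, min_frequency=20, min_frames=15, ang_step=1.0):
        self.min_frequency = min_frequency   # illustrative threshold
        self.min_frames = min_frames         # illustrative persistence length
        self.ang_step = ang_step
        self.count = 0

    def update(self, flow_map):
        def ang_idx(deg):
            return int((deg + 180.0) / self.ang_step)
        side = flow_map.copy()
        side[:, ang_idx(-10):ang_idx(10) + 1] = 0     # drop the forward band
        side[:, :ang_idx(-170) + 1] = 0               # drop the backward band
        side[:, ang_idx(170):] = 0
        self.count = self.count + 1 if side.max() >= self.min_frequency else 0
        return self.count >= self.min_frames          # True -> trapped object
```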
In addition, as yet another embodiment, whereas the embodiment above determines the state of the vehicle using the optical flow map of the entire image frame, the state of the vehicle may instead be provided by a device of the vehicle itself.
That is, in the present invention, the current state of the vehicle may be received through vehicle state information providing means (not shown) provided in the vehicle. In that case, the optical flow map may be generated, as described above, only for the preset region of interest of the image frame in order to determine moving objects. Since the process of generating and analyzing an optical flow map for determining the vehicle state is then omitted, the processing speed of the same system can be improved.
In addition, as yet another embodiment of the present invention, moving objects in the region of interest around the vehicle may be judged first, using the second optical flow map, and when a moving object exists, an alarm for the moving object may be issued without determining the state of the vehicle, which further improves the processing speed of the same system.
That is, in this embodiment an alarm for a moving object is issued when, as shown in Fig. 9, a moving object exists in the second optical flow map in a region other than the regions corresponding to the vehicle states (stopped, forward, backward). When no moving object exists in the second optical flow map, the vehicle state is determined from the first optical flow map, and it is then checked whether a moving object exists with respect to the region corresponding to that vehicle state, whereupon an alarm for a moving object is issued. The current vehicle state information may also be received through the vehicle state information providing means (not shown) of the vehicle, and when no moving object exists for the current vehicle state, a trapped object may be judged and an alarm issued.
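For illustration, the decision order just described might be combined with the earlier sketches as follows. The function name, the state labels, and the alarm strings are hypothetical; the helper functions are the illustrative sketches above, not functions defined by the invention.

```python
def check_frame(total_flow_map, roi_flow_map, reference_frequency,
                trapped_detector):
    """Decision order of this embodiment: ROI map first, vehicle state
    only when needed, then the trapped-object persistence check.
    """
    # 1) A peak outside all three vehicle-state regions -> immediate alarm.
    if detect_moving_object(roi_flow_map, ("STOP", "FORWARD", "BACKWARD"),
                            reference_frequency):
        return "ALARM_MOVING_OBJECT"
    # 2) Otherwise determine the vehicle state and re-check against that region only.
    state = classify_vehicle_state(total_flow_map)
    if detect_moving_object(roi_flow_map, (state,), reference_frequency):
        return "ALARM_MOVING_OBJECT"
    # 3) Finally look for an object caught in the vehicle.
    if trapped_detector.update(roi_flow_map):
        return "ALARM_TRAPPED_OBJECT"
    return "NO_ALARM"
```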
Next, the method of recognizing a moving object in a captured image according to the present invention will be described with reference to Fig. 10.
First, the photographing device 100 provides captured images to the control device 400 at a fixed period. The photographing device 100 may be installed above the vehicle door such that its imaging area includes the danger zone around the vehicle.
In this state, when a captured image is received from the photographing device 100, the control device 400 detects feature points of objects in the current image frame using the Harris corner detector (ST10).
Next, the control device 400 estimates, in the previous image frame, the feature points corresponding to the corner feature points of the current image frame, and obtains optical flow information corresponding to the change in position between the corner feature points of the two image frames (ST20). That is, as shown in Fig. 4(A), the control device 400 generates, for each object, optical flow information corresponding to the change in position between the corner feature points of the current image frame and the corresponding corner feature points of the previous image frame.
The control device 400 converts the image containing the optical flow information into a top-view image and generates optical flow vectors corresponding to the optical flow information in the top-view image (ST30, ST40). That is, from the optical flow information of each object, the control device 400 generates an optical flow vector consisting of a magnitude corresponding to the movement distance and angle information corresponding to the direction of movement. At this time, the control device 400 may set as valid optical flow vectors only those optical flow vectors whose magnitude is less than a preset reference magnitude, for example 30 pixels; that is, optical flow vectors of at least the reference pixel magnitude are cleared from the top-view image. The top-view optical flow vectors may also be generated by coordinate-transforming only the optical flow vector information into the top view.
The control device 400 generates, from the valid optical flow vectors, optical flow maps over the magnitude and angle of each optical flow vector (ST50). That is, the control device 400 generates the optical flow table shown in Fig. 5(A) and the corresponding graph-type optical flow distribution shown in Fig. 5(B). At this time, the control device 400 generates a first optical flow map consisting of per-region optical flow frequencies from the optical flow vectors of the entire image frame, and a second optical flow map consisting of per-region optical flow frequencies from the optical flow vectors of the preset region of interest in the image frame.
Thereafter, the control device 400 analyzes the first optical flow map, calculates the optical flow frequency of each preset vehicle state region, and determines the current state of the vehicle on that basis (ST60). That is, as shown in Fig. 6, the control device 400 determines the stopped, forward, and backward states of the vehicle from the optical flow frequencies of the entire image.
In addition, the control device 400 checks the optical flow frequencies of the second optical flow map for the preset region of interest and determines whether there is a moving object (ST70). That is, using only the optical flow vectors present in the preset region of interest, the control device 400 judges that a moving object exists when the maximum frequency in the regions other than the current vehicle state region is equal to or greater than the reference frequency.
At this time, the control device 400 additionally determines whether there is a moving object moving in the same direction as the vehicle, or whether there is an object caught in the vehicle.
Meanwhile, in Fig. 10 the state of the vehicle is determined from the first optical flow map and a moving object is then determined using the second optical flow map; however, as shown in Fig. 11, when no region whose optical flow frequency calculated from the second optical flow map is equal to or greater than the reference frequency exists outside the vehicle state regions, it is also possible to determine whether a moving object exists in the moving object determination region corresponding to the vehicle state obtained from the first optical flow map.
That is, as shown in Fig. 11, with the first and second optical flow maps of Fig. 10 generated, the control device 400 uses the second optical flow map to determine whether a moving object exists in a region other than the vehicle state determination regions shown in Fig. 9 (ST110).
When it is determined in step ST110 that a moving object exists, the control device 400 outputs accident-prevention alarm information (ST120).
Meanwhile, when it is determined in step ST110 that no moving object exists, the control device 400 analyzes the current state of the vehicle using the first optical flow map (ST130).
Thereafter, the control device 400 re-checks whether a moving object exists using the first and second optical flow maps, according to the vehicle state analyzed in step ST130.
That is, when it is determined in step ST130 that the vehicle is currently stopped, the control device 400 outputs accident-prevention alarm information if a moving object exists in the second optical flow map in a region other than the vehicle stop region (Fig. 6(a)) (ST140).
In addition, when it is determined in step ST130 that the vehicle is currently moving forward, the control device 400 outputs accident-prevention alarm information if a moving object exists in the second optical flow map in a region other than the vehicle forward region (Fig. 6(b)) (ST150).
In addition, when it is determined in step ST130 that the vehicle is currently moving backward, the control device 400 outputs accident-prevention alarm information if a moving object exists in the second optical flow map in a region other than the vehicle backward region (Fig. 6(c)) (ST160).
Meanwhile, when no moving object exists in the regions other than the vehicle forward region or the vehicle backward region in steps ST150 and ST160, the control device 400 checks whether there is an object caught in the vehicle and, when a trapped object is confirmed, outputs accident-prevention alarm information (ST170). At this time, as shown in Fig. 8, the control device 400 judges that a trapped object has occurred when optical flow of at least a certain frequency appears continuously for at least a certain time in the perpendicular directions, that is, in regions other than the region whose angle change lies in the maximum reference angle range and the region whose angle change lies in the minimum reference angle range.
Therefore, according to the embodiments above, whether a moving object exists is detected by comparing captured images, and the optical flow vectors obtained from the movement of the moving objects in the captured images make it possible to detect moving objects around the vehicle more accurately, regardless of the state of the vehicle.
Although the foregoing description refers to preferred embodiments of the present invention, those skilled in the art will understand that the present invention may be variously modified and changed without departing from the spirit and scope of the invention as set out in the following claims.

Claims (20)

1. A vehicle boarding and alighting accident prevention system comprising: a photographing device installed on a vehicle to capture the outer surroundings of the vehicle;
    a notification device that outputs accident-prevention notification information; and
    a control device that detects feature points of objects in an image frame provided by the photographing device, generates optical flow vectors corresponding to the change in position of the feature points between the current image frame and a previous image frame, discretizes the optical flow vectors with respect to magnitude and angle, judges that a moving object exists when the maximum frequency in a region other than a preset vehicle state determination region is equal to or greater than a reference frequency, and controls the notification device to output corresponding alarm information.
2. The vehicle boarding and alighting accident prevention system according to claim 1, wherein the control device comprises:
    a feature point detection module that detects corner feature points of objects in the current image frame;
    an optical flow vector generation module that estimates the feature points of a previous or subsequent image frame corresponding to the corner feature points of the current image frame detected by the feature point detection module, detects an optical flow corresponding to the direction of movement on the basis of the positional relationship between the corner feature points of the two image frames, and generates optical flow vectors each comprising a magnitude corresponding to the movement distance of the object and an angle corresponding to its direction of movement;
    an optical flow map generation module that generates, from the optical flow vectors provided by the optical flow vector generation module, an optical flow map having a magnitude axis and an angle axis, and calculates the optical flow frequency for preset regions of the optical flow map; and
    a motion analysis module that judges that a moving object exists when the maximum optical flow frequency in a region of the optical flow map generated by the optical flow map generation module, other than a preset state determination region, is equal to or greater than the reference frequency.
3. The vehicle boarding and alighting accident prevention system according to claim 2,
    wherein the optical flow vector generation module performs a top-view coordinate transformation on the image containing the optical flow information and generates the optical flow vectors from the top-view image.
4. The vehicle boarding and alighting accident prevention system according to claim 2 or 3,
    wherein the optical flow vector generation module sets as valid optical flow vectors only those optical flow vectors whose magnitude is less than a preset effective reference pixel value.
5. The vehicle boarding and alighting accident prevention system according to claim 2,
    wherein the motion analysis module determines whether the vehicle is stopped, moving forward, or moving backward on the basis of the optical flow region with the maximum frequency in the optical flow map,
    and, in the optical flow map having a magnitude axis and an angle axis, sets a region in which the optical flow magnitude is less than a minimum reference pixel value as the stopped-state region, sets a region of little angular change as the forward-state region using a minimum reference angle range, and sets a region of large angular change as the backward-state region using a maximum reference angle range, thereby determining the motion state of the vehicle.
6. The vehicle boarding and alighting accident prevention system according to claim 2,
    wherein the optical flow map generation module generates a first optical flow map consisting of per-region optical flow frequencies from the optical flow vectors of the entire image frame, and a second optical flow map consisting of per-region optical flow frequencies from the optical flow vectors of a preset region of interest in the image frame, and
    the motion analysis module analyzes the first optical flow map to determine whether the vehicle is stopped, moving forward, or moving backward, and analyzes the second optical flow map to determine whether a moving object exists.
7. The vehicle boarding and alighting accident prevention system according to claim 6,
    wherein the motion analysis module, when no moving object has been detected, judges that a moving object exists if the highest-frequency region of the first optical flow map and the highest-frequency region of the second optical flow map differ from each other.
8. The vehicle boarding and alighting accident prevention system according to any one of claims 2, 5, and 6,
    wherein the motion analysis module judges that a trapped object has occurred when optical flow of at least a certain frequency is continuously detected in regions of the optical flow map other than the region whose angle change lies in the maximum reference angle range and the region whose angle change lies in the minimum reference angle range.
9. A vehicle boarding and alighting accident prevention system comprising: a photographing device installed on a vehicle to capture the outer surroundings of the vehicle;
    vehicle state information providing means for providing the current state of the vehicle;
    a notification device that outputs accident-prevention notification information; and
    a control device that detects feature points of objects in a region of interest of a captured image provided by the photographing device, generates optical flow vectors corresponding to the change in position of the feature points in the regions of interest of the current image frame and a previous image frame, discretizes the optical flow vectors with respect to magnitude and angle, judges that a moving object exists when the maximum frequency in a region other than the vehicle state region corresponding to the vehicle state provided by the vehicle state information providing means is equal to or greater than a reference frequency, and controls the notification device to output corresponding alarm information.
10. The vehicle boarding and alighting accident prevention system according to claim 9, wherein the control device comprises:
    a feature point detection module that detects corner feature points of objects in the current image frame;
    an optical flow vector generation module that estimates the feature points of a previous or subsequent image frame corresponding to the corner feature points of the current image frame detected by the feature point detection module, detects an optical flow corresponding to the direction of movement on the basis of the positional relationship between the corner feature points of the two image frames, and generates optical flow vectors each comprising a magnitude corresponding to the movement distance of the object and an angle corresponding to its direction of movement;
    an optical flow map generation module that generates, from the optical flow vectors provided by the optical flow vector generation module, an optical flow map having a magnitude axis and an angle axis, and calculates the optical flow frequency for preset regions of the optical flow map; and
    a motion analysis module that judges that a moving object exists when the maximum optical flow frequency in a region of the optical flow map generated by the optical flow map generation module, other than the region corresponding to the current state of the vehicle, is equal to or greater than the reference frequency.
11. A method for detecting a moving object in a captured image, comprising: a first step of detecting feature points of a change-occurring object in a region of interest of a captured image provided by the photographing device;
    a second step of estimating the feature points of a previous image frame on the basis of the feature points of the current image frame detected in the first step, obtaining an optical flow corresponding to the direction of movement on the basis of the positional relationship between the feature points of the two image frames, and generating, from the obtained optical flow, an optical flow vector comprising a magnitude corresponding to the movement distance of the object and an angle corresponding to its direction of movement;
    a third step of generating an optical flow map having a magnitude axis and an angle axis from the optical flow vectors provided in the second step;
    a fourth step of calculating the optical flow frequency for preset regions of the optical flow map generated in the third step; and
    a fifth step of detecting a moving object on the basis of the position of the region in which the optical flow frequency calculated in the fourth step is greatest.
12. The method for detecting a moving object in a captured image according to claim 11,
    wherein, in the second step, a top-view coordinate transformation is performed on the image containing the optical flow information and a top-view optical flow vector is generated from the top-view image.
13. The method for detecting a moving object in a captured image according to claim 11,
    wherein, in the second step, only optical flow vectors whose magnitude is less than a preset effective reference pixel value are set as valid optical flow vectors and applied to the subsequent moving object detection.
14. The method for detecting a moving object in a captured image according to claim 11,
    wherein the third step generates a first optical flow map from the optical flow vectors of the entire image frame and generates a second optical flow map from the optical flow vectors of a preset region of interest in the image frame,
    the fourth step calculates the optical flow frequency for preset regions of each of the first and second optical flow maps, and
    the fifth step determines whether the vehicle is stopped, moving forward, or moving backward on the basis of the region with the greatest optical flow frequency calculated from the first optical flow map, and judges that a moving object exists when a region whose optical flow frequency calculated from the second optical flow map is equal to or greater than the reference frequency exists outside the vehicle state region.
15. The method for detecting a moving object in a captured image according to claim 14,
    wherein, in the fifth step, when no region whose optical flow frequency calculated from the second optical flow map is equal to or greater than the reference frequency exists outside the vehicle state region, whether a moving object exists is determined for the moving object determination region corresponding to the vehicle state obtained from the first optical flow map.
  16. The method of any one of claims 11, 14, and 15, wherein
    the fifth step determines the current state of the vehicle by setting, in the first optical flow map composed of the magnitude axis and the angle axis, a region whose optical flow magnitude is less than a reference pixel value as a stopped-state region, a region with little angular variation, within a minimum reference angle range, as a forward-state region, and a region with large angular variation, within a maximum reference angle range, as a reverse-state region.
  17. The method of any one of claims 11, 14, and 15, wherein
    the fifth step, when no moving object has been detected, determines that a moving object exists if the highest-frequency region of the first optical flow map and the highest-frequency region of the second optical flow map differ from each other.
  18. The method of any one of claims 11, 14, and 15, wherein
    the fifth step determines that a pinched object has occurred when optical flow with at least a predetermined frequency is continuously detected in regions of the optical flow map other than the region whose angular variation lies in the maximum reference angle range and the region whose angular variation lies in the minimum reference angle range.
  19. A method for detecting a moving object in a photographed image, comprising:
    a fifty-first step of receiving the current state of a vehicle from the vehicle;
    a fifty-second step of detecting feature points of an object in a region of interest of a captured image provided by a camera installed on the vehicle;
    a fifty-third step of estimating the region-of-interest feature points of the previous image frame based on the region-of-interest feature points of the current image frame detected in the fifty-second step, obtaining an optical flow corresponding to the direction of movement based on the positional relationship between the region-of-interest feature points of the two image frames, and generating, from the obtained optical flow, an optical flow vector comprising a magnitude corresponding to the movement distance of the object and an angle corresponding to the direction of movement;
    a fifty-fourth step of generating an optical flow map composed of an optical flow magnitude axis and an angle axis, based on the optical flow vectors provided in the fifty-third step;
    a fifty-fifth step of calculating an optical flow frequency for each predetermined region of the optical flow map generated in the fifty-fourth step; and
    a fifty-sixth step of determining that a moving object exists when, in the fifty-fifth step, a region whose optical flow frequency is equal to or greater than a reference frequency exists outside the region corresponding to the vehicle state provided in the fifty-first step.
  20. The method of claim 19, wherein
    in the fifty-sixth step, the region corresponding to the vehicle state is determined by setting the stopped state to the region of the optical flow map whose optical flow magnitude is less than a reference pixel value, the forward state to a region with little angular variation within a preset minimum reference angle range, and the reverse state to a region with large angular variation within a preset maximum reference angle range.
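
Claims 11-13 and 19 describe the same core pipeline: track feature points from the current frame back into the previous frame, turn each match into an optical flow vector (magnitude, angle), keep only vectors below a validity threshold, accumulate them into a magnitude-angle frequency map, and take the highest-frequency region as the detection candidate. The sketch below, in Python with OpenCV and NumPy, is one plausible reading of that pipeline; the function name build_flow_map, the corner-detector settings, the bin counts, and the 20-pixel validity threshold are illustrative assumptions, not values taken from the patent.

```python
# Illustrative sketch only: one plausible realization of the optical-flow-map
# construction described in claims 11-13. All parameter values are assumptions.
import cv2
import numpy as np

def build_flow_map(prev_gray, curr_gray, max_valid_px=20.0,
                   mag_bins=16, ang_bins=18, max_mag=40.0):
    # Detect feature points in the current frame (claim 11, first step).
    curr_pts = cv2.goodFeaturesToTrack(curr_gray, maxCorners=400,
                                       qualityLevel=0.01, minDistance=7)
    if curr_pts is None:
        return None, None
    # Estimate where those points were in the previous frame
    # (current -> previous tracking), then form the flow vectors.
    prev_pts, status, _ = cv2.calcOpticalFlowPyrLK(curr_gray, prev_gray,
                                                   curr_pts, None)
    good = status.ravel() == 1
    flow = curr_pts[good].reshape(-1, 2) - prev_pts[good].reshape(-1, 2)
    mag = np.hypot(flow[:, 0], flow[:, 1])                        # distance moved
    ang = np.degrees(np.arctan2(flow[:, 1], flow[:, 0])) % 360.0  # direction
    # Claim 13: keep only vectors whose magnitude is below the validity limit.
    valid = mag < max_valid_px
    mag, ang = mag[valid], ang[valid]
    # The "optical flow map" as a 2-D frequency grid over a magnitude axis
    # and an angle axis (claim 11, third and fourth steps).
    flow_map, _, _ = np.histogram2d(mag, ang,
                                    bins=[mag_bins, ang_bins],
                                    range=[[0.0, max_mag], [0.0, 360.0]])
    # Fifth step: the candidate region is the bin with the highest frequency.
    peak = np.unravel_index(np.argmax(flow_map), flow_map.shape)
    return flow_map, peak
```

In this reading, the "predetermined regions" of the claims correspond to bins of the 2-D histogram; the patent itself only requires some preset partition of the magnitude-angle plane.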
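Claims 14-17 and 20 then split the analysis into a full-frame map, which classifies the vehicle as stopped, moving forward, or reversing, and a region-of-interest map, which flags a moving object when a sufficiently frequent flow region falls outside the bins attributable to the vehicle's own motion. A hedged sketch of that decision logic, assuming the two maps come from build_flow_map above and treating every threshold and angle range as a placeholder:

```python
# Decision logic sketched from claims 14-17 and 20. All thresholds, angle
# ranges, and the (magnitude, angle) bin geometry are placeholder assumptions.
import numpy as np

STOP_MAG_BINS = 2             # magnitude bins below the "reference pixel" value
FORWARD_ANG = (80.0, 100.0)   # assumed minimum reference angle range
REVERSE_ANG = (260.0, 280.0)  # assumed maximum reference angle range

def bin_angle(j, ang_bins=18):
    return (j + 0.5) * (360.0 / ang_bins)   # center angle of column j

def vehicle_state(full_map):
    # Claims 16/20: classify the vehicle from the full-frame map's peak bin.
    i, j = np.unravel_index(np.argmax(full_map), full_map.shape)
    if i < STOP_MAG_BINS:
        return "stopped", (i, j)
    ang = bin_angle(j, full_map.shape[1])
    if FORWARD_ANG[0] <= ang <= FORWARD_ANG[1]:
        return "forward", (i, j)
    if REVERSE_ANG[0] <= ang <= REVERSE_ANG[1]:
        return "reverse", (i, j)
    return "unknown", (i, j)

def moving_object_present(full_map, roi_map, ref_freq=8):
    state, state_bin = vehicle_state(full_map)
    # Claims 14/19: a moving object is declared when a sufficiently frequent
    # ROI bin lies outside the bin attributed to the vehicle's own motion.
    hot_bins = [tuple(b) for b in np.argwhere(roi_map >= ref_freq)]
    if any(b != state_bin for b in hot_bins):
        return True, state
    # Claim 17 fallback: differing peak bins in the two maps also count.
    roi_peak = np.unravel_index(np.argmax(roi_map), roi_map.shape)
    if roi_map.max() > 0 and roi_peak != state_bin:
        return True, state
    return False, state
```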
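Claim 18 adds a pinched-object check: flow that keeps appearing, above a frequency floor, outside both the forward and reverse angle ranges across consecutive frames. A minimal sketch under the same assumed bin layout; the column indices, frame count, and frequency floor are again placeholders:

```python
# Claim 18 sketch: declare a pinched object when flow outside the forward and
# reverse angle columns keeps exceeding a frequency floor for several frames.
import numpy as np

def make_pinch_detector(ang_bins=18, ref_freq=5, hold_frames=15):
    fwd_cols = range(4, 6)    # columns assumed to cover the forward angle range
    rev_cols = range(13, 15)  # columns assumed to cover the reverse angle range
    excluded = set(fwd_cols) | set(rev_cols)
    streak = {"n": 0}

    def update(flow_map):
        other_cols = [j for j in range(ang_bins) if j not in excluded]
        # Frequency of flow attributable to neither forward nor reverse motion.
        stray = flow_map[:, other_cols].sum()
        streak["n"] = streak["n"] + 1 if stray >= ref_freq else 0
        return streak["n"] >= hold_frames   # sustained stray flow -> pinched object

    return update
```
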
PCT/KR2015/013119 2015-11-05 2015-12-03 Method for detecting moving object in photographed image, and boarding and alighting accident prevention system using same WO2017078213A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR10-2015-0154989 2015-11-05
KR1020150154989A KR101697520B1 (en) 2015-11-05 2015-11-05 Method for cognition of movement object in photographing image and system for prevention of vehicle boarding accident

Publications (1)

Publication Number Publication Date
WO2017078213A1 true WO2017078213A1 (en) 2017-05-11

Family

ID=58151700

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2015/013119 WO2017078213A1 (en) 2015-11-05 2015-12-03 Method for detecting moving object in photographed image, and boarding and alighting accident prevention system using same

Country Status (2)

Country Link
KR (1) KR101697520B1 (en)
WO (1) WO2017078213A1 (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102471006B1 (en) * 2018-01-23 2022-11-25 현대자동차주식회사 Apparatus and method for passenger safety using the camera door
KR20200037657A (en) * 2018-10-01 2020-04-09 삼성전자주식회사 Refrigerator, server and method for recognizing object thereof
KR102233606B1 (en) * 2019-02-21 2021-03-30 한국과학기술원 Image processing method and apparatus therefor
CN110598668A (en) * 2019-09-20 2019-12-20 深圳市豪恩汽车电子装备股份有限公司 Motor vehicle blind area detection method and system

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2001239832A (en) * 2000-02-29 2001-09-04 Matsushita Electric Ind Co Ltd Device for warning in opening vehicle door
KR101283965B1 (en) * 2010-12-23 2013-07-09 전자부품연구원 Adaptive Accident Detecting System And Method thereof
KR101270602B1 (en) 2011-05-23 2013-06-03 아진산업(주) Method for providing around view of vehicle
JP5442701B2 (en) 2011-11-28 2014-03-12 シャープ株式会社 Image processing method, image processing apparatus, image forming apparatus including the same, image reading apparatus, program, and recording medium

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20050096865A (en) * 2005-08-24 2005-10-06 한민홍 Development of public bus safety system for disembarking passengers
KR20080073933A (en) * 2007-02-07 2008-08-12 삼성전자주식회사 Object tracking method and apparatus, and object pose information calculating method and apparatus
KR20140108828A (en) * 2013-02-28 2014-09-15 한국전자통신연구원 Apparatus and method of camera tracking
KR101487165B1 (en) * 2013-07-26 2015-01-28 아진산업(주) Safety system for getting on/off vihicle of passenger

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
PARK, S. R. ET AL.: "Development of a Boarding Safety Assistance System Using Moving Object Detection Algorithm", PROCEEDINGS OF KSPE 2015 SPRING CONFERENCE, 15 May 2015 (2015-05-15) *

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107316275A (en) * 2017-06-08 2017-11-03 宁波永新光学股份有限公司 A kind of large scale Microscopic Image Mosaicing algorithm of light stream auxiliary
GB2572006A (en) * 2018-03-16 2019-09-18 Continental Automotive Gmbh Vehicle alighting assistance device
WO2019190171A1 (en) * 2018-03-30 2019-10-03 삼성전자주식회사 Electronic device and control method therefor
US11430137B2 (en) 2018-03-30 2022-08-30 Samsung Electronics Co., Ltd. Electronic device and control method therefor
CN110723096A (en) * 2018-07-17 2020-01-24 通用汽车环球科技运作有限责任公司 System and method for detecting clamped flexible material
CN110723096B (en) * 2018-07-17 2022-05-03 通用汽车环球科技运作有限责任公司 System and method for detecting clamped flexible material
CN111401114A (en) * 2019-02-12 2020-07-10 深圳市艾为智能有限公司 Transverse object detection device and method based on limited optical flow field
CN111401114B (en) * 2019-02-12 2023-09-19 深圳市艾为智能有限公司 Method for detecting crossing object based on limited optical flow field crossing object detection device

Also Published As

Publication number Publication date
KR101697520B1 (en) 2017-02-02

Similar Documents

Publication Publication Date Title
WO2017078213A1 (en) Method for detecting moving object in photographed image, and boarding and alighting accident prevention system using same
US9785842B2 (en) Safety alarm system and method for vehicle
US20090128632A1 (en) Camera and image processor
WO2018098915A1 (en) Control method of cleaning robot, and cleaning robot
WO2017065347A1 (en) Guard drone and mobile guard system using same
JP2009143722A (en) Person tracking apparatus, person tracking method and person tracking program
WO2014051337A1 (en) Apparatus and method for detecting event from plurality of photographed images
WO2017204406A1 (en) Device and method for detecting eye position of driver, and imaging device having rolling shutter driving type image sensor and lighting control method therefor
WO2021172833A1 (en) Object recognition device, object recognition method and computer-readable recording medium for performing same
KR101660254B1 (en) recognizing system of vehicle number for parking crossing gate
KR20190011495A (en) Regulation vehicle for enhancing regulating accuracy and efficiency
JP6046283B1 (en) Elevator door system
JP6295191B2 (en) Elevator car image monitoring device
KR101219407B1 (en) escalator monitoring system using image processing technology
JP2014084064A (en) Platform door monitoring device and platform door monitoring method
WO2018097384A1 (en) Crowdedness notification apparatus and method
JP2014055925A (en) Image processor, object detection method, and object detection program
KR102570973B1 (en) Unattended station monitoring system and operation method thereof
KR101266015B1 (en) Monitoring System for Safety Line and railroad track of platform
JP5484224B2 (en) Device for detecting string-like foreign matter on doors of railway vehicles
WO2019098729A1 (en) Vehicle monitoring method and device
JP2013052738A (en) Detector for rushing-into-train
WO2015174724A1 (en) Device and method for three-dimensionally correcting image
WO2014073779A1 (en) System for recognizing accident of delivery vehicle
KR102063957B1 (en) Method and system for prevent working accident of fork lift

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 15907876

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 15907876

Country of ref document: EP

Kind code of ref document: A1