CN113420714A - Collected image reporting method and device and electronic equipment - Google Patents


Info

Publication number
CN113420714A
CN113420714A (application CN202110785610.7A)
Authority
CN
China
Prior art keywords
image
target vehicle
boundary line
position boundary
crosses
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110785610.7A
Other languages
Chinese (zh)
Other versions
CN113420714B (en)
Inventor
臧守涛
潘武
吴忠人
Current Assignee
Zhejiang Dahua Technology Co Ltd
Original Assignee
Zhejiang Dahua Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Zhejiang Dahua Technology Co Ltd filed Critical Zhejiang Dahua Technology Co Ltd
Priority to CN202110785610.7A
Publication of CN113420714A
Application granted
Publication of CN113420714B
Status: Active

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/20: Analysis of motion
    • G06T7/246: Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T7/70: Determining position or orientation of objects or cameras
    • Y02T10/40: Engine management systems (Y02T: climate change mitigation technologies related to transportation; Y02T10/00: road transport of goods or passengers; Y02T10/10: internal combustion engine based vehicles)

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Traffic Control Systems (AREA)

Abstract

Provided are a collected image reporting method and device and electronic equipment. According to the method, when the target vehicle turns right, it is determined whether pedestrians and/or non-motor vehicles are present in the detection area, so that it can be judged whether the right-turning target vehicle yields to them. If pedestrians and/or non-motor vehicles are present in the detection area, the target vehicle is determined to have failed to yield, and the third image is acquired, so that the violation of a right-turning vehicle failing to yield to pedestrians and/or non-motor vehicles is accurately captured.

Description

Collected image reporting method and device and electronic equipment
Technical Field
The present application relates to the field of image processing technologies, and in particular, to a method and an apparatus for reporting a collected image, and an electronic device.
Background
When an automobile turns, the driver relies on the interior and exterior rear-view mirrors, whose field of view is limited, leaving a blind zone of a certain range.
The blind zone is related to the inner-wheel difference, i.e., the difference between the turning radius of the front inner wheel and that of the rear inner wheel when the vehicle turns. Viewed vertically from above while turning, the vehicle pivots around the inner rear wheel, and the areas swept by the front and rear wheels differ. The longer the vehicle body, the greater the turning width and the larger the wheel difference formed. Pedestrians and non-motor vehicles are small, and once they step into the driver's blind zone created by the inner-wheel difference, they are prone to danger.
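As a rough low-speed geometric illustration (not part of the claimed method), the inner-wheel difference can be computed from the front inner-wheel turning radius and the wheelbase; the numeric values below are illustrative assumptions.

```python
import math

# Rough rigid-vehicle, low-speed sketch of the inner-wheel difference:
# if the front inner wheel turns on radius r_front and the wheelbase is L,
# the rear inner wheel traces radius sqrt(r_front^2 - L^2), so the
# difference between the two arcs grows with vehicle length.
def inner_wheel_difference(r_front: float, wheelbase: float) -> float:
    r_rear = math.sqrt(r_front ** 2 - wheelbase ** 2)
    return r_front - r_rear

car = inner_wheel_difference(8.0, 2.7)  # typical passenger-car wheelbase
bus = inner_wheel_difference(8.0, 6.0)  # long vehicle, same front radius
```

For the same front-wheel arc, the long vehicle sweeps a much wider band, which is why longer vehicles create larger blind zones.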
From the perspective of road rights, motor vehicles, non-motor vehicles and pedestrians all have the right to pass through an intersection when the straight-ahead and right-turn arrow lights are on simultaneously. Traffic conflicts between right-turning motor vehicles and straight-going non-motor vehicles and pedestrians therefore inevitably occur, raising the question of who should yield. Accurately capturing images of right-turning vehicles is thus an urgent problem to be solved.
Disclosure of Invention
The application provides a collected image reporting method and device and electronic equipment, which are used for accurately capturing abnormal behaviors of a target vehicle. The specific scheme is as follows:
in a first aspect, the present application provides a method for reporting a collected image, where the method includes:
when the boundary of a target image frame in the acquired image crosses a first position boundary line, acquiring a first image containing a target vehicle, wherein the target image frame is obtained by framing the target vehicle in the image;
determining whether the target vehicle crosses a second position boundary line, wherein the second position boundary line is a parallel line having a preset distance from the first position boundary line;
when the boundary of the target image frame crosses the second position boundary line, acquiring a second image containing the target vehicle;
determining whether a center point of the target vehicle crosses a third position boundary line, wherein the third position boundary line includes a reference line for the target vehicle to convert a driving direction to a preset direction;
and if so, acquiring a third image containing the target vehicle, and reporting the first image, the second image and the third image.
By this method, as the target vehicle travels, the first, second and third images containing it are collected at successive positions along its path and then reported, so that violations committed by the target vehicle while traveling are accurately captured.
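The three-stage capture flow above can be sketched as follows. This is a minimal illustration, not the application's implementation: the bounding-box format (x1, y1, x2, y2), the convention that the vehicle advances toward smaller y and turns right toward larger x, and all names are assumptions of this example.

```python
from dataclasses import dataclass

@dataclass
class BoundaryLines:
    first_y: float   # front line: vehicle has fully entered the frame
    second_y: float  # stop line, parallel to the front line
    third_x: float   # right-turn reference line

def report_if_right_turn(track, lines, camera):
    """track yields (frame, bbox) pairs for one tracked target vehicle;
    returns the three captured images once the right turn is confirmed."""
    images = []
    for frame, (x1, y1, x2, y2) in track:
        if not images and y2 <= lines.first_y:           # bottom edge past front line
            images.append(camera.capture(frame))          # first image
        elif len(images) == 1 and y1 <= lines.second_y:   # top edge past stop line
            images.append(camera.capture(frame))          # second image
        elif len(images) == 2 and (x1 + x2) / 2 >= lines.third_x:
            images.append(camera.capture(frame))          # third image: center crossed right-turn line
            return images                                 # report all three
    return None
```

If the track ends without the center point crossing the right-turn line, nothing is reported, mirroring the cancellation branch described later.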
In one possible design, the capturing a third image containing the target vehicle and sending the first image, the second image, and the third image to a designated server includes:
determining whether a designated target object exists in a detection area, wherein the designated target object comprises pedestrians and/or non-motor vehicles;
and if so, acquiring a third image containing the target vehicle, and reporting the first image, the second image and the third image.
In this way, as the target vehicle travels, whether pedestrians and/or non-motor vehicles exist in the detection area is determined, so as to judge whether the target vehicle yields to them while traveling. If pedestrians and/or non-motor vehicles exist in the detection area, the target vehicle is determined to have failed to yield, and the third image is then acquired, ensuring that the violation of failing to yield to pedestrians and/or non-motor vehicles while traveling is accurately captured.
In one possible design, the capturing a first image including at least the target vehicle when a boundary of the target image frame in the captured image crosses a first position boundary line includes:
detecting whether the specified target object exists at the right front position of the target vehicle when the target vehicle in the image to be processed is determined to cross a first position boundary line;
if yes, judging whether the specified target object crosses the second position boundary line;
and if the specified target object does not cross the second position boundary line, acquiring a first image at least comprising a target vehicle, and binding the target vehicle and the specified target object.
By the method, the target vehicle and the specified target object can be bound under different scenes, so that the target vehicle and the specified target object can be accurately captured according to the binding relationship.
In one possible design, the capturing a second image containing the target vehicle when the boundary of the target image frame crosses the second position boundary line includes:
detecting whether the specified target object bound with the target vehicle crosses the second position boundary line when the target vehicle crosses the second position boundary line;
and if the specified target object crosses a second position boundary line, acquiring a second image containing the target vehicle.
In a second aspect, the present application further provides an apparatus for reporting collected images, where the apparatus includes:
the first acquisition module is used for acquiring a first image containing a target vehicle when the boundary of a target image frame in the acquired image crosses a first position boundary line, wherein the target image frame is obtained by framing the target vehicle in the image;
the device comprises a first acquisition module, a second acquisition module and a third acquisition module, wherein the first acquisition module is used for acquiring a first image containing a target vehicle when the target vehicle in an image to be processed is determined to cross a first position boundary line;
a first determination module for determining whether the target vehicle crosses a second position boundary line, wherein the second position boundary line and the first position boundary line are parallel lines having a preset distance;
the second acquisition module is used for acquiring a second image containing the target vehicle when the target vehicle crosses the second position boundary line;
a second determination module to determine whether a center point of the target vehicle crosses a third position boundary line;
and the acquisition reporting module is used for acquiring a third image containing the target vehicle and reporting the first image, the second image and the third image when the central point of the target vehicle is determined to cross a third position boundary line, wherein the third position boundary line comprises a reference line for converting the driving direction of the target vehicle to a preset direction.
In one possible design, the acquisition reporting module is specifically configured to determine whether there are pedestrians and/or non-motor vehicles in the detection area; if so, acquiring a third image containing the target vehicle, and reporting the first image, the second image and the third image to a designated server; and if not, cancelling the report of the collected image when the boundary of the target image frame crosses a third position boundary line.
In one possible design, the first acquiring module is specifically configured to detect whether a specified target object exists at a front right position of the target vehicle when a boundary of a target image frame in an acquired image crosses a first position boundary line, and if so, determine whether the specified target object crosses a second position boundary line; and if the specified target object does not cross the second position boundary line, acquiring a first image at least comprising a target vehicle, and binding the target vehicle and the specified target object, wherein the specified target object is a pedestrian and/or a non-motor vehicle.
In a possible design, the second collecting module is specifically configured to detect whether the designated target object bound to the target vehicle crosses the second position boundary line when the boundary of the target image frame crosses the second position boundary line; and if the specified target object crosses a second position boundary line, acquiring a second image containing the target vehicle.
In a third aspect, the present application further provides an electronic device, including:
a memory for storing a computer program;
and the processor is used for realizing the steps of the collected image reporting method when executing the computer program stored in the memory.
In a fourth aspect, the present application further provides a computer-readable storage medium, where a computer program is stored in the computer-readable storage medium, and when the computer program is executed by a processor, the steps of the collected image reporting method are implemented.
For the technical effects of the second to fourth aspects and their possible designs, refer to the description of the first aspect and its possible solutions above; they are not repeated here.
Drawings
FIG. 1 is a schematic illustration of various positioning location lines provided herein;
fig. 2 is a flowchart of a collected image reporting method under a condition provided by the present application;
fig. 3 is a flowchart of a collected image reporting method in another scenario provided by the present application;
fig. 4 is a schematic structural diagram of an apparatus for reporting a collected image according to the present application;
fig. 5 is a schematic structural diagram of an electronic device provided in the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application clearer, the present application is described in further detail below with reference to the accompanying drawings. The methods of operation in the method embodiments may also be applied to the apparatus or system embodiments. In the description of the present application, "a plurality" is understood as "at least two". "And/or" describes an association relationship between associated objects and covers three cases; for example, A and/or B may mean: A exists alone, A and B exist simultaneously, or B exists alone. "A is connected with B" may mean: A and B are directly connected, or A and B are connected through C. In addition, the terms "first", "second", and the like are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or order.
The embodiments of the present application will be described in detail below with reference to the accompanying drawings.
The collected image reporting method provided by the embodiments of the present application is used to accurately capture and report images of motor vehicles that fail to yield to pedestrians or non-motor vehicles. The method and the device in the embodiments of the present application are based on the same technical concept; since they solve the problem on similar principles, the device and method embodiments may refer to each other, and repeated parts are not described again.
First, to ensure the accuracy of the reported images, the motion states and trajectories of motor vehicles, pedestrians and non-motor vehicles must be accurately analyzed, which requires accurately detecting and locating the positions of specified target objects such as motor vehicles, non-motor vehicles and pedestrians.
To achieve this positioning, before capturing images, key position information is determined in the image: a first position boundary line, a second position boundary line, a third position boundary line and a fourth position boundary line are set in the image collected by the video device. The first position boundary line is a front line used to judge whether the target vehicle has entered the image acquisition area. The second position boundary line is a stop line marking the vehicle stop position; the first and second position boundary lines in the collected image are parallel lines separated by a preset distance. The third position boundary line is a reference line for the target vehicle to convert its driving direction to a preset direction; the preset direction may be the vehicle's right, i.e., the third position boundary line is a right-turn boundary line located to the right of the front line and the stop line, through which a right turn can be judged. The fourth position boundary line is a rear line at a set vertical distance from the stop line, used to determine whether the target vehicle is driving straight. The image also contains the intermediate line and the lane lines.
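For illustration only, the four positioning lines and the detection area might be held in a configuration like the following; the pixel values, field names, and the convention that vehicles advance toward smaller y are assumptions of this sketch, not values from the application.

```python
# Hypothetical zone configuration for one camera view. Horizontal lines are
# stored as y coordinates, the right-turn boundary as an x coordinate, and
# the detection area as a rectangle of corner points in front of the stop line.
ZONE_CONFIG = {
    "first_line_y": 420,   # front line: entry into the image acquisition area
    "second_line_y": 300,  # stop line, parallel to the front line
    "third_line_x": 900,   # right-turn boundary, right of front line and stop line
    "fourth_line_y": 180,  # rear line, a set vertical distance past the stop line
    "detection_area": [(600, 260), (1100, 260), (1100, 340), (600, 340)],
}
```

With vehicles moving toward smaller y, the rear line lies beyond the stop line, which in turn lies beyond the front line.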
Referring to fig. 1, with these boundary lines set, the trajectories and states of vehicles can be accurately analyzed. For example, when the top boundary of the target image frame corresponding to a vehicle crosses the first position boundary line, the vehicle is entering the image acquisition scene; when the bottom boundary crosses the first position boundary line, the vehicle has completely entered the scene. Pedestrians and non-motor vehicles can be located by the boundary lines in the same way.
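The crossing semantics just described can be expressed as a small classifier over the target image frame. This is an illustrative helper, not the application's code; it assumes image coordinates with y growing downward, a vehicle advancing toward smaller y, and a bbox of (x1, y1, x2, y2).

```python
def entry_state(bbox, first_line_y):
    """Classify how far a vehicle's target image frame has crossed the front line."""
    x1, y1, x2, y2 = bbox
    if y2 <= first_line_y:
        return "fully_entered"  # bottom boundary has also crossed the front line
    if y1 <= first_line_y:
        return "entering"       # only the top boundary has crossed so far
    return "outside"            # the vehicle has not reached the front line
```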
In the embodiment of the application, different modes can be adopted to snapshot the target vehicle aiming at different application scenes, and the specific scheme is as follows:
the first embodiment is as follows:
based on the above description, in order to accurately snapshot the behavior of the motor vehicle that does not give the best to pedestrians or non-motor vehicles, the present application provides an image collecting and reporting method, and as shown in fig. 2, the implementation flow of the method is as follows:
s21, when it is determined that the target vehicle in the image to be processed crosses the first position boundary line, acquiring a first image containing the target vehicle;
before image acquisition and reporting, the state of the signal lamp under the current road condition needs to be determined through image analysis or other methods, and if the current signal lamp does not allow vehicles to pass, step S26 is performed.
If the current signal light allows vehicles to pass, image acquisition and reporting continue. During image acquisition, the corresponding target is determined in the acquired image to be processed, i.e., the target vehicle is framed in the image. After framing is completed, an image of the target vehicle may be obtained, or a target image frame surrounding the target vehicle may be obtained. The following embodiments are described in terms of the target image frame, though the method is not limited to using only the target image frame.
After a target image frame of a target vehicle is obtained, whether the bottom boundary of the target image frame crosses a first position boundary line or not is determined in real time, namely whether the target vehicle completely enters a picture or not is determined, and if yes, a first image containing the target vehicle is collected; if not, continuing monitoring until the target vehicle completely enters the picture.
S22, determining whether the target vehicle crosses a second position boundary line;
after the acquisition of the first image is completed, whether the top boundary of the target image frame corresponding to the target vehicle crosses the second position boundary line, that is, whether the top boundary crosses the stop line, is continuously monitored, and if the top boundary of the target image frame crosses the stop line, the step S23 is executed. And if the top boundary of the target image frame does not cross the stop line, continuously judging whether the boundary of the target frame crosses the second position boundary line.
S23, acquiring a second image containing the target vehicle;
s24, determining whether the center point of the target vehicle crosses a third position boundary line;
after the acquisition of the second image is completed, the target vehicle in the target image frame is continuously tracked, and the central point of the target image frame is determined. It is detected whether the center point of the target image frame crosses the third position boundary line. The center point of the target vehicle may be used to determine whether the target vehicle crosses the third position boundary line, in addition to the center point of the target image frame, and a specific implementation manner is not limited in the present application.
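The center-point test in step S24 reduces to comparing the frame center against the right-turn reference line. A hypothetical helper, with the bbox format (x1, y1, x2, y2) and the "right turn means larger x" convention assumed by this sketch:

```python
def crossed_right_turn_line(bbox, third_line_x):
    """True when the target image frame's center point lies at or past the
    right-turn reference line (the third position boundary line)."""
    x1, _, x2, _ = bbox
    center_x = (x1 + x2) / 2
    return center_x >= third_line_x
```

As the text notes, the vehicle's own detected center could be substituted for the frame center without changing the decision structure.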
If the center point of the target frame crosses the third position boundary line, step S25 is executed, and if the center point of the target frame does not cross the third position boundary line, step S26 is executed.
S25, collecting a third image containing the target vehicle, and reporting the first image, the second image and the third image;
when the fact that the center point of the target image frame corresponding to the target vehicle has crossed the third position boundary line is detected, the target vehicle is indicated to run in a right turn, and a third image containing the target vehicle is collected. And then reporting the first image, the second image and the third image to a designated server or sending the background of the traffic monitoring system. The target vehicle information can be identified through the three images, and the danger degree of the current scene can also be identified.
In this way, the images collected at each position can be reported when the target vehicle turns right, so that the target vehicle is accurately captured and evidence is obtained.
Further, in the embodiment of the present application, in order to improve the accuracy of the reported image, when it is detected that the center point of the target image frame crosses the third position boundary line, it is determined whether a specified target object exists in the detection area, where the specified target object may be a pedestrian and/or a non-motor vehicle, and the detection area is set as shown in fig. 1, where in fig. 1, the detection area is set in front of the stop line.
If the pedestrian and/or the non-motor vehicle are/is detected in the detection area, a third image containing the target vehicle is acquired, and finally the first image, the second image and the third image are reported to a specified server.
If no pedestrian and/or non-motor vehicle is detected in the detection area, step S26 is executed.
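The decision in this refinement (report only when the vehicle is turning right AND a specified target object occupies the detection area) can be sketched as below. The detection-list format, the axis-aligned rectangular area, and the labels are assumptions of this example.

```python
def should_report(center_crossed, detections, area):
    """Decide whether to capture the third image and report.
    detections: list of (label, (x, y)) object center points.
    area: ((ax1, ay1), (ax2, ay2)) rectangle in front of the stop line."""
    if not center_crossed:
        return False  # vehicle is not turning right; nothing to report yet
    (ax1, ay1), (ax2, ay2) = area
    for label, (x, y) in detections:
        in_area = ax1 <= x <= ax2 and ay1 <= y <= ay2
        if label in ("pedestrian", "non_motor") and in_area:
            return True  # right turn with a vulnerable road user present
    return False
```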
S26, detecting whether the target vehicle crosses a fourth boundary line;
It is detected whether the boundary of the target image frame corresponding to the target vehicle crosses the fourth position boundary line. If so, the target vehicle is determined to be driving straight, and step S27 is executed; if not, the process returns to step S24.
And S27, canceling image acquisition and reporting.
By this method, when the target vehicle turns right, whether pedestrians and/or non-motor vehicles exist in the detection area is determined, so as to judge whether the right-turning target vehicle yields to them. If pedestrians and/or non-motor vehicles exist in the detection area, the target vehicle is determined to have failed to yield, and the third image is finally acquired, ensuring that the violation of a right-turning vehicle failing to yield to pedestrians and/or non-motor vehicles is accurately captured.
Example two:
the application also provides a collected image reporting method, and as shown in fig. 3, the implementation flow of the method is as follows:
s31, when the target vehicle in the image to be processed is determined to cross the first position boundary line, detecting whether a specified target object exists at the right front position of the target vehicle;
before image acquisition and reporting, the state of the signal lamp under the current road condition needs to be determined through image analysis or other methods, and if the current signal lamp does not allow vehicles to pass, step S37 is executed.
If the current signal light allows vehicles to pass, image acquisition and reporting continue. During image acquisition, the corresponding target is determined in the acquired image to be processed, i.e., the target vehicle is framed in the image. After framing is completed, an image of the target vehicle may be obtained, or a target image frame surrounding the target vehicle may be obtained. The following embodiments are described in terms of the target image frame, though the method is not limited to using only the target image frame.
After the target image frame of the target vehicle is obtained, whether the bottom boundary of the target image frame crosses the first position boundary line or not is determined in real time, namely whether the target vehicle completely enters the picture or not is determined, and if yes, a first image containing the target vehicle is collected. If not, continuing monitoring until the target vehicle completely enters the picture.
After it is determined that the bottom boundary of the target vehicle has crossed the first position boundary line, whether a specified target object exists in the right-front area of the target vehicle is detected. The specified target object here is a pedestrian and/or a non-motor vehicle. The right-front area of the target vehicle is defined as follows: with the center of the target vehicle as the coordinate origin, the center point of the right-front area has coordinates (x, y), where both x and y are positive.
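The right-front test above is a simple sign check in vehicle-centered coordinates. An illustrative helper, assuming x points to the vehicle's right and y points forward (the axis orientation is not specified in the text):

```python
def in_right_front(vehicle_center, obj_center):
    """True when obj_center lies in the vehicle's right-front area, i.e.
    both coordinates are positive after shifting the origin to the
    vehicle's center, as in the description above."""
    vx, vy = vehicle_center
    ox, oy = obj_center
    x, y = ox - vx, oy - vy
    return x > 0 and y > 0
```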
If the specified target object exists, step S32 is executed; if there is no specified target object, step S37 is executed.
S32, determining whether the specified target object crosses the second position boundary line;
if a designated target object is present on the front right of the target vehicle, it is determined whether or not the designated target object has crossed the stop line, and if so, step S39 is executed.
If the designated target object does not cross the stop line, step S33 is executed.
S33, collecting a first image at least containing a target vehicle, and binding the target vehicle with the specified target object;
under the condition that the specified target object does not cross the stop line, a first image containing the target vehicle is collected, and the relation between the target vehicle and the pedestrian and/or the non-motor vehicle is bound. In the subsequent image capturing, the target vehicles and pedestrians and/or non-motor vehicles with binding relations are pointed out.
S34, determining whether the target vehicle crosses a second position boundary line;
after the acquisition of the first image is completed, whether the top boundary of the target image frame corresponding to the target vehicle crosses the second position boundary line, that is, whether the top boundary crosses the stop line, is continuously monitored, and if the top boundary of the target image frame crosses the stop line and the bound specified target object also crosses the stop line, step S35 is executed. And if the top boundary of the target image frame does not cross the stop line, continuously judging whether the boundary of the target frame crosses the second position boundary line.
S35, acquiring a second image containing the target vehicle;
s36, determining whether the center point of the target vehicle crosses a third position boundary line;
after the acquisition of the second image is completed, the target vehicle in the target image frame is continuously tracked, and the central point of the target image frame is determined. And detecting whether the center point of the target image frame corresponding to the target vehicle crosses a third position boundary line.
If the center point of the target frame crosses the third position boundary line, step S37 is executed, and if the center point of the target frame does not cross the third position boundary line, step S38 is executed.
S37, collecting a third image containing the target vehicle, and reporting the first image, the second image and the third image to a designated server;
s38, detecting whether the target vehicle crosses a fourth boundary line;
It is detected whether the boundary of the target image frame corresponding to the target vehicle crosses the fourth position boundary line. If so, the target vehicle is determined to be driving straight, and step S39 is executed; if not, the process returns to step S36.
And S39, canceling the report of the collected image.
When it is detected that the center point of the target image frame corresponding to the target vehicle has crossed the third position boundary line, the target vehicle is turning right, and a third image containing it is acquired. The first, second and third images are then reported to the designated server.
In this way, the images collected at each position can be reported when the target vehicle turns right, so that the target vehicle is accurately captured and evidence is obtained.
Further, in the embodiment of the present application, in order to improve the accuracy of the reported image, when it is detected that the center point of the target image frame crosses the third position boundary line, it is determined whether there is a pedestrian and/or a non-motor vehicle in a detection area, the setting of the detection area is shown in fig. 1, and in fig. 1, the detection area is set in front of the stop line.
If the pedestrian and/or the non-motor vehicle are/is detected in the detection area, a third image containing the target vehicle is acquired, and finally the first image, the second image and the third image are reported to a specified server.
By this method, when the target vehicle turns right, whether pedestrians and/or non-motor vehicles exist in the detection area is determined, so as to judge whether the right-turning target vehicle yields to them. If pedestrians and/or non-motor vehicles exist in the detection area, the target vehicle is determined to have failed to yield, and the third image is finally acquired, ensuring that the violation of a right-turning vehicle failing to yield to pedestrians and/or non-motor vehicles is accurately captured.
Based on the same inventive concept, an embodiment of the present application further provides a collected image reporting device, where the device is configured to accurately capture the illegal behavior of right-turning vehicles failing to yield to pedestrians and/or non-motor vehicles. As shown in fig. 4, the device includes:
the first acquisition module 401 is configured to acquire a first image including a target vehicle when it is determined that the target vehicle crosses a first position boundary line in an image to be processed;
a first determination module 402 for determining whether the target vehicle crosses a second position boundary line, wherein the second position boundary line has a preset vertical distance from the first position boundary line;
a second collecting module 403, configured to collect a second image containing the target vehicle when the target vehicle crosses the second position boundary line;
a second determination module 404 for determining whether the center point of the target vehicle crosses a third position boundary line;
a collecting and reporting module 405, configured to collect a third image including the target vehicle when it is determined that the center point of the target vehicle crosses a third position boundary line, and report the first image, the second image, and the third image, where the third position boundary line includes a reference line for the target vehicle to convert the driving direction into a preset direction.
In one possible design, the acquisition reporting module 405 is specifically configured to determine whether a specified target object exists in a preset detection area; if so, acquire a third image containing the target vehicle and report the first image, the second image, and the third image; and if not, cancel the collected image reporting when the target vehicle crosses a fourth position boundary line, where the specified target object includes a pedestrian and/or a non-motor vehicle.
In one possible design, the first acquisition module 401 is specifically configured to, when it is determined that the target vehicle in the image to be processed crosses the first position boundary line, detect whether the specified target object exists at the front right position of the target vehicle; if so, determine whether the specified target object crosses the second position boundary line; and if the specified target object does not cross the second position boundary line, acquire a first image at least containing the target vehicle and bind the target vehicle to the specified target object.
In one possible design, the second collecting module 403 is specifically configured to detect, when the target vehicle crosses the second position boundary line, whether the specified target object bound to the target vehicle crosses the second position boundary line; and if the specified target object crosses the second position boundary line, acquire a second image containing the target vehicle.
With this device, when the target vehicle makes a right turn, it is determined whether a pedestrian and/or non-motor vehicle is present in the detection area, so that it can be judged whether the right-turning target vehicle yields to the pedestrian and/or non-motor vehicle. When a pedestrian and/or non-motor vehicle is present in the detection area, the target vehicle is determined not to have yielded, and the third image is acquired, thereby ensuring that the illegal behavior of a right-turning vehicle failing to yield to pedestrians and/or non-motor vehicles is accurately captured.
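The module pipeline of fig. 4 can be summarized as a small state machine: one image is captured at each of the three boundary-line events, and all three are reported together once the center point crosses the third (turn reference) line. The Python sketch below is a hypothetical illustration of that flow; the class name, the boolean per-frame flags, and `report_fn` are assumptions, not the patent's API.

```python
class CapturedImageReporter:
    # Illustrative state machine for the module pipeline: first acquisition (401),
    # first determination (402), second acquisition (403), second determination (404),
    # and acquisition/reporting (405).
    def __init__(self, report_fn):
        self.report_fn = report_fn  # e.g. an upload to the designated server
        self.images = []
        self.stage = 0  # next event to watch: 0 -> first line, 1 -> second, 2 -> third

    def on_frame(self, frame, crossed_first, crossed_second, center_crossed_third):
        if self.stage == 0 and crossed_first:
            self.images.append(frame)        # first image
            self.stage = 1
        elif self.stage == 1 and crossed_second:
            self.images.append(frame)        # second image
            self.stage = 2
        elif self.stage == 2 and center_crossed_third:
            self.images.append(frame)        # third image
            self.report_fn(self.images)      # report all three images together
            self.stage = 3                   # done for this target vehicle
```

For brevity, the detection-area branch of module 405 (cancelling the report when no pedestrian or non-motor vehicle is present) is omitted from this sketch.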
Based on the same inventive concept, an embodiment of the present application further provides an electronic device, where the electronic device can implement the functions of the collected image reporting device discussed above. Referring to fig. 5, the electronic device includes:
at least one processor 501 and a memory 502 connected to the at least one processor 501. In this embodiment, the specific connection medium between the processor 501 and the memory 502 is not limited; fig. 5 illustrates an example in which the processor 501 and the memory 502 are connected through a bus 500. The bus 500 is shown as a thick line in fig. 5; the connection manner between other components is merely illustrative and not limiting. The bus 500 may be divided into an address bus, a data bus, a control bus, and so on, and is shown with only one thick line in fig. 5 for ease of illustration, but this does not mean there is only one bus or one type of bus. Alternatively, the processor 501 may also be referred to as a controller; the name is not limiting.
In the embodiment of the present application, the memory 502 stores instructions executable by the at least one processor 501, and by executing the instructions stored in the memory 502, the at least one processor 501 can perform the collected image reporting method discussed above. The processor 501 may implement the functions of the various modules in the apparatus shown in fig. 4.
The processor 501 is the control center of the device; it connects the various parts of the entire collected image reporting device through various interfaces and lines, and performs the various functions of the device and processes data by running or executing the instructions stored in the memory 502 and calling the data stored in the memory 502, thereby monitoring the device as a whole.
In one possible design, the processor 501 may include one or more processing units. The processor 501 may integrate an application processor, which mainly handles the operating system, user interface, application programs, and the like, and a modem processor, which mainly handles wireless communication. It will be appreciated that the modem processor may also not be integrated into the processor 501. In some embodiments, the processor 501 and the memory 502 may be implemented on the same chip; in other embodiments, they may be implemented separately on their own chips.
The processor 501 may be a general-purpose processor, such as a Central Processing Unit (CPU), digital signal processor, application specific integrated circuit, field programmable gate array or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof, that may implement or perform the methods, steps, and logic blocks disclosed in embodiments of the present application. A general purpose processor may be a microprocessor or any conventional processor or the like. The steps of the collected image reporting method disclosed by the embodiment of the application can be directly implemented by a hardware processor, or implemented by combining hardware and software modules in the processor.
The memory 502, as a non-volatile computer-readable storage medium, may be used to store non-volatile software programs, non-volatile computer-executable programs, and modules. The memory 502 may include at least one type of storage medium, for example, a flash memory, a hard disk, a multimedia card, a card-type memory, a Random Access Memory (RAM), a Static Random Access Memory (SRAM), a Programmable Read-Only Memory (PROM), a Read-Only Memory (ROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), a magnetic memory, a magnetic disk, an optical disk, and so on. The memory 502 may also be any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer, but is not limited thereto. The memory 502 in the embodiments of the present application may also be a circuit or any other device capable of performing a storage function, for storing program instructions and/or data.
By programming the processor 501, the code corresponding to the collected image reporting method described in the foregoing embodiments may be solidified in the chip, so that the chip can execute the steps of the collected image reporting method of the embodiments shown in fig. 2 and fig. 3 when it runs. How to program the processor 501 is well known to those skilled in the art and is not described in detail here.
Based on the same inventive concept, an embodiment of the present application further provides a storage medium, where the storage medium stores computer instructions, and when the computer instructions are run on a computer, the computer executes the method for reporting the captured image discussed above.
In some possible embodiments, the aspects of the collected image reporting method provided in this application may also be implemented in the form of a program product, which includes program code for causing the control device to perform the steps of the collected image reporting method according to various exemplary embodiments of this application described above in this specification when the program product is run on an apparatus.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present application without departing from the spirit and scope of the application. Thus, if such modifications and variations of the present application fall within the scope of the claims of the present application and their equivalents, the present application is intended to include such modifications and variations as well.

Claims (10)

1. A method for reporting a captured image, the method comprising:
when it is determined that a target vehicle in an image to be processed crosses a first position boundary line, acquiring a first image containing the target vehicle;
determining whether the target vehicle crosses a second position boundary line; wherein the second position boundary line and the first position boundary line are parallel lines having a preset distance;
acquiring a second image containing the target vehicle when the target vehicle crosses the second position boundary line;
determining whether a center point of the target vehicle crosses a third position boundary line, wherein the third position boundary line includes a reference line for the target vehicle to convert a driving direction to a preset direction;
and if so, acquiring a third image containing the target vehicle, and reporting the first image, the second image and the third image.
2. The method of claim 1, wherein said capturing a third image containing the target vehicle and reporting the first image, the second image, and the third image comprises:
determining whether a specified target object exists in a preset detection area, wherein the specified target object comprises pedestrians and/or non-motor vehicles;
and if so, acquiring a third image containing the target vehicle, and reporting the first image, the second image and the third image.
3. The method of claim 1, wherein capturing a first image containing a target vehicle in the pending image upon determining that the target vehicle crosses a first position boundary line comprises:
detecting whether the specified target object exists at the right front position of the target vehicle when the target vehicle in the image to be processed is determined to cross a first position boundary line;
if yes, judging whether the specified target object crosses the second position boundary line;
and if the specified target object does not cross the second position boundary line, acquiring a first image at least comprising a target vehicle, and binding the target vehicle and the specified target object.
4. The method of claim 3, wherein said capturing a second image containing the target vehicle as the target vehicle crosses the second position boundary line comprises:
detecting whether the specified target object bound with the target vehicle crosses the second position boundary line when the target vehicle crosses the second position boundary line;
and if the specified target object crosses a second position boundary line, acquiring a second image containing the target vehicle.
5. The collected image reporting device is characterized by comprising:
the device comprises a first acquisition module, a second acquisition module and a third acquisition module, wherein the first acquisition module is used for acquiring a first image containing a target vehicle when the target vehicle in an image to be processed is determined to cross a first position boundary line;
a first determination module for determining whether the target vehicle crosses a second position boundary line, wherein the second position boundary line and the first position boundary line are parallel lines having a preset distance;
the second acquisition module is used for acquiring a second image containing the target vehicle when the target vehicle crosses the second position boundary line;
a second determination module to determine whether a center point of the target vehicle crosses a third position boundary line;
and the acquisition reporting module is used for acquiring a third image containing the target vehicle and reporting the first image, the second image and the third image when the central point of the target vehicle is determined to cross a third position boundary line, wherein the third position boundary line comprises a reference line for converting the driving direction of the target vehicle to a preset direction.
6. The apparatus according to claim 5, wherein the acquisition reporting module is specifically configured to determine whether a specified target object exists in a preset detection area; and if so, acquiring a third image containing the target vehicle, and reporting the first image, the second image and the third image, wherein the specified target object comprises a pedestrian and/or a non-motor vehicle.
7. The apparatus according to claim 5, wherein the first capturing module is specifically configured to, when it is determined that a target vehicle in the image to be processed crosses a first position boundary line, detect whether the designated target object exists at a front right position of the target vehicle, and if so, determine whether the designated target object crosses the second position boundary line; and if the specified target object does not cross the second position boundary line, acquiring a first image at least comprising a target vehicle, and binding the target vehicle and the specified target object.
8. The apparatus of claim 7, wherein the second acquisition module is specifically configured to detect whether the designated target object bound to the target vehicle crosses the second location boundary line when the target vehicle crosses the second location boundary line; and if the specified target object crosses a second position boundary line, acquiring a second image containing the target vehicle.
9. An electronic device, comprising:
a memory for storing a computer program;
a processor for implementing the method steps of any one of claims 1-4 when executing the computer program stored on the memory.
10. A computer-readable storage medium, characterized in that a computer program is stored in the computer-readable storage medium, which computer program, when being executed by a processor, carries out the method steps of any one of claims 1-4.
CN202110785610.7A 2021-07-12 2021-07-12 Method and device for reporting acquired image and electronic equipment Active CN113420714B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110785610.7A CN113420714B (en) 2021-07-12 2021-07-12 Method and device for reporting acquired image and electronic equipment

Publications (2)

Publication Number Publication Date
CN113420714A true CN113420714A (en) 2021-09-21
CN113420714B CN113420714B (en) 2023-08-22

Family

ID=77721652

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110785610.7A Active CN113420714B (en) 2021-07-12 2021-07-12 Method and device for reporting acquired image and electronic equipment

Country Status (1)

Country Link
CN (1) CN113420714B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114241748A (en) * 2021-11-22 2022-03-25 浙江大华技术股份有限公司 Method and device for identifying whether right-turning vehicle gives way to straight-going vehicle
CN114792408A (en) * 2022-06-21 2022-07-26 浙江大华技术股份有限公司 Motor vehicle snapshot method, motor vehicle snapshot device and computer storage medium

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104574995A (en) * 2015-01-09 2015-04-29 北京尚易德科技有限公司 System and method for recording behaviors that locomotives turning right at crossing go through red light and do not avoid pedestrians
GB201704231D0 (en) * 2016-03-21 2017-05-03 Ford Global Tech Llc Pedestrian detection and motion prediction with rear-facing camera
CN107622669A (en) * 2017-10-25 2018-01-23 哈尔滨工业大学 A kind of method for identifying right turning vehicle and whether giving precedence to pedestrian
CN110490108A (en) * 2019-08-08 2019-11-22 浙江大华技术股份有限公司 A kind of labeling method, device, storage medium and the electronic device of state violating the regulations
CN110689724A (en) * 2018-12-31 2020-01-14 上海眼控科技股份有限公司 Motor vehicle zebra crossing courtesy pedestrian automatic auditing method based on deep learning
CN111968378A (en) * 2020-07-07 2020-11-20 浙江大华技术股份有限公司 Motor vehicle red light running snapshot method and device, computer equipment and storage medium
CN112258848A (en) * 2020-12-16 2021-01-22 华录易云科技有限公司 Motor vehicle right-turning pedestrian-unfriendly snapshot and pedestrian crossing warning system


Also Published As

Publication number Publication date
CN113420714B (en) 2023-08-22

Similar Documents

Publication Publication Date Title
US10685246B2 (en) Systems and methods for curb detection and pedestrian hazard assessment
CN112069643B (en) Automatic driving simulation scene generation method and device
JP5399027B2 (en) A device having a system capable of capturing a stereoscopic image to assist driving of an automobile
CN107179767B (en) Driving control device, driving control method, and non-transitory recording medium
Fossati et al. Real-time vehicle tracking for driving assistance
CN105825185A (en) Early warning method and device against collision of vehicles
CN110738150B (en) Camera linkage snapshot method and device and computer storage medium
CN102997900A (en) Vehicle systems, devices, and methods for recognizing external worlds
JP2002083297A (en) Object recognition method and object recognition device
JP2024023319A (en) Detection of emergency vehicle
CN109427191B (en) Driving detection method and device
CN111968378A (en) Motor vehicle red light running snapshot method and device, computer equipment and storage medium
CN113420714A (en) Collected image reporting method and device and electronic equipment
CN110942038A (en) Traffic scene recognition method, device, medium and electronic equipment based on vision
CN106023594A (en) Parking stall shielding determination method and device and vehicle management system
CN111583660B (en) Vehicle steering behavior detection method, device, equipment and storage medium
CN114202936A (en) Traffic command robot and control method thereof
CN113468911A (en) Vehicle-mounted red light running detection method and device, electronic equipment and storage medium
CN115123291B (en) Behavior prediction method and device based on obstacle recognition
CN110539748A (en) congestion car following system and terminal based on look around
CN115762153A (en) Method and device for detecting backing up
CN111460852A (en) Vehicle-mounted 3D target detection method, system and device
CN115359443A (en) Traffic accident detection method and device, electronic device and storage medium
CN114078318B (en) Vehicle violation detection system
CN114494938A (en) Non-motor vehicle behavior identification method and related device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant