CN116052106A - Method for detecting falling object and electronic equipment - Google Patents


Info

Publication number
CN116052106A
Authority
CN
China
Prior art keywords: information, acquiring, center point, contour, region
Prior art date
Legal status: Pending (an assumption, not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Application number
CN202310095644.2A
Other languages
Chinese (zh)
Inventor
贾玉鑫
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Yikong Zhijia Technology Co Ltd
Original Assignee
Beijing Yikong Zhijia Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Yikong Zhijia Technology Co Ltd filed Critical Beijing Yikong Zhijia Technology Co Ltd
Priority to CN202310095644.2A
Publication of CN116052106A
Legal status: Pending

Classifications

    • G06V20/56: Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06T7/66: Analysis of geometric attributes of image moments or centre of gravity
    • G06V10/25: Determination of region of interest [ROI] or a volume of interest [VOI]
    • G06V10/30: Noise filtering
    • G06V10/44: Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; connectivity analysis, e.g. of connected components
    • G06T2207/30252: Vehicle exterior; vicinity of vehicle

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Geometry (AREA)
  • Traffic Control Systems (AREA)

Abstract

The application provides a method and electronic device for detecting a falling object, in the technical field of unmanned driving/autonomous driving/unmanned vehicles. According to the method, image frame information of a vehicle's surroundings is acquired in real time; region-of-interest information is selected from the image frames; the contour of an object within the region of interest is obtained; the center point of the contour is obtained; the slope of the object's displacement trajectory is calculated at least from the contour's center point in the current frame and its center point in a previous frame; and when the slope meets a first preset condition, the object is reported as a falling object. With this technical scheme, falling objects can be detected accurately, the miss rate is reduced, and the damage objects cause to vehicles is reduced.

Description

Method for detecting falling object and electronic equipment
Technical Field
The application relates to the technical field of unmanned driving/autonomous driving/unmanned vehicles, and in particular to a method and electronic device for detecting a falling object.
Background
When an unmanned truck transports rock or ore at high speed, a bumpy road or a turn with a small radius may cause part of the load to fall out of the cargo bed. Hard, large stones pose a serious threat to mine trucks travelling at speed: at best they damage the tyres and shorten tyre life; at worst they cause the vehicle to lose control, leading to serious accidents.
At present, a mine truck can sense a dropped object ahead using lidar and similar equipment and take avoidance actions such as decelerating or detouring according to the object's size; nevertheless, operators still need to clear the object promptly, or overall operating efficiency suffers.
However, dropped objects are missed in many cases. There is therefore a need for a target-obstacle determination method that can detect dropped objects accurately, reduce the miss rate, and reduce the damage such objects cause to vehicles.
Disclosure of Invention
The application provides a method and electronic device for detecting a falling object, which can detect falling objects accurately, reduce the miss rate, and reduce the damage such objects cause to vehicles.
In a first aspect, the present application provides a method of detecting a falling object, the method comprising:
acquiring image frame information of the surrounding environment of the vehicle in real time;
selecting region of interest information according to the image frame information;
acquiring the outline of an object in the region of interest information;
acquiring a center point of the contour, and calculating the slope of the displacement track of the object at least based on the center point of the contour in the current frame and the center point of the contour in the previous frame;
and when the slope meets a first preset condition, prompting that the object is a falling object.
In one example, the step of acquiring the contour of the object in the region of interest information includes:
acquiring foreground image information in the region of interest information;
carrying out noise reduction and screening treatment on the foreground image information;
and acquiring the outline of the object based on the processed foreground image information.
In one example, after the acquiring the profile of the object, further includes:
deleting any contour that meets a second preset condition; wherein the second preset condition is the contour being greater than a first threshold and/or the contour being smaller than a second threshold.
In one example, the step of obtaining the center point of the contour includes: and acquiring the center point of the circumscribed rectangle of the outline.
In one example, the step of prompting the object to be a dropped object includes sending a prompting message to a remote management platform or an adjacent vehicle.
In one example, the vehicle is an autonomous vehicle and the first preset condition is that an absolute value of the slope is less than 0.5.
In a second aspect, the present application provides an apparatus for detecting a falling object, the apparatus comprising:
the first acquisition unit is used for acquiring image frame information of the surrounding environment of the vehicle in real time;
the selecting unit is used for selecting the region of interest information according to the image frame information;
a second acquiring unit configured to acquire a contour of an object in the region-of-interest information;
a third acquisition unit configured to acquire a center point of the contour, and calculate a slope of a displacement trajectory of the object based on at least the center point of the contour in the current frame and the center point of the contour in the previous frame;
and the prompting unit is used for prompting that the object is a falling object when the slope meets a first preset condition.
In one example, the second acquisition unit includes:
the first acquisition module is used for acquiring foreground image information in the region-of-interest information;
the processing module is used for carrying out noise reduction and screening processing on the foreground image information;
and the second acquisition module is used for acquiring the outline of the object based on the processed foreground image information.
In one example, the apparatus includes:
a deleting unit, configured to delete any contour that meets a second preset condition; wherein the second preset condition is the contour being greater than a first threshold and/or the contour being smaller than a second threshold.
In one example, the third obtaining unit is specifically configured to: and acquiring the center point of the circumscribed rectangle of the outline.
In one example, the prompt unit is specifically configured to send a prompt message to a remote management platform or a neighboring vehicle.
In one example, the vehicle is an autonomous vehicle and the first preset condition is that an absolute value of the slope is less than 0.5.
In a third aspect, the present application provides an electronic device, comprising: a processor, and a memory communicatively coupled to the processor;
the memory stores computer-executable instructions;
the processor executes computer-executable instructions stored in the memory to implement the method as described in the first aspect.
In a fourth aspect, the present application provides a computer-readable storage medium having stored therein computer-executable instructions for performing the method according to the first aspect when executed by a processor.
In a fifth aspect, the present application provides a computer program product comprising a computer program which, when executed by a processor, implements the method according to the first aspect.
According to the method and electronic device for detecting a falling object provided by the application, image frame information of the vehicle's surroundings is acquired in real time; region-of-interest information is then selected from the image frames; the contour of an object is determined within the region of interest; the contour's center point is obtained; the slope of the object's displacement trajectory is determined from the movement of the center point; and when the slope meets the first preset condition, the object is reported as a falling object. With this technical scheme, falling objects can be detected accurately, the miss rate is reduced, and the damage objects cause to vehicles is reduced.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the application and together with the description, serve to explain the principles of the application.
Fig. 1 is a schematic flow chart of a method for detecting a dropped object according to a first embodiment of the present application;
fig. 2 is a flowchart of a method for detecting a falling object according to a second embodiment of the present application;
fig. 3 is a schematic structural view of a device for detecting a dropped object according to a third embodiment of the present application;
fig. 4 is a schematic structural view of a device for detecting a dropped object according to a fourth embodiment of the present application;
fig. 5 is a block diagram of an electronic device, according to an example embodiment.
Specific embodiments thereof have been shown by way of example in the drawings and will herein be described in more detail. These drawings and the written description are not intended to limit the scope of the inventive concepts in any way, but to illustrate the concepts of the present application to those skilled in the art by reference to specific embodiments.
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, the same numbers in different drawings refer to the same or similar elements, unless otherwise indicated. The implementations described in the following exemplary examples are not representative of all implementations consistent with the present application. Rather, they are merely examples of apparatus and methods consistent with some aspects of the present application as detailed in the accompanying claims.
The following describes the technical solutions of the present application and how the technical solutions of the present application solve the above technical problems in detail with specific embodiments. The following embodiments may be combined with each other, and the same or similar concepts or processes may not be described in detail in some embodiments. Embodiments of the present application will be described below with reference to the accompanying drawings.
Fig. 1 is a flowchart of a method for detecting a falling object according to an embodiment of the present application. The first embodiment comprises the following steps:
s101, acquiring image frame information of the surrounding environment of the vehicle in real time.
In this embodiment, the vehicle is an unmanned vehicle that transports ore or rock, and the surrounding environment is the scene captured by a camera mounted on the vehicle. For example, if the camera faces forward, the surrounding environment is the area ahead of the vehicle, and the image frame information contains the road ahead. If the camera is mounted on the left side of the vehicle, the surrounding environment is the area to the vehicle's left, and the image frame information may contain the vehicle's left rearview mirror.
S102, selecting the region of interest information according to the image frame information.
In one example, after the image frame information is acquired, the region-of-interest information is selected within it. The region of interest may be a fixed region or a dynamic region: a fixed region takes a preset position in the image frame as the region of interest, while a dynamic region is delineated dynamically within the frame. In this embodiment, a preset image may be stored in advance, in which the region of interest is white and the remainder is black; overlaying this preset image on the acquired image frame determines the region-of-interest information.
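The fixed-region selection described above can be sketched in a few lines. This is a minimal illustration under assumptions, not the patent's implementation: frames are assumed to be grayscale uint8 NumPy arrays, and `select_roi` is a hypothetical helper name.

```python
import numpy as np

def select_roi(frame: np.ndarray, roi_mask: np.ndarray) -> np.ndarray:
    """Keep only pixels where the preset mask is white (255); zero out the rest.

    `frame` and `roi_mask` are same-shape uint8 grayscale images. The mask
    plays the role of the stored "preset image frame": white inside the
    region of interest, black elsewhere.
    """
    return np.where(roi_mask == 255, frame, 0).astype(np.uint8)

# Toy 4x4 frame with a 2x2 region of interest in the top-left corner.
frame = np.arange(16, dtype=np.uint8).reshape(4, 4)
mask = np.zeros((4, 4), dtype=np.uint8)
mask[:2, :2] = 255
roi = select_roi(frame, mask)
```

Everything outside the white mask region is zeroed, so later contour extraction only ever sees the region of interest.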
S103, acquiring the outline of the object in the region of interest information.
In this embodiment, the region-of-interest information may contain several objects; the edges of the objects within it are identified and taken as the objects' contours.
S104, acquiring the center point of the contour, and calculating the slope of the displacement track of the object at least based on the center point of the contour in the current frame and the center point of the contour in the previous frame.
In this embodiment, the center point of the contour is the position of the contour's center of gravity. The previous frame is image frame information from a moment before the current frame; it may be the immediately preceding frame or a frame several frames earlier. The slope is determined from the contour's center point in the current frame and its center point in the previous frame, because the object's movement is reflected in how the contour moves between the two frames. If the object is ore dropped from the vehicle, between the two frames it moves essentially only in the vertical direction, i.e. the two center points should lie on the same vertical straight line.
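The slope computation can be illustrated as follows. The patent does not spell out the formula; for a purely vertical fall to yield a small-magnitude slope satisfying a condition such as |slope| < 0.5 (used in the second embodiment), the ratio is presumably horizontal displacement over vertical displacement, and that assumption is made here. `trajectory_slope` is a hypothetical helper name.

```python
def trajectory_slope(prev_center, curr_center):
    """Slope of the displacement between two contour centers (x, y), in pixels.

    Computed as horizontal over vertical displacement (dx/dy). This is an
    assumption: a purely vertical fall then gives a slope near 0, consistent
    with the |slope| < 0.5 condition. A fallback guards against dy == 0
    (no vertical motion between the frames).
    """
    dx = curr_center[0] - prev_center[0]
    dy = curr_center[1] - prev_center[1]
    return dx / (dy if dy != 0 else 1e-6)

# An object falling straight down: x stays at 100, y grows by 40 pixels,
# so the slope is 0.0 and the object would be flagged as falling.
slope = trajectory_slope((100, 200), (100, 240))
```

Note that image y-coordinates grow downward, so a falling object's dy is positive in pixel space.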
S105, when the slope meets a first preset condition, prompting that the object is a falling object.
In this embodiment, the first preset condition is a value of a slope of the dropped object, and if the slope meets the first preset condition, the object is prompted to be the dropped object.
According to the method for detecting a falling object provided by this embodiment, image frame information of the vehicle's surroundings is acquired in real time; region-of-interest information is then selected from the image frames; the contour of an object is determined within the region of interest; the contour's center point is obtained; the slope of the object's displacement trajectory is determined from the movement of the center point; and when the slope meets the first preset condition, the object is reported as a falling object. With this technical scheme, falling objects can be detected accurately, the miss rate is reduced, and the damage objects cause to vehicles is reduced.
Fig. 2 is a flowchart of a method for detecting a falling object according to a second embodiment of the present application. The second embodiment includes the following steps:
s201, acquiring image frame information of the surrounding environment of the vehicle in real time.
For example, this step may refer to step S101, and will not be described in detail.
S202, selecting the region of interest information according to the image frame information.
For example, this step may refer to step S102, and will not be described in detail.
S203, acquiring foreground image information in the region of interest information.
In this embodiment, the foreground image information is the image frame information with irrelevant background information removed. For example, if the image frame contains the vehicle's left rearview mirror, the mirror can be treated as background, and the resulting foreground image is the frame without the mirror. The advantage of this is that interfering objects are removed, which improves the accuracy of object detection.
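Foreground extraction can be sketched with simple frame differencing. The patent does not name a background-subtraction method, so this is only an illustrative stand-in (a production system might instead use a learned background model such as OpenCV's `BackgroundSubtractorMOG2`); the threshold value is invented for the example.

```python
import numpy as np

def foreground_mask(prev_frame, curr_frame, thresh=25):
    """Binary foreground mask by absolute frame differencing.

    Pixels whose grayscale value changed by more than `thresh` between the
    two frames are marked 255 (foreground); the rest are 0. The threshold
    here is illustrative, not taken from the patent.
    """
    diff = np.abs(curr_frame.astype(np.int16) - prev_frame.astype(np.int16))
    return np.where(diff > thresh, 255, 0).astype(np.uint8)

prev = np.zeros((4, 4), dtype=np.uint8)
curr = prev.copy()
curr[1, 1] = 200          # an object appears at pixel (1, 1)
mask = foreground_mask(prev, curr)
```

Static background (the unchanged pixels) is suppressed, leaving only the moving object for the subsequent noise-reduction and contour steps.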
S204, noise reduction and screening processing are carried out on the foreground image information.
In this embodiment, noise reduction removes noise points from the foreground image information. One noise-reduction algorithm is to perform an opening operation on the foreground image using an elliptical structuring element as the kernel, for example an elliptical structuring element of size 5×5, yielding the denoised image information. In this embodiment, the screening step may additionally delete foreground image information whose image quality does not meet a preset requirement.
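The opening operation (erosion followed by dilation) can be sketched in pure NumPy. For self-containment this sketch uses a square 3×3 kernel rather than the 5×5 elliptical structuring element mentioned above; with OpenCV one would instead call `cv2.morphologyEx(img, cv2.MORPH_OPEN, cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5)))`.

```python
import numpy as np

def _pad(img, r):
    return np.pad(img, r, mode="constant", constant_values=0)

def erode(img, r=1):
    """Binary erosion with a (2r+1)x(2r+1) square kernel on a 0/255 image."""
    p = _pad(img, r)
    out = np.full_like(img, 255)
    for dy in range(2 * r + 1):
        for dx in range(2 * r + 1):
            out = np.minimum(out, p[dy:dy + img.shape[0], dx:dx + img.shape[1]])
    return out

def dilate(img, r=1):
    """Binary dilation with the same square kernel."""
    p = _pad(img, r)
    out = np.zeros_like(img)
    for dy in range(2 * r + 1):
        for dx in range(2 * r + 1):
            out = np.maximum(out, p[dy:dy + img.shape[0], dx:dx + img.shape[1]])
    return out

def open_op(img, r=1):
    """Opening = erosion then dilation; removes specks smaller than the kernel."""
    return dilate(erode(img, r), r)

# A 1-pixel noise speck disappears; a solid 3x3 blob survives intact.
img = np.zeros((7, 7), dtype=np.uint8)
img[1, 5] = 255           # isolated noise point
img[3:6, 1:4] = 255       # real object
cleaned = open_op(img, r=1)
```

Opening removes foreground regions smaller than the kernel while preserving the shape of larger ones, which is why it suits speck-like noise in a foreground mask.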
S205, acquiring the outline of the object based on the processed foreground image information.
In one example, after acquiring the contour of the object, further comprising:
deleting any contour that meets the second preset condition; wherein the second preset condition is a contour greater than the first threshold and/or a contour smaller than the second threshold.
In this embodiment, the processed foreground image information may contain the contours of several objects, but not all of them are useful. For example, the processed foreground image may include other travelling vehicles, whose contours could then be mistaken for those of dropped objects. To avoid this, contours greater than a first threshold and/or smaller than a second threshold are deleted, where the first threshold is a value greater than the second. Both thresholds may be set empirically.
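The contour-screening step can be illustrated as an area filter. Interpreting "greater/smaller than a threshold" as contour area in pixels is an assumption, and the threshold values below are invented for the example.

```python
def filter_contours(areas, max_area=5000.0, min_area=50.0):
    """Keep only contour areas strictly between min_area and max_area.

    Contours above `max_area` (e.g. another vehicle) or below `min_area`
    (residual noise) are discarded. The patent leaves the thresholds to be
    set empirically; these values are illustrative.
    """
    return [a for a in areas if min_area < a < max_area]

# Areas as cv2.contourArea would report them, in pixels.
kept = filter_contours([10.0, 400.0, 1200.0, 9000.0])
```

Only the two mid-sized contours survive, matching the intent of discarding both oversized and undersized candidates.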
S206, acquiring the center point of the contour, and calculating the slope of the displacement track of the object at least based on the center point of the contour in the current frame and the center point of the contour in the previous frame.
In one example, the step of obtaining a center point of the contour includes: the center point of the circumscribed rectangle of the outline is obtained.
In this embodiment, the contour is generally not a regular shape, so its center point is not easy to determine directly. Instead, the center point of the contour's circumscribed rectangle can be obtained.
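Taking the center of the circumscribed rectangle can be sketched as follows; this is analogous to deriving the center from the rectangle `cv2.boundingRect` returns for the same point set. `bounding_rect_center` is a hypothetical helper name.

```python
def bounding_rect_center(contour_points):
    """Center of the axis-aligned bounding rectangle of a contour.

    `contour_points` is a list of (x, y) pixel coordinates, the kind of
    point list a contour extractor such as cv2.findContours yields for one
    contour. The bounding rectangle's center is well defined even when the
    contour itself is irregular.
    """
    xs = [p[0] for p in contour_points]
    ys = [p[1] for p in contour_points]
    return ((min(xs) + max(xs)) / 2.0, (min(ys) + max(ys)) / 2.0)

# An L-shaped (irregular) contour still gets a well-defined center point.
center = bounding_rect_center([(2, 3), (8, 3), (8, 11), (5, 11), (5, 7)])
```

This center point is what feeds the frame-to-frame slope calculation in the next step.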
In this embodiment, the displacement track slope of the object is calculated by determining the coordinates of the center point of the contour of the current frame and the coordinates of the center point of the contour of the previous frame.
S207, when the slope meets a first preset condition, prompting that the object is a falling object.
In one example, the vehicle is an autonomous vehicle and the first preset condition is that the absolute value of the slope is less than 0.5.
In this embodiment, the absolute value of the slope is less than 0.5, and the object is considered to be dropped from the vehicle.
In one example, the step of prompting the object to be a dropped object includes sending a prompt to a remote management platform or an adjacent vehicle.
In this embodiment, the prompt information includes the volume of the dropped object, its alert level, the position information of the vehicle, and the image frame information. The volume of the dropped object may be determined from its area and a volume factor, where the volume factor depends on the vehicle model and on the model and mounting position of the camera on the vehicle, and the area of the dropped object is determined from its contour. The alert level is determined by the volume of the dropped object, with the specific levels defined by the user: the larger the volume, the higher the alert level. The prompt message is then sent to a remote management platform or an adjacent vehicle, so that workers can deal with the dropped object promptly.
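The volume estimate and alert-level mapping can be sketched as below. The volume-factor value and the level cut-offs are purely illustrative; per the description, the factor depends on the vehicle and camera setup, and the levels are user-defined.

```python
def alert_level(contour_area, volume_factor=0.02):
    """Map a dropped object's estimated volume to an alert level.

    The volume is approximated as the contour area (in pixels) times a
    volume factor that, per the description above, depends on the vehicle
    model and the camera's model and mounting position. The factor value
    and the cut-offs here are invented for illustration.
    """
    volume = contour_area * volume_factor
    if volume < 1.0:
        return "low"
    if volume < 10.0:
        return "medium"
    return "high"

level = alert_level(2000.0)
```

A larger contour area yields a larger estimated volume and hence a higher alert level, matching the monotone relationship stated above.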
According to the method for detecting a falling object provided by this embodiment, foreground image information is acquired within the region-of-interest information; noise reduction and screening are applied to the foreground image; the contour of the object is obtained from the processed foreground image; the contour's center point is obtained; and the slope of the object's displacement trajectory is calculated at least from the contour's center point in the current frame and its center point in the previous frame. With this technical scheme, the region-of-interest information is marked according to information such as the camera's position, and object detection is performed only within the region of interest, which avoids wasting computation on redundant regions and improves detection precision.
Fig. 3 is a schematic structural diagram of a device for detecting a falling object according to a third embodiment of the present application. Specifically, the apparatus 30 of the third embodiment includes:
a first acquiring unit 301 configured to acquire image frame information of a surrounding environment of a vehicle in real time;
a selecting unit 302, configured to select the region of interest information according to the image frame information;
a second acquisition unit 303 for acquiring a contour of an object in the region-of-interest information;
a third obtaining unit 304, configured to obtain a center point of the contour, and calculate a slope of a displacement trajectory of the object based on at least the center point of the contour in the current frame and the center point of the contour in the previous frame;
and a prompting unit 305, configured to prompt the object to be a falling object when the slope satisfies a first preset condition.
It will be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working process of the above-described apparatus may refer to the corresponding process in the foregoing method embodiment, which is not repeated herein.
Fig. 4 is a schematic structural diagram of a device for detecting a falling object according to a fourth embodiment of the present application. Specifically, the apparatus 40 of the fourth embodiment includes:
a first acquiring unit 401 for acquiring image frame information of a surrounding environment of a vehicle in real time;
a selecting unit 402, configured to select the region of interest information according to the image frame information;
a second acquiring unit 403 configured to acquire a contour of an object in the region-of-interest information;
a third obtaining unit 404, configured to obtain a center point of the contour, and calculate a slope of a displacement trajectory of the object based on at least the center point of the contour in the current frame and the center point of the contour in the previous frame;
and a prompting unit 405, configured to prompt the object to be a falling object when the slope meets a first preset condition.
In one example, the second acquisition unit 403 includes:
a first acquiring module 4031, configured to acquire foreground image information in the region of interest information;
a processing module 4032, configured to perform noise reduction and screening processing on the foreground image information;
a second acquiring module 4033, configured to acquire a contour of the object based on the processed foreground image information.
In one example, the apparatus 40 includes:
a deleting unit 406, configured to delete any contour that meets the second preset condition; wherein the second preset condition is a contour greater than the first threshold and/or a contour smaller than the second threshold.
In one example, the third obtaining unit 404 is specifically configured to: the center point of the circumscribed rectangle of the outline is obtained.
In one example, the prompt unit 405 is specifically configured to send a prompt message to a remote management platform or a neighboring vehicle.
In one example, the vehicle is an autonomous vehicle and the first preset condition is that the absolute value of the slope is less than 0.5.
It will be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working process of the above-described apparatus may refer to the corresponding process in the foregoing method embodiment, which is not repeated herein.
Fig. 5 is a block diagram of an electronic device, which may be a mobile phone, computer, digital broadcast terminal, messaging device, game console, tablet device, medical device, exercise device, personal digital assistant, or the like, in accordance with an exemplary embodiment.
The apparatus 500 may include one or more of the following components: a processing component 502, a memory 504, a power supply component 506, a multimedia component 508, an audio component 510, an input/output (I/O) interface 512, a sensor component 514, and a communication component 516.
The processing component 502 generally controls overall operation of the apparatus 500, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing component 502 may include one or more processors 520 to execute instructions to perform all or part of the steps of the methods described above. Further, the processing component 502 can include one or more modules that facilitate interactions between the processing component 502 and other components. For example, the processing component 502 can include a multimedia module to facilitate interaction between the multimedia component 508 and the processing component 502.
The memory 504 is configured to store various types of data to support operations at the apparatus 500. Examples of such data include instructions for any application or method operating on the apparatus 500, contact data, phonebook data, messages, pictures, videos, and the like. The memory 504 may be implemented by any type or combination of volatile or nonvolatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disk.
The power supply component 506 provides power to the various components of the device 500. The power components 506 may include a power management system, one or more power sources, and other components associated with generating, managing, and distributing power for the device 500.
The multimedia component 508 includes a screen that provides an output interface between the apparatus 500 and the user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from a user. The touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the touch panel. The touch sensor may sense not only the boundary of a touch or slide action, but also the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 508 includes a front-facing camera and/or a rear-facing camera. The front-facing camera and/or the rear-facing camera may receive external multimedia data when the apparatus 500 is in an operational mode, such as a photographing mode or a video mode. Each front and rear camera may be a fixed optical lens system or have focal length and optical zoom capability.
The audio component 510 is configured to output and/or input audio signals. For example, the audio component 510 includes a Microphone (MIC) configured to receive external audio signals when the device 500 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may be further stored in the memory 504 or transmitted via the communication component 516. In some embodiments, the audio component 510 further comprises a speaker for outputting audio signals.
The I/O interface 512 provides an interface between the processing component 502 and peripheral interface modules, which may be keyboards, click wheels, buttons, and the like. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor assembly 514 includes one or more sensors for providing status assessments of various aspects of the device 500. For example, the sensor assembly 514 may detect the on/off state of the device 500 and the relative positioning of components, such as the display and keypad of the device 500. The sensor assembly 514 may also detect a change in position of the device 500 or a component of the device 500, the presence or absence of user contact with the device 500, the orientation or acceleration/deceleration of the device 500, and a change in temperature of the device 500. The sensor assembly 514 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact. The sensor assembly 514 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 514 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 516 is configured to facilitate communication between the apparatus 500 and other devices in a wired or wireless manner. The apparatus 500 may access a wireless network based on a communication standard, such as WiFi, 2G, or 3G, or a combination thereof. In one exemplary embodiment, the communication component 516 receives broadcast signals or broadcast-related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 516 further includes a near-field communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on radio-frequency identification (RFID) technology, infrared data association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the apparatus 500 may be implemented by one or more application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic elements, for performing the methods described above.
In an exemplary embodiment, a non-transitory computer-readable storage medium is also provided, such as the memory 504 including instructions executable by the processor 520 of the apparatus 500 to perform the above-described method. For example, the non-transitory computer-readable storage medium may be a ROM, a random-access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, or the like.
A non-transitory computer-readable storage medium stores instructions which, when executed by a processor of an electronic device, cause the electronic device to perform the method of detecting a falling object described above.
The application also discloses a computer program product comprising a computer program which, when executed by a processor, implements the method described in the present embodiments.
Various implementations of the systems and techniques described above may be realized in digital electronic circuitry, integrated circuitry, field-programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), application-specific standard products (ASSPs), systems on chip (SOCs), complex programmable logic devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may be implemented in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be a special-purpose or general-purpose programmable processor and which may receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for carrying out the methods of the present application may be written in any combination of one or more programming languages. This program code may be provided to a processor or controller of a general-purpose computer, special-purpose computer, or other programmable data processing apparatus, such that the program code, when executed by the processor or controller, causes the functions/operations specified in the flowcharts and/or block diagrams to be implemented. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine, or entirely on the remote machine or electronic device.
In the context of this application, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and pointing device (e.g., a mouse or trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., a data electronic device), or that includes a middleware component (e.g., an application electronic device), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include local area networks (LANs), wide area networks (WANs), and the Internet.
The computer system may include a client and an electronic device. The client and the electronic device are generally remote from each other and typically interact through a communication network. The relationship of client and electronic device arises by virtue of computer programs running on the respective computers and having a client-electronic device relationship to each other. The electronic device may be a cloud electronic device, also called a cloud computing electronic device or cloud host, which is a host product in a cloud computing service system and overcomes the defects of high management difficulty and weak service scalability in traditional physical hosts and Virtual Private Server (VPS) services. The electronic device may also be an electronic device of a distributed system or an electronic device incorporating a blockchain. It should be appreciated that steps may be reordered, added, or deleted in the various flows shown above. For example, the steps described in the present application may be performed in parallel, sequentially, or in a different order, provided that the desired results of the technical solutions disclosed in the present application can be achieved; no limitation is imposed herein.
Other embodiments of the present application will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the application that follow its general principles, including such departures from the present disclosure as come within known or customary practice in the art to which the application pertains. It is intended that the specification and examples be considered exemplary only, with the true scope and spirit of the application being indicated by the following claims.
It is to be understood that the present application is not limited to the precise arrangements and instrumentalities shown in the drawings, which have been described above, and that various modifications and changes may be effected without departing from the scope thereof. The scope of the application is limited only by the appended claims.

Claims (10)

1. A method of detecting a falling object, the method comprising:
acquiring image frame information of the surrounding environment of the vehicle in real time;
selecting region of interest information according to the image frame information;
acquiring a contour of an object in the region of interest information;
acquiring a center point of the contour, and calculating the slope of the displacement track of the object at least based on the center point of the contour in the current frame and the center point of the contour in the previous frame;
and when the slope meets a first preset condition, prompting that the object is a falling object.
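The core of claim 1 — judging whether an object is falling from the slope of its center-point displacement trajectory across consecutive frames — can be sketched as follows. This is an illustrative reconstruction, not the patented implementation: the helper names are hypothetical, the image-coordinate convention is an assumption (the claims do not specify one), and the threshold of 0.5 is taken from claim 6.

```python
# Sketch of the slope test in claims 1, 4, and 6. Hypothetical helper names;
# the patent does not specify the coordinate convention or the tracker.

def contour_center(bbox):
    """Center point of a contour's circumscribed (bounding) rectangle
    (claim 4). bbox is (x, y, w, h) in pixel coordinates."""
    x, y, w, h = bbox
    return (x + w / 2.0, y + h / 2.0)

def trajectory_slope(prev_center, curr_center):
    """Slope of the displacement trajectory between the contour center point
    in the previous frame and in the current frame (claim 1)."""
    dx = curr_center[0] - prev_center[0]
    dy = curr_center[1] - prev_center[1]
    if dx == 0:
        return float("inf")  # purely vertical displacement in image space
    return dy / dx

def is_falling_object(prev_center, curr_center, threshold=0.5):
    """First preset condition as stated in claim 6: |slope| < 0.5."""
    return abs(trajectory_slope(prev_center, curr_center)) < threshold
```

For example, a contour whose bounding rectangle moves from (100, 50, 20, 20) to (140, 54, 20, 20) between frames has center points (110, 60) and (150, 64), giving a slope of 0.1, so the claim-6 condition holds. Note that |slope| < 0.5 flags near-horizontal displacement trajectories under the axis convention assumed here.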
2. The method according to claim 1, wherein the step of acquiring the contour of the object in the region of interest information comprises:
acquiring foreground image information in the region of interest information;
performing noise reduction and filtering on the foreground image information;
and acquiring the contour of the object based on the processed foreground image information.
3. The method of claim 2, further comprising, after acquiring the contour of the object:
deleting contours meeting a second preset condition; wherein the second preset condition is that the contour is greater than a first threshold and/or that the contour is smaller than a second threshold.
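The contour screening of claim 3 might look like the following sketch. The claim does not say which measure of the contour is compared against the two thresholds, so contour area is an assumption, and the function name is hypothetical.

```python
# Sketch of the second preset condition of claim 3: delete contours that are
# greater than a first threshold and/or smaller than a second threshold.
# "Greater/smaller" is interpreted here as contour area (an assumption).

def filter_contours(contour_areas, first_threshold, second_threshold):
    """Keep only contours whose area lies within [second_threshold,
    first_threshold]; contours outside that band are deleted."""
    return [area for area in contour_areas
            if not (area > first_threshold or area < second_threshold)]
```

For example, with a first threshold of 100 and a second threshold of 10, areas of 5 and 500 are deleted (too small to be an object, or too large to be a falling one), while an area of 50 is kept for tracking.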
4. The method of claim 2, wherein the step of acquiring the center point of the contour comprises: acquiring the center point of a circumscribed rectangle of the contour.
5. The method of claim 1, wherein the step of prompting that the object is a falling object comprises sending a prompt message to a remote management platform or a neighboring vehicle.
6. The method of claim 1, wherein the vehicle is an autonomous vehicle and the first predetermined condition is that an absolute value of the slope is less than 0.5.
7. An apparatus for detecting a falling object, the apparatus comprising:
a first acquiring unit configured to acquire image frame information of the surrounding environment of the vehicle in real time;
a selecting unit configured to select region-of-interest information according to the image frame information;
a second acquiring unit configured to acquire a contour of an object in the region-of-interest information;
a third acquisition unit configured to acquire a center point of the contour, and calculate a slope of a displacement trajectory of the object based on at least the center point of the contour in the current frame and the center point of the contour in the previous frame;
and a prompting unit configured to prompt that the object is a falling object when the slope meets the first preset condition.
8. The apparatus of claim 7, wherein the second acquisition unit comprises:
a first acquiring module configured to acquire foreground image information in the region-of-interest information;
a processing module configured to perform noise reduction and filtering on the foreground image information;
and a second acquiring module configured to acquire the contour of the object based on the processed foreground image information.
9. An electronic device, comprising: a processor, and a memory communicatively coupled to the processor;
the memory stores computer-executable instructions;
the processor executes computer-executable instructions stored in the memory to implement the method of any one of claims 1-6.
10. A computer readable storage medium having stored therein computer executable instructions which when executed by a processor are adapted to carry out the method of any one of claims 1-6.
CN202310095644.2A 2023-01-20 2023-01-20 Method for detecting falling object and electronic equipment Pending CN116052106A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310095644.2A CN116052106A (en) 2023-01-20 2023-01-20 Method for detecting falling object and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310095644.2A CN116052106A (en) 2023-01-20 2023-01-20 Method for detecting falling object and electronic equipment

Publications (1)

Publication Number Publication Date
CN116052106A true CN116052106A (en) 2023-05-02

Family

ID=86125437

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310095644.2A Pending CN116052106A (en) 2023-01-20 2023-01-20 Method for detecting falling object and electronic equipment

Country Status (1)

Country Link
CN (1) CN116052106A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116824550A (en) * 2023-08-29 2023-09-29 深圳魔视智能科技有限公司 Camera image processing method, device, computer equipment and storage medium
CN116824550B (en) * 2023-08-29 2024-01-30 深圳魔视智能科技有限公司 Camera image processing method, device, computer equipment and storage medium

Similar Documents

Publication Publication Date Title
CN108549880B (en) Collision control method and device, electronic equipment and storage medium
CN108596116B (en) Distance measuring method, intelligent control method and device, electronic equipment and storage medium
RU2656933C2 (en) Method and device for early warning during meeting at curves
CN104537860B (en) Driving safety prompt method and device
US11301726B2 (en) Anchor determination method and apparatus, electronic device, and storage medium
CN108583571A (en) Collision control method and device, electronic equipment and storage medium
EP3163498A2 (en) Alarming method and device
CN107480665B (en) Character detection method and device and computer readable storage medium
CN111104920B (en) Video processing method and device, electronic equipment and storage medium
CN110197518B (en) Curve Thinning Method and Device
CN112669583A (en) Alarm threshold value adjusting method and device, electronic equipment and storage medium
CN108171225B (en) Lane detection method, device, terminal and storage medium
CN109218598B (en) Camera switching method and device and unmanned aerial vehicle
CN116052106A (en) Method for detecting falling object and electronic equipment
CN113442950B (en) Automatic driving control method, device and equipment based on multiple vehicles
CN109919126B (en) Method and device for detecting moving object and storage medium
CN114252886A (en) Obstacle contour determination method, device, equipment and storage medium
CN115743098B (en) Parking method, device, storage medium, electronic equipment and vehicle
CN113344899B (en) Mining working condition detection method and device, storage medium and electronic equipment
CN116030551B (en) Method, device, equipment and storage medium for testing vehicle autopilot software
CN116540252B (en) Laser radar-based speed determination method, device, equipment and storage medium
CN117163069A (en) Speed planning method, device, equipment and storage medium based on automatic driving
CN117831224B (en) Fall alarm method, device, equipment and medium based on millimeter radar
CN115082473B (en) Dirt detection method and device and electronic equipment
CN113450298B (en) Multi-sensor-based view map processing method, device and equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination