CN113706608A - Pose detection device and method for target object in predetermined area and electronic equipment - Google Patents


Info

Publication number
CN113706608A
CN113706608A (application number CN202110958929.5A; granted as CN113706608B)
Authority
CN
China
Prior art keywords
predetermined area
boundary line
virtual frame
target object
frame
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110958929.5A
Other languages
Chinese (zh)
Other versions
CN113706608B (en)
Inventor
马志军
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Yunxiang Shanghai Intelligent Technology Co ltd
Original Assignee
Yunxiang Shanghai Intelligent Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Yunxiang Shanghai Intelligent Technology Co ltd filed Critical Yunxiang Shanghai Intelligent Technology Co ltd
Priority to CN202110958929.5A priority Critical patent/CN113706608B/en
Publication of CN113706608A publication Critical patent/CN113706608A/en
Application granted granted Critical
Publication of CN113706608B publication Critical patent/CN113706608B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/70 Determining position or orientation of objects or cameras
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G06F 18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G08 SIGNALLING
    • G08G TRAFFIC CONTROL SYSTEMS
    • G08G 1/00 Traffic control systems for road vehicles
    • G08G 1/01 Detecting movement of traffic to be counted or controlled
    • G08G 1/04 Detecting movement of traffic to be counted or controlled using optical or ultrasonic detectors
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G06N 3/08 Learning methods
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10016 Video; Image sequence
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30236 Traffic on road, railway or crossing

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The application discloses a device, a method, and electronic equipment for detecting the pose of a target object in a predetermined area. The pose detection method comprises: acquiring images of the predetermined area in real time; determining, through an image recognition algorithm, the relative position between a virtual frame enclosing the target object and a boundary line of the predetermined area in each frame of image, wherein the relative positional relationship between the virtual frame and the boundary line is determined by the relative position between the bottom edge of the virtual frame and the boundary line; and determining the pose of the target object within the predetermined area according to the relative position between the virtual frame and the boundary line in each frame of image.

Description

Pose detection device and method for target object in predetermined area and electronic equipment
Technical Field
The invention relates to the technical field of target object detection, and in particular to a pose detection device and method for a target object in a predetermined area, and to electronic equipment.
Background
At present, roadside smart parking is an important part of the smart city, and parking detection mainly relies on means such as video and geomagnetic sensing. Judgment of vehicle parking behavior is mainly based on geomagnetic detectors buried in parking spaces or cameras installed at the roadside. Although geomagnetic detection has high accuracy, it has many disadvantages.
The principle of geomagnetic detection is to detect, from changes in the earth's magnetic field, whether a metal object is present in a parking space. However, it cannot determine that the object is necessarily a motor vehicle; that is, the attributes of the object cannot be determined. Secondly, this method requires drilling a hole in every parking space to install a buried wireless geomagnetic detector, which means destroying the road surface corresponding to every parking space. After a wireless geomagnetic detector is buried at each parking space, when a vehicle stops there, the detected deflection of the earth's magnetic field is processed and sent to a roadside relay receiver, and the relay forwards it to equipment such as a background server, so that the vehicle is detected.
In addition, detecting and managing roadside vehicles by geomagnetism has the following drawbacks: batteries need to be replaced regularly, and battery replacement causes heavy pollution; relay receivers need to be erected at high cost; the radio link is easily disturbed; and the buried detectors are easily damaged during road maintenance or need to be dug out.
As for detecting and managing roadside parking by video detection in the prior art, current video detection can only cover about two parking spaces per camera. Due to the viewing angle of the camera, the farther back the vehicles are, the more they overlap, and the harder it is to judge their behavior. Therefore each camera usually judges two to three parking spaces, and multiple cameras have to be combined for the judgment. In addition, detecting and managing roadside parking by video detection requires excavating the road for subsequent camera installation.
Disclosure of Invention
An advantage of the present invention is to provide a pose detection apparatus and method for a target object in a predetermined area, and an electronic device, in which the poses of a plurality of target objects with respect to the predetermined area can be detected by the pose detection method. Preferably, the target object is a vehicle.
An advantage of the present invention is to provide a pose detection apparatus and method for a target object in a predetermined area, and an electronic device, in which there is no need to dig multiple holes for installing geomagnetic detectors in the predetermined area when detecting the pose of a target object with respect to the predetermined area.
Another advantage of the present invention is to provide a pose detection apparatus and method for a target object in a predetermined area, and an electronic device, in which, when the poses of a plurality of target objects with respect to the predetermined area are detected, the influence of overlapping object positions on the detection result can be avoided.
Another advantage of the present invention is to provide a pose detection apparatus and method for a target object in a predetermined area, and an electronic device, in which, when the poses of a plurality of target objects are detected, it can be determined whether a target object is about to move out of or into the predetermined area.
Another advantage of the present invention is to provide a pose detection apparatus and method for a target object in a predetermined area, and an electronic device, in which a plurality of target objects can be detected by the pose detection method.
To achieve at least one of the above advantages, the present invention provides a method for detecting a pose of a target object in a predetermined area, the method comprising:
acquiring images of the predetermined area in real time;
determining, through an image recognition algorithm, the relative position between a virtual frame enclosing the target object and a boundary line of the predetermined area in each frame of image, wherein the relative positional relationship between the virtual frame and the boundary line is determined by the relative position between the bottom edge of the virtual frame and the boundary line;
and determining the pose of the target object within the predetermined area according to the relative position between the virtual frame and the boundary line in each frame of image.
According to an embodiment of the present invention, determining a relative position between a virtual frame framing the object and a boundary line of the predetermined area in each frame of the image through an image recognition algorithm includes:
determining parameters of the virtual frame through the YOLO algorithm;
and determining the relative position between the virtual frame of the target object and the boundary line of the preset area according to the parameters of the virtual frame and the parameters of the boundary line.
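As an illustrative sketch (not the patent's implementation), the relative position of the bottom edge and the boundary line can be computed with simple plane geometry. The assumptions here are mine: the boundary line 300 is given by two image points `a` and `b`, the "inside" of the predetermined area is taken to be the left side of the directed line from `a` to `b`, and the virtual frame is an axis-aligned box `(x1, y1, x2, y2)` whose bottom edge lies at `y2`:

```python
def side(p, a, b):
    """Signed area test: > 0 when point p lies left of the directed
    line a -> b (treated as 'inside' the predetermined area here)."""
    return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])

def bottom_edge_length_inside(box, a, b):
    """Length of the portion of the box's bottom edge that lies inside
    the predetermined area. box = (x1, y1, x2, y2); the bottom edge is
    the horizontal segment from (x1, y2) to (x2, y2)."""
    x1, _, x2, y2 = box
    s_left = side((x1, y2), a, b)
    s_right = side((x2, y2), a, b)
    if s_left >= 0 and s_right >= 0:      # whole edge inside the area
        return float(x2 - x1)
    if s_left < 0 and s_right < 0:        # whole edge outside the area
        return 0.0
    # Edge crosses the boundary line: intersect y = y2 with line a-b
    # (assumes the boundary line is not horizontal in the image).
    t = (y2 - a[1]) / (b[1] - a[1])
    xi = a[0] + t * (b[0] - a[0])
    return float(x2 - xi) if s_right >= 0 else float(xi - x1)
```

The portion of the edge outside the area is then simply the full edge length minus this value, which is all the second embodiment below would need.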
According to an embodiment of the present invention, determining a relative position between the virtual frame of the target object and the boundary line of the predetermined area according to the parameter of the virtual frame and the parameter of the boundary line includes:
according to the length variation trend, within the predetermined area, of the portion of the virtual frame's bottom edge that intersects the boundary line;
wherein determining the pose of the target object in the predetermined area according to the relative position between the virtual frame and the boundary line in each frame of image comprises:
determining the position variation trend of the target object relative to the predetermined area according to the length variation trend, within the predetermined area, of the portion of the virtual frame's bottom edge that intersects the boundary line.
According to an embodiment of the present invention, if the length of the portion, within the predetermined area, of the virtual frame's bottom edge that intersects the boundary line gradually decreases as a whole, it is determined that the target object is leaving the predetermined area; if that length gradually increases as a whole, it is determined that the target object is about to enter the predetermined area; and if that length remains unchanged as a whole, it is determined that the target object stays within the predetermined area.
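This three-way decision can be sketched as a small classifier over the per-frame in-area lengths, assuming the natural mapping (decreasing length means leaving, increasing means entering). The tolerance `tol` and the first-to-last comparison are illustrative assumptions, since the patent does not specify how "gradually" or "as a whole" is to be quantified:

```python
def classify_pose(inside_lengths, tol=1.0):
    """Classify the target's trend from the per-frame lengths of the
    bottom-edge portion lying INSIDE the predetermined area."""
    if not inside_lengths or max(inside_lengths) == 0:
        return "not in area"            # edge never entered the area
    delta = inside_lengths[-1] - inside_lengths[0]
    if delta < -tol:
        return "leaving"                # overall decrease: driving out
    if delta > tol:
        return "entering"               # overall increase: driving in
    return "staying"                    # essentially unchanged: parked
```

A robust implementation would likely smooth the length sequence over a sliding window before comparing, but that refinement is outside what the text describes.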
According to an embodiment of the present invention, determining a relative position between the virtual frame of the target object and the boundary line of the predetermined area according to the parameter of the virtual frame and the parameter of the boundary line includes:
according to the length variation trend, outside the predetermined area, of the portion of the virtual frame's bottom edge that intersects the boundary line;
wherein judging the pose of the target object relative to the predetermined area according to the relative position between the virtual frame and the boundary line in each frame of image comprises:
determining the position variation trend of the target object relative to the predetermined area according to the length variation trend, outside the predetermined area, of the portion of the virtual frame's bottom edge that intersects the boundary line.
According to an embodiment of the present invention, if the length of the portion, outside the predetermined area, of the virtual frame's bottom edge that intersects the boundary line gradually decreases as a whole, it is determined that the target object is about to enter the predetermined area; if that length gradually increases as a whole, it is determined that the target object is about to leave the predetermined area; and if that length remains unchanged as a whole, it is determined that the target object remains in the predetermined area.
According to an embodiment of the present invention, if the virtual frame does not intersect the boundary line and lies outside it, it is determined that the target object has not entered the predetermined area.
According to an aspect of the present invention, there is provided a charging method for a target object staying in a predetermined area, including:
the pose detection method of the target object in a predetermined area according to any of the above; and
determining the fee required for the target object to stay in the predetermined area, taking the time period during which the pose of the target object in the predetermined area remains unchanged as the charging time period and applying the charging rule for that time period.
According to an aspect of the present invention, there is provided a pose detection apparatus of a target object within a predetermined area, the apparatus including:
an acquisition module for acquiring a plurality of images of the predetermined area;
a processing module, wherein the processing module is configured to be communicatively connected to the acquiring module and configured to determine a relative position between a virtual frame framing the object and a boundary line of the predetermined area in each frame of image through an image recognition algorithm, wherein a relative positional relationship between the virtual frame and the boundary line is determined by a relative position between a bottom side of the virtual frame and the boundary line;
an output module, wherein the output module is communicatively connected to the processing module, wherein the output module is configured to determine the pose of the object within the predetermined area based on the relative position between the virtual frame and the boundary line in each frame of image.
According to another aspect of the present invention, there is provided an electronic apparatus comprising:
one or more processors; a memory; and one or more computer programs, wherein the one or more computer programs are stored in the memory, the one or more computer programs comprising instructions which, when executed by the apparatus, cause the apparatus to perform the method of any of the above.
Drawings
Fig. 1 shows a flowchart of a pose detection method of a target object in a predetermined area according to the present invention.
Fig. 2 is a schematic diagram showing detection of a vehicle relative to a parking space by the pose detection method of the target object in the predetermined area.
Fig. 3A and 3B are schematic diagrams respectively showing two states of detecting a vehicle relative to a parking space by the pose detection method of the target object in the predetermined area according to the present invention.
Fig. 4A and 4B are schematic diagrams illustrating detection of two states of a vehicle relative to a parking space by the pose detection method of the target object in the predetermined area according to another embodiment of the present invention.
Fig. 5 is a block diagram showing the configuration of the pose detection apparatus of the target object in the predetermined area according to the present invention.
Fig. 6 shows a block diagram of the electronic device according to the invention.
Detailed Description
The following description is presented to disclose the invention so as to enable any person skilled in the art to practice the invention. The preferred embodiments in the following description are given by way of example only, and other obvious variations will occur to those skilled in the art. The basic principles of the invention, as defined in the following description, may be applied to other embodiments, variations, modifications, equivalents, and other technical solutions without departing from the spirit and scope of the invention.
Referring to fig. 1 to 6, a method for detecting a pose of an object in a predetermined area according to a preferred embodiment of the present invention will be described in detail below.
By the pose detection method of the target object in the predetermined area, the poses of a plurality of target objects, in particular vehicles, with respect to the predetermined area can be detected. That is to say, with this method it can be determined whether a vehicle is about to enter the predetermined area or is staying in it, and the influence of overlapping object positions on the detection result can be effectively avoided.
Exemplary method for detecting position of target object in predetermined area
The pose detection method of the target object in the predetermined area comprises the following steps:
S101, acquiring images of the predetermined area in real time;
in a preferred embodiment, the image within the predetermined area comprises an image of all objects staying within the predetermined area, and the predetermined area is formed by at least one border line 300 and/or a border formed by real objects in the real scene corresponding to the image.
In particular, when the pose detection method of the target object in the predetermined area is used to detect vehicle parking, the predetermined area is composed of a plurality of parking spaces. In this case, the boundary line 300 corresponds, in the real scene, to the line on the side of the parking spaces away from the curb of the road, as shown in fig. 1 and 2.
To enable a person skilled in the art to understand the present invention, at least one embodiment is described by taking as an example a predetermined area formed by a plurality of roadside parking spaces, wherein the boundary line 300 of the predetermined area is taken, by way of example, as the line on the side away from the curb of the road.
Preferably, the predetermined area further comprises a plurality of sub-areas, each sub-area corresponding to a real-scene space of a predetermined size for stopping a target object. Preferably, the real scene corresponding to each sub-area is used to park a vehicle. In other words, each sub-area corresponds to a parking space, as shown in fig. 2.
Further, the image within the predetermined area may be obtained by at least one image pickup device disposed in the vicinity of the predetermined area. Preferably, the image within the predetermined area may be obtained by one image capturing device disposed in the vicinity of the predetermined area.
The pose detection method of the target object in the predetermined area further comprises the following steps:
S102, determining the relative position between a virtual frame 400 enclosing the target object and a boundary line 300 of the predetermined area in each frame of image through an image recognition algorithm, wherein the relative positional relationship between the virtual frame 400 and the boundary line 300 is determined by the relative position between the bottom edge of the virtual frame 400 and the boundary line 300.
It is worth mentioning that, through an image recognition algorithm, a virtual frame 400 enclosing the target object can be formed for any object at least partially overlapping the predetermined area, and the parameters of the virtual frame 400 can be determined. Preferably, the virtual frame 400 and its parameters are determined by the YOLO algorithm. Those skilled in the art will appreciate that other image algorithms may also be used to determine the virtual frame 400 and its parameters.
After the parameters of the virtual frame 400 are determined, the pose of the target object can be further determined according to the relative position between the virtual frame 400 and the boundary line 300.
In one embodiment, determining the relative position between the virtual frame 400 of the object and the boundary line 300 of the predetermined area according to the parameters of the virtual frame 400 and the parameters of the boundary line 300 comprises:
S1021, determining the length variation trend, within the predetermined area, of the portion of the bottom edge of the virtual frame 400 that intersects the boundary line 300.
Specifically, if the length of the portion, within the predetermined area, of the bottom edge of the virtual frame 400 that intersects the boundary line 300 gradually decreases as a whole, it is determined that the target object is leaving the predetermined area; if that length gradually increases as a whole, it is determined that the target object is about to enter the predetermined area; and if that length remains unchanged as a whole, it is determined that the target object stays within the predetermined area.
Referring to fig. 3A and 3B, a vehicle serving as the target object stays within the predetermined area, and a plurality of vehicles stop in the sub-areas of the predetermined area in a predetermined manner.
It is understood that, when the image of the predetermined area is obtained by a single camera, even though the positions of the vehicles partially overlap, the virtual frame 400 enclosing each vehicle is not affected by the overlapping portions. When one of the vehicles enters a sub-area, the bottom edge of the virtual frame 400 enclosing that vehicle intersects the boundary line 300.
In addition, by analyzing the variation trend of the length, within the predetermined area, of the portion of the bottom edge of the virtual frame 400 that intersects the boundary line 300, the pose of the corresponding vehicle relative to the sub-area can be judged.
For example, when the length of the portion, within the predetermined area, of the bottom edge of the virtual frame 400 enclosing a vehicle that intersects the boundary line 300 gradually decreases as a whole until it stabilizes, the vehicle is moving away from the predetermined area. In other words, the vehicle is driving out of the parking space.
Conversely, when that length gradually increases as a whole until it stabilizes at a non-zero value, the vehicle is driving into the predetermined area. In other words, the vehicle is driving into the parking space.
And when the length of the portion, within the predetermined area, of the bottom edge of the vehicle's virtual frame 400 that intersects the boundary line 300 is 0, the vehicle has not entered the predetermined area, that is, it is not parked in the parking space corresponding to the sub-area.
Referring to fig. 4A and 4B, in another embodiment, determining the relative position between the virtual frame 400 of the object and the boundary line 300 of the predetermined area according to the parameters of the virtual frame 400 and the parameters of the boundary line 300 includes:
s1022, a length variation trend of the line segment where the bottom line on the virtual frame 400 intersected with the boundary line 300 is located outside the predetermined region is determined.
The pose detection method of the target object in the predetermined area further comprises the following steps:
S103, determining the pose of the target object in the predetermined area according to the relative position between the virtual frame 400 and the boundary line 300 in each frame of image.
Specifically, if the length of the portion, outside the predetermined area, of the bottom edge of the virtual frame 400 that intersects the boundary line 300 gradually decreases as a whole, it is determined that the target object is about to enter the predetermined area; if that length gradually increases as a whole, it is determined that the target object is about to leave the predetermined area; and if that length remains unchanged as a whole, it is determined that the target object remains in the predetermined area.
Referring to fig. 3, a vehicle serving as the target object stops in the predetermined area, and a plurality of vehicles stop in the sub-areas of the predetermined area in a predetermined manner.
It is understood that, when the image of the predetermined area is obtained by a single camera, even though the positions of the vehicles partially overlap, the virtual frame 400 enclosing each vehicle is not affected by the overlapping portions. When one of the vehicles enters a sub-area, the bottom edge of the virtual frame 400 enclosing that vehicle intersects the boundary line 300.
In addition, by analyzing the variation trend of the length, outside the predetermined area, of the portion of the bottom edge of the virtual frame 400 that intersects the boundary line 300, the pose of the corresponding vehicle relative to the sub-area can be judged.
For example, when the length of the portion, outside the predetermined area, of the bottom edge of the virtual frame 400 enclosing a vehicle that intersects the boundary line 300 gradually decreases as a whole until it stabilizes, the vehicle is moving into the predetermined area. In other words, the vehicle is driving into the parking space.
Conversely, when that length gradually increases as a whole until it stabilizes at a non-zero value, the vehicle is leaving the predetermined area. In other words, the vehicle is driving out of the parking space.
And when the length of the portion, outside the predetermined area, of the bottom edge of the vehicle's virtual frame 400 that intersects the boundary line 300 is 0, the vehicle has not entered the predetermined area, that is, it is not parked in the parking space corresponding to the sub-area.
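A sketch of the same kind applies to this second embodiment, with the mapping inverted because the measured quantity is the portion of the bottom edge outside the predetermined area. As before, `tol` is an assumed noise tolerance not specified in the text:

```python
def classify_pose_outside(outside_lengths, tol=1.0):
    """Classify the target's trend from the per-frame lengths of the
    bottom-edge portion lying OUTSIDE the predetermined area."""
    delta = outside_lengths[-1] - outside_lengths[0]
    if delta < -tol:
        return "entering"           # outside portion shrinking: driving in
    if delta > tol:
        return "leaving"            # outside portion growing: driving out
    return "staying"                # essentially unchanged
```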
It can be understood that, since the size of the virtual frame 400 shrinks as the imaged extent of the vehicle shrinks, and the virtual frame 400 always tightly encloses the vehicle, analyzing the features of the bottom edge of the virtual frame 400 makes it possible to determine the relative positional relationship of the vehicle with respect to a sub-area of the predetermined area, i.e., a parking space, while preventing the overlapping positions of multiple target objects from affecting the detection result.
According to another aspect of the present invention, a charging method is provided for calculating the fee for a vehicle staying in a sub-area of the predetermined area, i.e., a parking space.
Specifically, the charging method includes: the pose detection method of the target object in the predetermined area described above; and
determining the fee required for the target object to stay in the predetermined area, taking the time period during which the pose of the target object in the predetermined area remains unchanged as the charging time period and applying the charging rule for that time period.
The charging rule can be customized per user. For example, the charging rule may specify that no charge applies while the pose remains unchanged in the predetermined area for up to half an hour, and that 30 dollars are charged per hour beyond the first half hour. It should be noted that the charging rule may be customized by the user, and this embodiment is not limited in this respect.
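As a hedged illustration of such a rule, the fee for the stationary period could be computed as below. The free half hour and the per-hour amount follow the example in the text, but rounding up to whole started hours is my assumption, since the charging rules are explicitly user-defined:

```python
import math

def parking_fee(minutes_parked, free_minutes=30, rate_per_hour=30):
    """Fee for a stay whose pose remained unchanged for `minutes_parked`
    minutes: the first `free_minutes` are free, then each started hour
    beyond the free period is charged at `rate_per_hour`."""
    if minutes_parked <= free_minutes:
        return 0
    billable = minutes_parked - free_minutes
    return math.ceil(billable / 60) * rate_per_hour
```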
Exemplary apparatus for detecting position of object within predetermined area
As shown in fig. 5, an embodiment of the present application provides an apparatus 100 for detecting a position of an object in a predetermined area, where the apparatus 100 includes:
an obtaining module 10, configured to obtain a plurality of images of the predetermined area;
a processing module 20, wherein the processing module 20 is configured to be communicatively connected to the obtaining module 10, and is configured to determine, by an image recognition algorithm, a relative position between a virtual frame framing the object and a boundary line of the predetermined area in each frame of image, wherein a relative positional relationship between the virtual frame and the boundary line is determined by a relative position between a bottom side of the virtual frame and the boundary line;
an output module 30, wherein the output module 30 is communicatively connected to the processing module 20, and is configured to determine the pose of the object within the predetermined area according to the relative position between the virtual frame and the boundary line in each frame of image.
Preferably, the processing module 20 is further configured to determine a parameter of the virtual frame by a Yolo algorithm and determine a relative position between the virtual frame of the object and the boundary line of the predetermined area according to the parameter of the virtual frame and the parameter of the boundary line.
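A minimal geometric sketch of this determination, under the simplifying assumption that the boundary line is vertical at x = boundary_x with the predetermined area on its right, and that the detector (e.g., YOLO) returns the virtual frame as (x1, y1, x2, y2), so the bottom edge runs from (x1, y2) to (x2, y2). The function names and the vertical-boundary assumption are illustrative, not part of the patent.

```python
def bottom_edge_length_inside(box, boundary_x):
    """Length of the virtual frame's bottom edge lying inside the area
    (area assumed to be x >= boundary_x)."""
    x1, _y1, x2, _y2 = box
    return max(0.0, x2 - max(x1, boundary_x))

def bottom_edge_length_outside(box, boundary_x):
    """Length of the bottom edge lying outside the area (x < boundary_x)."""
    x1, _y1, x2, _y2 = box
    return max(0.0, min(x2, boundary_x) - x1)
```

For a slanted parking-space boundary, the same idea applies with a segment–line intersection instead of a simple x-coordinate clip.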
Preferably, the processing module 20 is further configured to determine the length variation trend of the line segment of the virtual frame's bottom edge, intersecting the boundary line, that lies within the predetermined area. Accordingly, the output module 30 is configured to output the position variation trend of the object relative to the predetermined area according to that length variation trend.
Specifically, if the length of the segment of the virtual frame's bottom edge (intersecting the boundary line) that lies within the predetermined area gradually decreases as a whole, the output module 30 outputs that the target object is leaving the predetermined area; if that length gradually increases as a whole, the output module 30 outputs that the target object is about to enter the predetermined area; and if that length does not change as a whole, the output module 30 outputs that the target object remains within the predetermined area.
In another embodiment, the processing module 20 is configured to determine the length variation trend of the line segment of the virtual frame's bottom edge, intersecting the boundary line, that lies outside the predetermined area; the output module 30 is configured to determine the position variation trend of the object relative to the predetermined area according to that length variation trend.
Specifically, if the length of the segment of the virtual frame's bottom edge (intersecting the boundary line) that lies outside the predetermined area gradually decreases as a whole, the output module 30 outputs that the target object is about to enter the predetermined area; if that length gradually increases as a whole, the output module 30 outputs that the target object is leaving the predetermined area; and if that length does not change as a whole, the output module 30 outputs that the target object remains in the predetermined area.
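The trend tests in both embodiments can be sketched as a simple classification over the per-frame lengths. Comparing the first and last values is one hedged reading of "gradually decreased as a whole"; a real implementation might instead smooth or fit the sequence before deciding.

```python
def trend_from_inside_lengths(lengths):
    """Classify the object's motion from the per-frame lengths of the
    bottom-edge segment lying INSIDE the predetermined area."""
    if not lengths or all(l == 0 for l in lengths):
        return "not in area"   # virtual frame never crosses the boundary line
    if lengths[-1] < lengths[0]:
        return "leaving"       # inside portion shrinks overall
    if lengths[-1] > lengths[0]:
        return "entering"      # inside portion grows overall
    return "staying"           # unchanged overall
```

The outside-length variant is symmetric: a shrinking outside portion maps to "entering" and a growing one to "leaving".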
Exemplary electronic device
Fig. 6 is a schematic structural diagram of an embodiment of an electronic device of the present application, and as shown in fig. 6, the electronic device may include: one or more processors; a memory; and one or more computer programs.
The electronic device may be a computer, a server, a mobile terminal (mobile phone), a cash register, a smart screen, an unmanned aerial vehicle, an intelligent connected vehicle (ICV), a smart/intelligent car, or a vehicle-mounted device.
Wherein the one or more computer programs are stored in the memory, the one or more computer programs comprising instructions which, when executed by the apparatus, cause the apparatus to perform the steps of:
acquiring an image in a preset area in real time;
determining the relative position between a virtual frame framing the target object and a boundary line of the preset area in each frame of image through an image recognition algorithm, wherein the relative position relation between the virtual frame and the boundary line is determined through the relative position between the bottom edge of the virtual frame and the boundary line;
and determining the pose of the target object in the preset area according to the relative position between the virtual frame and the boundary line in each frame of image.
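Tying the three steps above together, a hedged end-to-end sketch under the same assumptions as before (vertical boundary at x = boundary_x, area on its right; `detect_box` is a hypothetical stand-in for the YOLO-based detector and returns (x1, y1, x2, y2) or None):

```python
def pose_over_frames(frames, detect_box, boundary_x):
    """Acquire frames, locate the virtual frame in each, and classify the
    pose from the bottom edge's relation to the boundary line."""
    inside_lengths = []
    for frame in frames:
        box = detect_box(frame)      # step 2: virtual frame per image
        if box is None:
            continue                 # no target object in this frame
        x1, _y1, x2, _y2 = box
        # length of the bottom edge lying inside the area (x >= boundary_x)
        inside_lengths.append(max(0.0, x2 - max(x1, boundary_x)))
    if not inside_lengths or all(l == 0 for l in inside_lengths):
        return "not in area"
    if inside_lengths[-1] > inside_lengths[0]:
        return "entering"
    if inside_lengths[-1] < inside_lengths[0]:
        return "leaving"
    return "staying"
```

In practice `frames` would come from a camera stream and `detect_box` from a trained detector; here both are left abstract so the control flow of the method stands out.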
Determining the relative position between a virtual frame framing the target object and a boundary line of the predetermined area in each frame of image through an image recognition algorithm, comprising:
determining parameters of the virtual frame through a Yolo algorithm;
and determining the relative position between the virtual frame of the target object and the boundary line of the preset area according to the parameters of the virtual frame and the parameters of the boundary line.
Determining the relative position between the virtual frame of the target object and the boundary line of the predetermined area according to the parameters of the virtual frame and the parameters of the boundary line includes:
determining the length change trend of the line segment of the virtual frame's bottom edge, intersecting the boundary line, that lies within the predetermined area;
wherein the determining the pose of the object in the predetermined area according to the relative position between the virtual frame and the boundary line in each frame of image comprises:
and determining the position change trend of the target object relative to the predetermined area according to the length change trend of the line segment where the bottom edge on the virtual frame intersected with the boundary line is located in the predetermined area.
Specifically, it is determined that the target object is leaving the predetermined area if the length of the segment of the virtual frame's bottom edge (intersecting the boundary line) that lies within the predetermined area gradually decreases as a whole; it is determined that the target object is about to enter the predetermined area if that length gradually increases as a whole; and it is determined that the target object remains within the predetermined area if that length does not change as a whole.
In another embodiment, determining the relative position between the virtual frame of the object and the boundary line of the predetermined area according to the parameters of the virtual frame and the parameters of the boundary line comprises:
determining the length change trend of the line segment of the virtual frame's bottom edge, intersecting the boundary line, that lies outside the predetermined area;
wherein determining the pose of the target object in the predetermined area according to the relative position between the virtual frame and the boundary line in each frame of image includes:
and determining the position change trend of the target object relative to the predetermined area according to the length change trend of the line segment where the bottom edge on the virtual frame intersected with the boundary line is located outside the predetermined area.
Specifically, if the length of the segment of the virtual frame's bottom edge (intersecting the boundary line) that lies outside the predetermined area gradually decreases as a whole, it is determined that the target object is about to enter the predetermined area; if that length gradually increases as a whole, it is determined that the target object is about to leave the predetermined area; and if that length does not change as a whole, it is determined that the target object remains in the predetermined area.
Preferably, if the virtual frame and the boundary line do not intersect, it is determined that the target object does not enter the predetermined area.
The one or more computer programs stored in the memory, the one or more computer programs including instructions which, when executed by the apparatus, cause the apparatus to perform a charging method, the charging method comprising:
the pose detection method of the target object in any one of the predetermined areas; and
determining the fee required for the target object to stay in the predetermined area by taking the time period during which the pose of the target object in the predetermined area remains unchanged as the charging time period and applying the charging rule for that time period.
The electronic device shown in fig. 6 may be a terminal device or a server, or may be a circuit device built in the terminal device or the server. The apparatus may be used to perform the functions/steps of the image recognition method provided by the embodiment of fig. 1 of the present application.
As shown in fig. 6, the electronic device 900 includes a processor 910 and a memory 920. The processor 910 and the memory 920 can communicate with each other through an internal connection path to transfer control and/or data signals; the memory 920 stores a computer program, and the processor 910 calls and runs the computer program from the memory 920.
The memory 920 may be a read-only memory (ROM) or another type of static storage device capable of storing static information and instructions; a random access memory (RAM) or another type of dynamic storage device capable of storing information and instructions; an electrically erasable programmable read-only memory (EEPROM); a compact disc read-only memory (CD-ROM) or other optical disc storage (including compact disc, laser disc, digital versatile disc, Blu-ray disc, etc.); magnetic disk storage media or other magnetic storage devices; or any other medium that can carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer.
The processor 910 and the memory 920 may be combined into a single processing device or, more commonly, remain independent components; the processor 910 executes the program code stored in the memory 920 to realize the above functions. In a particular implementation, the memory 920 may be integrated into the processor 910 or may be separate from the processor 910.
It should be appreciated that the electronic device 900 shown in fig. 6 is capable of implementing the processes of the methods provided by the embodiments shown in fig. 1 of the present application. The operations and/or functions of the respective modules in the electronic device 900 are respectively for implementing the corresponding flows in the above-described method embodiments. Reference may be made specifically to the description of the embodiment of the method illustrated in fig. 1 of the present application, and a detailed description is appropriately omitted herein to avoid redundancy.
In addition, in order to further improve the functions of the electronic apparatus 900, the electronic apparatus 900 may further include one or more of a camera 930, a power supply 940, an input unit 950, and the like.
Optionally, the power supply 940 is used to provide power to the various devices or circuits in the electronic device.
It should be understood that the processor 910 in the electronic device 900 shown in fig. 6 may be a system on chip (SoC); the processor 910 may include a central processing unit (CPU) and may further include other types of processors, such as a graphics processing unit (GPU).
In summary, various parts of the processors or processing units within the processor 910 may cooperate to implement the foregoing method flows, and corresponding software programs for the various parts of the processors or processing units may be stored in the memory 920.
The application also provides an electronic device, the device includes a storage medium and a central processing unit, the storage medium may be a non-volatile storage medium, a computer executable program is stored in the storage medium, and the central processing unit is connected with the non-volatile storage medium and executes the computer executable program to implement the method provided by the embodiment shown in fig. 1 of the application.
In the above embodiments, the processor may include, for example, a CPU, a microcontroller, or a digital signal processor (DSP), and may further include a GPU, an embedded neural-network processing unit (NPU), and an image signal processor (ISP); the processor may further include a necessary hardware accelerator or logic processing hardware circuit, such as an ASIC, or one or more integrated circuits for controlling the execution of the program of the technical solution of the present application. Furthermore, the processor may be capable of running one or more software programs, which may be stored in the storage medium.
Embodiments of the present application further provide a computer-readable storage medium, in which a computer program is stored, and when the computer program runs on a computer, the computer is enabled to execute the method provided by the embodiment shown in fig. 1 of the present application.
Embodiments of the present application also provide a computer program product, which includes a computer program, when the computer program runs on a computer, causing the computer to execute the method provided by the embodiment shown in fig. 1 of the present application.
Those of ordinary skill in the art will appreciate that the various elements and algorithm steps described in connection with the embodiments disclosed herein can be implemented as electronic hardware, computer software, or a combination of the two. Whether such functionality is implemented as hardware or software depends upon the particular application and the design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, any function, if implemented in the form of a software functional unit and sold or used as a separate product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application or portions thereof that substantially contribute to the prior art may be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: a U disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
The above description is only for the specific embodiments of the present application, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present disclosure, and all the changes or substitutions should be covered by the protection scope of the present application. The protection scope of the present application shall be subject to the protection scope of the claims.
It will be appreciated by persons skilled in the art that the embodiments of the invention described above and shown in the drawings are given by way of example only and are not limiting of the invention. The advantages of the present invention have been fully and suitably realized. The functional and structural principles of the present invention have been shown and described in the examples, and any variations or modifications of the embodiments of the present invention may be made without departing from the principles.

Claims (10)

1. A method for detecting the pose of a target object in a predetermined area is characterized by comprising the following steps:
acquiring an image in a preset area in real time;
determining the relative position between a virtual frame framing the target object and a boundary line of the preset area in each frame of image through an image recognition algorithm, wherein the relative position relation between the virtual frame and the boundary line is determined through the relative position between the bottom edge of the virtual frame and the boundary line;
and determining the pose of the target object in the preset area according to the relative position between the virtual frame and the boundary line in each frame of image.
2. The method according to claim 1, wherein determining, by an image recognition algorithm, a relative position between a virtual frame framing the object in each frame of the image and a boundary line of the predetermined area comprises:
determining parameters of the virtual frame through a Yolo algorithm;
and determining the relative position between the virtual frame of the target object and the boundary line of the preset area according to the parameters of the virtual frame and the parameters of the boundary line.
3. The pose detection method of the target object within the predetermined area according to claim 2, wherein determining the relative position between the virtual frame of the target object and the boundary line of the predetermined area based on the parameter of the virtual frame and the parameter of the boundary line comprises:
determining the length change trend of the line segment of the virtual frame's bottom edge, intersecting the boundary line, that lies within the predetermined area;
wherein the determining the pose of the object in the predetermined area according to the relative position between the virtual frame and the boundary line in each frame of image comprises:
and determining the position change trend of the target object relative to the predetermined area according to the length change trend of the line segment where the bottom edge on the virtual frame intersected with the boundary line is located in the predetermined area.
4. The pose detection method of a target object within a predetermined area according to claim 3, wherein it is determined that the target object is leaving the predetermined area if the length of the segment of the virtual frame's bottom edge, intersecting the boundary line, that lies within the predetermined area gradually decreases as a whole; it is determined that the target object is about to enter the predetermined area if that length gradually increases as a whole; and it is determined that the target object remains within the predetermined area if that length does not change as a whole.
5. The pose detection method of the target object within the predetermined area according to claim 2, wherein determining the relative position between the virtual frame of the target object and the boundary line of the predetermined area based on the parameter of the virtual frame and the parameter of the boundary line comprises:
determining the length change trend of the line segment of the virtual frame's bottom edge, intersecting the boundary line, that lies outside the predetermined area;
wherein determining the pose of the target object in the predetermined area according to the relative position between the virtual frame and the boundary line in each frame of image comprises:
and determining the position change trend of the target object relative to the predetermined area according to the length change trend of the line segment where the bottom edge on the virtual frame intersected with the boundary line is located outside the predetermined area.
6. The method according to claim 5, wherein it is determined that the target object is about to enter the predetermined area if the length of the segment of the virtual frame's bottom edge, intersecting the boundary line, that lies outside the predetermined area gradually decreases as a whole; it is determined that the target object is about to leave the predetermined area if that length gradually increases as a whole; and it is determined that the target object remains in the predetermined area if that length does not change as a whole.
7. The method according to claim 1, wherein it is determined that the object does not enter the predetermined area if the virtual frame does not intersect the boundary line and is outside the boundary line.
8. A charging method for charging a predetermined area where a target object stays, comprising:
a pose detection method of a target object in the predetermined area according to any one of claims 1 to 7; and
determining the fee required for the target object to stay in the predetermined area according to the charging rule for the charging time period, wherein the time period during which the pose of the target object within the predetermined area remains unchanged is taken as the charging time period.
9. A pose detection apparatus of a target object in a predetermined area, the apparatus comprising:
an acquisition module for acquiring a plurality of images of the predetermined area;
a processing module, wherein the processing module is configured to be communicatively connected to the acquiring module and configured to determine a relative position between a virtual frame framing the object and a boundary line of the predetermined area in each frame of image through an image recognition algorithm, wherein a relative positional relationship between the virtual frame and the boundary line is determined by a relative position between a bottom side of the virtual frame and the boundary line;
an output module, wherein the output module is communicatively connected to the processing module, wherein the output module is configured to determine the pose of the object within the predetermined area based on the relative position between the virtual frame and the boundary line in each frame of image.
10. An electronic device, comprising:
one or more processors; a memory; and one or more computer programs, wherein the one or more computer programs are stored in the memory, the one or more computer programs comprising instructions which, when executed by the apparatus, cause the apparatus to perform the method of any of claims 1 to 8.
CN202110958929.5A 2021-08-20 2021-08-20 Pose detection device and method of target object in preset area and electronic equipment Active CN113706608B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110958929.5A CN113706608B (en) 2021-08-20 2021-08-20 Pose detection device and method of target object in preset area and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110958929.5A CN113706608B (en) 2021-08-20 2021-08-20 Pose detection device and method of target object in preset area and electronic equipment

Publications (2)

Publication Number Publication Date
CN113706608A true CN113706608A (en) 2021-11-26
CN113706608B CN113706608B (en) 2023-11-28

Family

ID=78653613

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110958929.5A Active CN113706608B (en) 2021-08-20 2021-08-20 Pose detection device and method of target object in preset area and electronic equipment

Country Status (1)

Country Link
CN (1) CN113706608B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116612194A * 2023-07-20 2023-08-18 天津所托瑞安汽车科技有限公司 Position relation determining method, device, equipment and storage medium
CN116612194B (en) * 2023-07-20 2023-10-20 天津所托瑞安汽车科技有限公司 Position relation determining method, device, equipment and storage medium

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2013210942A (en) * 2012-03-30 2013-10-10 Fujitsu Ten Ltd Detection device and detection method
US20150124093A1 (en) * 2013-11-04 2015-05-07 Xerox Corporation Method for object size calibration to aid vehicle detection for video-based on-street parking technology
CN108052260A (en) * 2017-11-29 2018-05-18 努比亚技术有限公司 Mobile terminal operation response method, mobile terminal and readable storage medium storing program for executing
FR3080075A1 (en) * 2018-04-13 2019-10-18 Renault S.A.S. METHOD AND SYSTEM FOR ASSISTING THE DRIVING OF A VEHICLE
CN110688902A (en) * 2019-08-30 2020-01-14 智慧互通科技有限公司 Method and device for detecting vehicle area in parking space
CN111784857A (en) * 2020-06-22 2020-10-16 浙江大华技术股份有限公司 Parking space management method and device and computer storage medium
CN111957040A (en) * 2020-09-07 2020-11-20 网易(杭州)网络有限公司 Method and device for detecting shielding position, processor and electronic device
CN112053397A (en) * 2020-07-14 2020-12-08 北京迈格威科技有限公司 Image processing method, image processing device, electronic equipment and storage medium
CN113239912A (en) * 2021-07-13 2021-08-10 天津所托瑞安汽车科技有限公司 Method, device and storage medium for determining BSD image effective area

Also Published As

Publication number Publication date
CN113706608B (en) 2023-11-28

Similar Documents

Publication Publication Date Title
US20200193721A1 (en) Method for providing parking service using image grouping-based vehicle identification
CN110587597B (en) SLAM closed loop detection method and detection system based on laser radar
CN108877269B (en) Intersection vehicle state detection and V2X broadcasting method
WO2009131210A1 (en) Object recognizing device and object recognizing method
US11738747B2 (en) Server device and vehicle
CN111613088A (en) Parking charging management system and method
CN113997931A (en) Bird's-eye view image generation device, bird's-eye view image generation system, and automatic parking device
JP2001216519A (en) Traffic monitor device
CN109389622B (en) Vehicle tracking method, device, identification equipment and storage medium
US11482007B2 (en) Event-based vehicle pose estimation using monochromatic imaging
CN111145369A (en) Switch scheduling method, vehicle charging method, industrial personal computer and vehicle charging system
CN105144260A (en) Method and device for detecting variable-message signs
CN110991215A (en) Lane line detection method, lane line detection device, storage medium, and electronic apparatus
CN113706608A (en) Pose detection device and method for target object in predetermined area and electronic equipment
CN113688717A (en) Image recognition method and device and electronic equipment
CN113420714A (en) Collected image reporting method and device and electronic equipment
CN117152453A (en) Road disease detection method, device, electronic equipment and storage medium
CN115952531A (en) Image processing method, device, equipment and storage medium
JP6664411B2 (en) Security device, security control method, program, and storage medium
JP2021052237A (en) Deposit detection device and deposit detection method
CN113830081B (en) Automatic parking method and device based on fusion positioning and storage medium
Choi et al. State Machine and Downhill Simplex Approach for Vision‐Based Nighttime Vehicle Detection
US11182627B2 (en) Image processing device and image processing method
CN117133079A (en) Control method, corresponding vehicle, electronic equipment and storage medium
CN114495065A (en) Target object identification method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant