CN113706608B - Pose detection device and method of target object in preset area and electronic equipment - Google Patents

Pose detection device and method of target object in preset area and electronic equipment

Info

Publication number
CN113706608B
CN113706608B (application CN202110958929.5A)
Authority
CN
China
Prior art keywords
predetermined area
boundary line
virtual frame
target object
frame
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110958929.5A
Other languages
Chinese (zh)
Other versions
CN113706608A (en)
Inventor
Ma Zhijun (马志军)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Yunxiang Shanghai Intelligent Technology Co ltd
Original Assignee
Yunxiang Shanghai Intelligent Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Yunxiang Shanghai Intelligent Technology Co ltd filed Critical Yunxiang Shanghai Intelligent Technology Co ltd
Priority to CN202110958929.5A priority Critical patent/CN113706608B/en
Publication of CN113706608A publication Critical patent/CN113706608A/en
Application granted granted Critical
Publication of CN113706608B publication Critical patent/CN113706608B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/70: Determining position or orientation of objects or cameras
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/24: Classification techniques
    • G06F 18/241: Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G: PHYSICS
    • G08: SIGNALLING
    • G08G: TRAFFIC CONTROL SYSTEMS
    • G08G 1/00: Traffic control systems for road vehicles
    • G08G 1/01: Detecting movement of traffic to be counted or controlled
    • G08G 1/04: Detecting movement of traffic to be counted or controlled using optical or ultrasonic detectors
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/04: Architecture, e.g. interconnection topology
    • G06N 3/045: Combinations of networks
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/08: Learning methods
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/10: Image acquisition modality
    • G06T 2207/10016: Video; Image sequence
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/30: Subject of image; Context of image processing
    • G06T 2207/30236: Traffic on road, railway or crossing

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The application discloses a pose detection device and method for a target object in a predetermined area, and an electronic device. The pose detection method comprises the following steps: acquiring an image of the predetermined area in real time; determining, by an image recognition algorithm, the relative position between a virtual frame framing the target object and a boundary line of the predetermined area in each frame of the image, wherein the relative positional relationship between the virtual frame and the boundary line is determined by the relative position between the bottom edge of the virtual frame and the boundary line; and determining the pose of the target object within the predetermined area according to the relative position between the virtual frame and the boundary line in each frame of the image.

Description

Pose detection device and method of target object in preset area and electronic equipment
Technical Field
The present application relates to the field of target detection technologies, and in particular, to a pose detection device, a pose detection method, and an electronic device for a target in a predetermined area.
Background
At present, roadside intelligent parking is an important component of smart cities, and parking detection relies mainly on means such as video and geomagnetic sensing. The judgment of vehicle parking behavior is based either on geomagnetic sensors in the parking spaces or on cameras installed at the roadside. Although geomagnetic detection accuracy is high, geomagnetic detection has a number of drawbacks.
The geomagnetic detection principle is to sense changes in the earth's magnetic field in order to detect whether a metal object is present in a parking space. However, it cannot determine that the object is a motor vehicle; that is, the attributes of the object cannot be determined. Secondly, this method requires drilling a hole in each parking space to install an embedded wireless geomagnetic detector, so the pavement corresponding to every parking space must be damaged. After a wireless geomagnetic detector is embedded in each parking space, when a vehicle parks on the space, the detector senses a deflection of the geomagnetic field, processes the signal, and sends it to a roadside relay receiver, which forwards it to a server backend or other devices, thereby realizing detection of the vehicle.
In addition, roadside vehicle detection and management by geomagnetism has the following drawbacks: the battery must be replaced periodically, and battery replacement causes significant pollution; a relay receiver must be erected, which is costly; the wireless link is easily interfered with; and the detector is easily damaged, or must be removed, when the pavement is maintained.
As for roadside parking detection and management by video in the prior art, current video detection can cover only about 2 parking spaces per camera. Owing to the camera's viewing angle, the greater the overlap between vehicles, the harder it is to judge vehicle behavior, so each camera typically judges 2-3 parking spaces and several cameras are integrated for joint judgment. Moreover, detecting and managing roadside parking by video requires excavating the road to facilitate the subsequent installation of the cameras.
Disclosure of Invention
An advantage of the present application is to provide a pose detection device and method for a target object in a predetermined area, and an electronic device, wherein the poses of a plurality of target objects with respect to the predetermined area can be detected by the pose detection method. Preferably, the target object is a vehicle.
An advantage of the present application is to provide a pose detection device and method for a target object in a predetermined area, and an electronic device, wherein no holes need to be drilled in the predetermined area to install geomagnetic detectors when detecting the pose of the target object with respect to the predetermined area by the pose detection method.
Another advantage of the present application is to provide a pose detection device and method for a target object in a predetermined area, and an electronic device, wherein, when detecting the poses of a plurality of target objects with respect to the predetermined area by the pose detection method, the influence of overlapping target-object positions on the detection result can be avoided.
Another advantage of the present application is to provide a pose detection device and method for a target object in a predetermined area, and an electronic device, wherein, when detecting the poses of a plurality of target objects by the pose detection method, it can be determined whether a target object is about to move out of or into the predetermined area.
Another advantage of the present application is to provide a pose detection device and method for a target object in a predetermined area, and an electronic device, wherein the poses of a plurality of target objects can be determined by the pose detection method of the target object in the predetermined area.
In order to achieve at least one of the above advantages of the present application, the present application provides a method for detecting a pose of a target object in a predetermined area, the method for detecting a pose of a target object comprising:
acquiring an image of the predetermined area in real time;
determining, by an image recognition algorithm, the relative position between a virtual frame framing the target object and a boundary line of the predetermined area in each frame of the image, wherein the relative positional relationship between the virtual frame and the boundary line is determined by the relative position between the bottom edge of the virtual frame and the boundary line;
and determining the pose of the target object within the predetermined area according to the relative position between the virtual frame and the boundary line in each frame of the image.
According to an embodiment of the present application, determining, by an image recognition algorithm, the relative position between a virtual frame framing the object and a boundary line of the predetermined area in each frame of image includes:
determining parameters of the virtual frame through a Yolo algorithm;
and determining the relative position between the virtual frame of the target object and the boundary line of the preset area according to the parameters of the virtual frame and the parameters of the boundary line.
According to an embodiment of the present application, determining a relative position between a virtual frame of the object and the boundary line of the predetermined area according to the parameter of the virtual frame and the parameter of the boundary line includes:
determining the trend of change in the length of the segment of the bottom edge of the virtual frame, intersected by the boundary line, that lies within the predetermined area;
wherein determining the pose of the target object in the predetermined area according to the relative position between the virtual frame and the boundary line in each frame of image comprises:
and judging the trend of positional change of the target object relative to the predetermined area according to the trend of change in the length of the segment of the bottom edge of the virtual frame, intersected by the boundary line, that lies within the predetermined area.
According to an embodiment of the present application, if the length of the segment of the bottom edge of the virtual frame, intersected by the boundary line, that lies within the predetermined area gradually decreases as a whole, it is determined that the target object is moving away from the predetermined area; if that length gradually increases as a whole, it is determined that the target object is entering the predetermined area; and if that length is unchanged as a whole, it is determined that the target object remains in the predetermined area.
According to an embodiment of the present application, determining a relative position between a virtual frame of the object and the boundary line of the predetermined area according to the parameter of the virtual frame and the parameter of the boundary line includes:
determining the trend of change in the length of the segment of the bottom edge of the virtual frame, intersected by the boundary line, that lies outside the predetermined area;
wherein determining the pose of the target object outside the predetermined area according to the relative position between the virtual frame and the boundary line in each frame of image comprises:
and judging the trend of positional change of the target object relative to the predetermined area according to the trend of change in the length of the segment of the bottom edge of the virtual frame, intersected by the boundary line, that lies outside the predetermined area.
According to an embodiment of the present application, if the length of the segment of the bottom edge of the virtual frame, intersected by the boundary line, that lies outside the predetermined area gradually decreases as a whole, it is determined that the target object is entering the predetermined area; if that length gradually increases as a whole, it is determined that the target object is moving away from the predetermined area; and if that length is unchanged as a whole, it is determined that the target object remains in the predetermined area.
According to an embodiment of the present application, if the virtual frame does not intersect the boundary line and lies outside it, it is determined that the target object has not entered the predetermined area.
According to one aspect of the present application, there is provided a charging method for a target object staying in a predetermined area, the charging method comprising:
the above method for detecting the pose of the target object in any predetermined area; and
and determining the fee required for the target object to stay in the predetermined area by taking the period during which the pose of the target object within the predetermined area remains unchanged as the charging period and applying the charging rule for that period.
According to one aspect of the present application, there is provided a pose detection device of an object in a predetermined area, the device comprising:
the acquisition module is used for acquiring a plurality of images of the preset area;
a processing module, wherein the processing module is arranged to be communicatively connected to the acquisition module and is arranged to determine, by means of an image recognition algorithm, the relative position between a virtual frame framing the object in each frame of image and a boundary line of the predetermined area, wherein the relative positional relationship of the virtual frame and the boundary line is determined by the relative position of the bottom edge of the virtual frame and the boundary line;
an output module, wherein the output module is communicatively connected to the processing module, wherein the output module is configured to determine a pose of the object within the predetermined area based on a relative position between the virtual frame and the boundary line in each frame of image.
According to another aspect of the present application, there is provided an electronic apparatus including:
one or more processors; a memory; and one or more computer programs, wherein the one or more computer programs are stored in the memory and comprise instructions which, when executed by the device, cause the device to perform the method described above.
Drawings
Fig. 1 shows a flowchart of a method for detecting the pose of a target object in a predetermined area according to the present application.
Fig. 2 shows a schematic diagram of detecting a vehicle relative to a parking space by the pose detection method of a target object in a predetermined area according to the present application.
Fig. 3A and 3B are schematic views respectively showing two states of detecting a vehicle relative to a parking space by the pose detection method of the object in the predetermined area according to the present application.
Fig. 4A and 4B are schematic diagrams illustrating detection of two states of a vehicle relative to a parking space by the method for detecting the pose of the target object in the predetermined area according to another embodiment of the present application.
Fig. 5 shows a block diagram of the pose detection device of the target object in the predetermined area according to the present application.
Fig. 6 shows a block diagram of the electronic device according to the application.
Detailed Description
The following description is presented to enable one of ordinary skill in the art to make and use the application. The preferred embodiments in the following description are by way of example only and other obvious variations will occur to those skilled in the art. The basic principles of the application defined in the following description may be applied to other embodiments, variations, modifications, equivalents, and other technical solutions without departing from the spirit and scope of the application.
Referring to fig. 1 to 6, a method for detecting the pose of an object in a predetermined area according to a preferred embodiment of the present application will be described in detail below.
The pose detection method of a target object in a predetermined area can detect the poses of a plurality of target objects, particularly vehicles, relative to the predetermined area. That is, by this method it is possible to determine whether a vehicle is about to enter, leave, or stay in the predetermined area, and the influence of overlapping positions of multiple target objects on the detection result can be effectively avoided.
Exemplary method for detecting position of target in predetermined region
The method for detecting the pose of the target object in the preset area comprises the following steps:
s101, acquiring an image in a preset area in real time;
in a preferred embodiment, the image within the predetermined area comprises an image of all objects resting within the predetermined area, and the predetermined area is formed by at least one boundary line 300 and/or a boundary formed by a real scene corresponding to the image.
This applies particularly when the pose detection method is used to detect vehicle parking and the predetermined area is formed by a plurality of parking spaces. In that case, the boundary line 300 corresponds, in the real scene, to the parking-space boundary on the side away from the curb, as shown in fig. 1 and 2.
To enable those skilled in the art to understand the present application, at least one embodiment is described by taking as an example a predetermined area corresponding to a plurality of roadside parking spaces, with the boundary line 300 of the predetermined area being the boundary line on the side away from the curb.
Preferably, the predetermined area further includes a plurality of sub-areas, and the real scene corresponding to each sub-area is used for parking a target object of a predetermined size. Preferably, the real scene corresponding to each sub-area is used to park a vehicle; in other words, each sub-area corresponds to one parking space, as shown in fig. 2.
Further, the image of the predetermined area may be obtained by at least one image acquisition device disposed in the vicinity of the predetermined area. Preferably, it is obtained by a single image acquisition device disposed in the vicinity of the predetermined area.
The pose detection method of the target object in the preset area further comprises the following steps:
s102, determining the relative position between a virtual frame 400 framed in the target object and a boundary line 300 of the preset area in each frame image through an image recognition algorithm, wherein the relative position relation between the virtual frame 400 and the boundary line 300 is determined through the relative position between the bottom edge of the virtual frame 400 and the boundary line 300.
It should be noted that, using an image recognition algorithm, a virtual frame 400 framing the target object may be formed around any target object that at least partially overlaps the predetermined area, and the parameters of the virtual frame 400 may be determined. Preferably, the virtual frame 400 and its parameters are determined by a Yolo algorithm. Those skilled in the art will appreciate that the virtual frame 400 and its parameters may also be obtained by other image algorithms.
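As an illustration of this step, the following minimal Python sketch obtains such virtual frames from a single frame of video. The ultralytics package, the yolov8n.pt weights, and the class names filtered on are assumptions chosen for the example; the application only requires a Yolo-family detector that returns axis-aligned bounding boxes.

```python
# Sketch: obtain the virtual frame (bounding box) of each vehicle in a frame.
# The ultralytics package and "yolov8n.pt" weights are illustrative assumptions.
from ultralytics import YOLO

model = YOLO("yolov8n.pt")

def detect_virtual_frames(frame):
    """Return one (x1, y1, x2, y2) box per detected vehicle in the image."""
    result = model(frame)[0]
    boxes = []
    for box, cls in zip(result.boxes.xyxy, result.boxes.cls):
        if result.names[int(cls)] in ("car", "truck", "bus"):
            x1, y1, x2, y2 = box.tolist()
            boxes.append((x1, y1, x2, y2))
    return boxes
```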
After the parameters of the virtual frame 400 are determined, the pose of the target object can be determined according to the relative position between the virtual frame 400 and the boundary line 300.
In one embodiment, determining the relative position between the virtual frame 400 of the object and the boundary line 300 of the predetermined area according to the parameter of the virtual frame 400 and the parameter of the boundary line 300 includes:
s1021, according to the length variation trend of the line segment where the bottom edge on the virtual frame 400 intersects with the boundary line 300 is located in the predetermined area.
Specifically, if the length of the segment of the bottom edge of the virtual frame 400, intersected by the boundary line 300, that lies within the predetermined area gradually decreases as a whole, it is determined that the target object is moving away from the predetermined area; if that length gradually increases as a whole, it is determined that the target object is entering the predetermined area; and if that length is unchanged as a whole, it is determined that the target object remains in the predetermined area.
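The decision rule above can be written compactly. The following sketch classifies the trend from the per-frame lengths of the bottom-edge segment lying within the predetermined area; the jitter tolerance eps is an assumption not specified in the application.

```python
# Sketch of the decision rule: classify the trend of the inside-area lengths.
def classify_trend_inside(lengths, eps=2.0):
    """lengths: inside-area length of the bottom-edge segment, one per frame."""
    if not lengths or max(lengths) == 0:
        return "target object has not entered the predetermined area"
    delta = lengths[-1] - lengths[0]
    if delta < -eps:
        return "target object is moving away from the predetermined area"
    if delta > eps:
        return "target object is entering the predetermined area"
    return "target object remains in the predetermined area"
```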
Referring to fig. 3A and 3B, vehicles serving as the target objects are parked in the predetermined area, a plurality of vehicles being parked in a predetermined manner in the sub-areas of the predetermined area.
It will be appreciated that, when the image of the predetermined area is obtained by a single camera, the virtual frame 400 framing each vehicle is not affected by the overlapping portions even though the vehicles' positions overlap. When one of the vehicles drives into a sub-area, the bottom edge of the virtual frame 400 framing that vehicle intersects the boundary line 300.
In addition, by analyzing the trend of change in the length of the segment of the bottom edge of the virtual frame 400, intersected by the boundary line 300, that lies within the predetermined area, the pose of the corresponding vehicle relative to the sub-area can be determined.
For example, when that length gradually decreases as a whole until it stabilizes, the vehicle is driving away from the predetermined area. In other words, the vehicle is leaving the parking space.
Conversely, when that length gradually increases as a whole until it stabilizes at a non-zero value, the vehicle is driving into the predetermined area. In other words, the vehicle is pulling into a parking space.
And when the length of the segment of the bottom edge of the virtual frame 400, intersected by the boundary line 300, that lies within the predetermined area is 0, the vehicle has not entered the predetermined area, i.e., it is not parked in the parking space corresponding to the sub-area.
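One way to measure this length is sketched below. The boundary line 300 is modelled as an infinite line through two image points p and q with the predetermined area on its left side; this representation, and the assumption that the bottom edge is the horizontal lower side of an axis-aligned box, are illustrative choices rather than requirements of the application.

```python
# Sketch: length of the portion of a box's bottom edge that lies inside the
# predetermined area, with the boundary modelled as the directed line p -> q
# and the area assumed to lie on its left side.
def side(p, q, pt):
    """Cross product: > 0 if pt lies to the left of the line p -> q."""
    return (q[0] - p[0]) * (pt[1] - p[1]) - (q[1] - p[1]) * (pt[0] - p[0])

def bottom_edge_length_inside(box, p, q):
    """box = (x1, y1, x2, y2); returns the inside-area length of its bottom edge."""
    x1, y1, x2, y2 = box
    a, b = (x1, y2), (x2, y2)            # endpoints of the bottom edge
    sa, sb = side(p, q, a), side(p, q, b)
    if sa <= 0 and sb <= 0:
        return 0.0                        # bottom edge entirely outside the area
    if sa >= 0 and sb >= 0:
        return x2 - x1                    # bottom edge entirely inside the area
    t = sa / (sa - sb)                    # parameter of the crossing point on a -> b
    return (x2 - x1) * (t if sa > 0 else 1.0 - t)
```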
Referring to fig. 4A and 4B, in another embodiment, determining a relative position between the virtual frame 400 of the object and the boundary line 300 of the predetermined area according to the parameter of the virtual frame 400 and the parameter of the boundary line 300 includes:
s1022, according to the length variation trend of the line segment where the bottom edge of the virtual frame 400 intersects with the boundary line 300, the line segment is located outside the predetermined area.
The pose detection method of the target object in the preset area further comprises the following steps:
s103, determining the pose of the target object in the preset area according to the relative position between the virtual frame 400 and the boundary line 300 in each frame of image.
Specifically, if the length of the segment of the bottom edge of the virtual frame 400, intersected by the boundary line 300, that lies outside the predetermined area gradually decreases as a whole, it is determined that the target object is entering the predetermined area; if that length gradually increases as a whole, it is determined that the target object is moving away from the predetermined area; and if that length is unchanged as a whole, it is determined that the target object remains in the predetermined area.
Referring to fig. 4A and 4B, vehicles serving as the target objects are parked in the predetermined area, a plurality of vehicles being parked in a predetermined manner in the sub-areas of the predetermined area.
It will be appreciated that, when the image of the predetermined area is obtained by a single camera, the virtual frame 400 framing each vehicle is not affected by the overlapping portions even though the vehicles' positions overlap. When one of the vehicles drives into a sub-area, the bottom edge of the virtual frame 400 framing that vehicle intersects the boundary line 300.
In addition, by analyzing the trend of change in the length of the segment of the bottom edge of the virtual frame 400, intersected by the boundary line 300, that lies outside the predetermined area, the pose of the corresponding vehicle relative to the sub-area can be determined.
For example, when that length gradually decreases as a whole until it stabilizes, the vehicle is driving into the predetermined area. In other words, the vehicle is pulling into a parking space.
Conversely, when that length gradually increases as a whole until it stabilizes at a non-zero value, the vehicle is driving away from the predetermined area. In other words, the vehicle is leaving the parking space.
And when the length of the segment of the bottom edge of the virtual frame 400, intersected by the boundary line 300, that lies outside the predetermined area is 0, the vehicle has completely entered the predetermined area, i.e., it is parked in the parking space corresponding to the sub-area.
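Because the bottom edge has a fixed total length in any given frame, the outside-area length is simply the complement of the inside-area length computed in the earlier sketch, as the following illustrative helper shows.

```python
# Sketch: the outside-area length is the complement of the inside-area length.
def bottom_edge_length_outside(box, p, q):
    x1, _, x2, _ = box
    return (x2 - x1) - bottom_edge_length_inside(box, p, q)
```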
It can be understood that, since the size of the virtual frame 400 shrinks as the apparent size of the vehicle in the image shrinks, and the virtual frame 400 always frames the vehicle, analyzing the features of the bottom edge of the virtual frame 400 makes it possible to determine the positional relationship of the vehicle relative to a sub-area of the predetermined area, i.e., a parking space, while avoiding any influence of overlapping target-object positions on the detection result.
According to another aspect of the present application, there is provided a charging method for calculating the fee for a vehicle staying in a parking space, i.e., a sub-area of the predetermined area.
Specifically, the charging method includes: the pose detection method of the target object in the preset area; and
and determining the fee required for the target object to stay in the predetermined area by taking the period during which the pose of the target object remains unchanged within the predetermined area as the charging period and applying the charging rule for that period.
The charging rules may be user-defined. For example, a rule may specify that, while the pose remains unchanged within the predetermined area, no fee is charged for the first half hour, after which 30 yuan is charged for each additional hour. The charging rule may be customized by the user, and this embodiment is not limited in this respect.
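The example rule just quoted can be computed as follows; rounding partial hours up, and the specific amounts, are assumptions, since the application leaves the rule to user customization.

```python
import math

# Sketch of the example rule: first half hour free, then 30 yuan per
# additional hour (partial hours rounded up -- an assumption).
def parking_fee(stay_minutes, free_minutes=30, rate_per_hour=30):
    billable = max(0, stay_minutes - free_minutes)
    return math.ceil(billable / 60) * rate_per_hour if billable else 0
```

For instance, parking_fee(25) returns 0, while parking_fee(90) returns 30.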
Exemplary device for detecting the position of an object within a predetermined area
As shown in fig. 5, an embodiment of the present application provides a pose detection apparatus 100 for a target object in a predetermined area, where the apparatus 100 includes:
an acquisition module 10 for acquiring a plurality of images of the predetermined area;
a processing module 20, wherein the processing module 20 is arranged to be communicatively connected to the acquisition module 10 and is arranged to determine, by means of an image recognition algorithm, a relative position between a virtual frame framing the object in each frame of image and a boundary line of the predetermined area, wherein the relative positional relationship of the virtual frame and the boundary line is determined by means of a relative position of a bottom edge of the virtual frame and the boundary line;
an output module 30, wherein the output module 30 is communicatively connected to the processing module 20, wherein the output module 30 is arranged to determine the pose of the object within the predetermined area based on the relative position between the virtual frame and the boundary line in each frame of image.
Preferably, the processing module 20 is further configured to determine a parameter of the virtual frame by means of a Yolo algorithm and determine a relative position between the virtual frame of the object and the boundary line of the predetermined area based on the parameter of the virtual frame and the parameter of the boundary line.
Preferably, the processing module 20 is further configured to determine the trend of change in the length of the segment of the virtual frame's bottom edge, intersected by the boundary line, that lies within the predetermined area. Accordingly, the output module 30 is configured to output the trend of positional change of the target object relative to the predetermined area according to that trend.
Specifically, if the length of the segment of the virtual frame's bottom edge, intersected by the boundary line, that lies within the predetermined area gradually decreases as a whole, the output module 30 outputs that the target object is moving away from the predetermined area; if that length gradually increases as a whole, the output module 30 outputs that the target object is entering the predetermined area; and if that length is unchanged as a whole, the output module 30 outputs that the target object remains in the predetermined area.
In another embodiment, the processing module 20 is configured to determine the trend of change in the length of the segment of the virtual frame's bottom edge, intersected by the boundary line, that lies outside the predetermined area; and the output module 30 is configured to determine the trend of positional change of the target object relative to the predetermined area according to that trend.
Specifically, if the length of the segment of the virtual frame's bottom edge, intersected by the boundary line, that lies outside the predetermined area gradually decreases as a whole, the output module 30 outputs that the target object is entering the predetermined area; if that length gradually increases as a whole, the output module 30 outputs that the target object is moving away from the predetermined area; and if that length is unchanged as a whole, the output module 30 outputs that the target object remains in the predetermined area.
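The module arrangement above can be wired together as in the following sketch, which reuses the earlier illustrative helpers. The class and method names are assumptions; the application fixes only the roles of the three modules and their communication order (acquisition, then processing, then output). For clarity the sketch tracks a single target; a real device would associate virtual frames with sub-areas across frames.

```python
# Sketch of the three-module device: acquisition -> processing -> output.
class PoseDetectionDevice:
    def __init__(self, acquire_frames, detect_boxes, boundary):
        self.acquire_frames = acquire_frames   # acquisition module: yields frames
        self.detect_boxes = detect_boxes       # processing module, e.g. a Yolo detector
        self.boundary = boundary               # boundary line 300 as two points (p, q)

    def run(self):
        lengths = []                           # per-frame inside-area lengths
        for frame in self.acquire_frames():
            boxes = self.detect_boxes(frame)
            if boxes:
                lengths.append(bottom_edge_length_inside(boxes[0], *self.boundary))
            yield classify_trend_inside(lengths)   # the output module's verdict
```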
Exemplary electronic device
Fig. 6 is a schematic structural diagram of an embodiment of an electronic device according to the present application, as shown in fig. 6, where the electronic device may include: one or more processors; a memory; and one or more computer programs.
The electronic device may be a computer, a server, a mobile terminal (mobile phone), a cashier device, a smart screen, an unmanned aerial vehicle, an intelligent connected vehicle (ICV), a smart/intelligent car, or a vehicle-mounted device.
Wherein the one or more computer programs are stored in the memory, the one or more computer programs comprising instructions that, when executed by the device, cause the device to perform the steps of:
acquiring an image in a preset area in real time;
determining, by an image recognition algorithm, the relative position between a virtual frame framing the target object and a boundary line of the predetermined area in each frame of image, wherein the relative positional relationship between the virtual frame and the boundary line is determined by the relative position between the bottom edge of the virtual frame and the boundary line;
and determining the pose of the target object in the preset area according to the relative position between the virtual frame and the boundary line in each frame of image.
Determining, by an image recognition algorithm, the relative position between a virtual frame framing the object and a boundary line of the predetermined area in each frame of image includes:
determining parameters of the virtual frame through a Yolo algorithm;
and determining the relative position between the virtual frame of the target object and the boundary line of the preset area according to the parameters of the virtual frame and the parameters of the boundary line.
Determining a relative position between the virtual frame of the object and the boundary line of the predetermined area according to the parameters of the virtual frame and the parameters of the boundary line, including:
determining the trend of change in the length of the segment of the bottom edge of the virtual frame, intersected by the boundary line, that lies within the predetermined area;
wherein determining the pose of the target object in the predetermined area according to the relative position between the virtual frame and the boundary line in each frame of image comprises:
and judging the trend of positional change of the target object relative to the predetermined area according to the trend of change in the length of the segment of the bottom edge of the virtual frame, intersected by the boundary line, that lies within the predetermined area.
Specifically, if the length of the segment of the bottom edge of the virtual frame, intersected by the boundary line, that lies within the predetermined area gradually decreases as a whole, it is determined that the target object is moving away from the predetermined area; if that length gradually increases as a whole, it is determined that the target object is entering the predetermined area; and if that length is unchanged as a whole, it is determined that the target object remains in the predetermined area.
In another embodiment, determining the relative position between the virtual frame of the object and the boundary line of the predetermined area according to the parameter of the virtual frame and the parameter of the boundary line includes:
determining the trend of change in the length of the segment of the bottom edge of the virtual frame, intersected by the boundary line, that lies outside the predetermined area;
wherein determining the pose of the target object outside the predetermined area according to the relative position between the virtual frame and the boundary line in each frame of image comprises:
and judging the trend of positional change of the target object relative to the predetermined area according to the trend of change in the length of the segment of the bottom edge of the virtual frame, intersected by the boundary line, that lies outside the predetermined area.
Specifically, if the length of the segment of the bottom edge of the virtual frame, intersected by the boundary line, that lies outside the predetermined area gradually decreases as a whole, it is determined that the target object is entering the predetermined area; if that length gradually increases as a whole, it is determined that the target object is moving away from the predetermined area; and if that length is unchanged as a whole, it is determined that the target object remains in the predetermined area.
Preferably, if the virtual frame and the boundary line do not intersect, it is determined that the target object does not enter the predetermined area.
The one or more computer programs are stored in the memory, the one or more computer programs comprising instructions that, when executed by the device, cause the device to perform a billing method comprising:
the above method for detecting the pose of the target object in any predetermined area; and
and determining the fee required for the target object to stay in the predetermined area by taking the period during which its pose remains unchanged within the predetermined area as the charging period and applying the charging rule for that period.
The electronic device shown in fig. 6 may be a terminal device or a server, or may be a circuit device built into a terminal device or server. The apparatus may be used to perform the functions/steps of the pose detection method provided by the embodiment of the application shown in fig. 1.
As shown in fig. 6, the electronic device 900 includes a processor 910 and a memory 920. Wherein the processor 910 and the memory 920 may communicate with each other via an internal connection, and transfer control and/or data signals, the memory 920 is configured to store a computer program, and the processor 910 is configured to call and run the computer program from the memory 920.
The memory 920 may be a read-only memory (ROM) or other type of static storage device capable of storing static information and instructions, a random access memory (RAM) or other type of dynamic storage device capable of storing information and instructions, an electrically erasable programmable read-only memory (EEPROM), a compact disc read-only memory (CD-ROM) or other optical disc storage (including compact discs, laser discs, optical discs, digital versatile discs, Blu-ray discs, etc.), magnetic disk storage media or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer.
The processor 910 and the memory 920 may be combined into a single processing device, though more commonly they are separate components, and the processor 910 is configured to execute the program code stored in the memory 920 to implement the functions described above. In particular, the memory 920 may also be integrated into the processor 910 or may be separate from it.
It should be appreciated that the electronic device 900 shown in fig. 6 is capable of implementing various processes of the method provided by the embodiment of fig. 1 of the present application. The operations and/or functions of the respective modules in the electronic device 900 are respectively for implementing the corresponding flows in the above-described method embodiments. Reference is made in particular to the description of the embodiment of the method according to the application shown in fig. 1, and a detailed description is omitted here as appropriate for avoiding repetition.
In addition, in order to make the function of the electronic device 900 more complete, the electronic device 900 may further include one or more of a camera 930, a power supply 940, an input unit 950, and the like.
Optionally, the power supply 940 is used to supply power to the various devices or circuits in the electronic device.
It should be understood that the processor 910 in the electronic device 900 shown in fig. 6 may be a system on a chip SOC, where the processor 910 may include a central processing unit (Central Processing Unit; hereinafter referred to as "CPU") and may further include other types of processors, such as: an image processor (Graphics Processing Unit; hereinafter referred to as GPU) and the like.
In general, portions of the processors or processing units within the processor 910 may cooperate to implement the preceding method flows, and corresponding software programs for the portions of the processors or processing units may be stored in the memory 920.
The present application also provides an electronic device, where the device includes a storage medium, which may be a nonvolatile storage medium, in which a computer executable program is stored, and a central processor connected to the nonvolatile storage medium and executing the computer executable program to implement the method provided by the embodiment shown in fig. 1 of the present application.
In the above embodiments, the processor may include, for example, a CPU, a microcontroller, or a digital signal processor (DSP), and may further include a GPU, an embedded neural-network processing unit (NPU), and an image signal processor (ISP); the processor may further include a necessary hardware accelerator or logic-processing hardware circuit, such as an ASIC, or one or more integrated circuits for controlling the execution of the programs of the present application. Further, the processor may be capable of operating one or more software programs, which may be stored in a storage medium.
Embodiments of the present application also provide a computer readable storage medium having a computer program stored therein, which when run on a computer causes the computer to perform the method provided by the embodiment of the present application shown in fig. 1.
Embodiments of the present application also provide a computer program product comprising a computer program which, when run on a computer, causes the computer to perform the method provided by the embodiment of the present application shown in fig. 1.
Those of ordinary skill in the art will appreciate that the various elements and algorithm steps described in the embodiments disclosed herein can be implemented as a combination of electronic hardware, computer software, and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
It will be clear to those skilled in the art that, for convenience and brevity of description, specific working procedures of the above-described systems, apparatuses and units may refer to corresponding procedures in the foregoing method embodiments, and are not repeated herein.
In several embodiments provided by the present application, any of the functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present application, in essence or in the part contributing to the prior art, may be embodied in the form of a software product stored in a storage medium, comprising several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the methods according to the embodiments of the present application. The aforementioned storage medium includes: a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), and various other media capable of storing program code, such as a magnetic disk or an optical disc.
The foregoing is merely exemplary embodiments of the present application, and any person skilled in the art may easily conceive of changes or substitutions within the technical scope of the present application, which should be covered by the present application. The protection scope of the present application shall be subject to the protection scope of the claims.
It will be appreciated by persons skilled in the art that the embodiments of the application described above and shown in the drawings are by way of example only and are not limiting, and that the advantages of the present application are fully and properly realized therein. The functional and structural principles of the present application have been shown and described in the examples, and embodiments of the application may be modified or practiced without departing from those principles.

Claims (7)

1. A pose detection method of a target object in a predetermined area, characterized by comprising the following steps: acquiring an image of the predetermined area in real time;
determining, by an image recognition algorithm, the relative position between a virtual frame framing the target object and a boundary line of the predetermined area in each frame of image, wherein the relative positional relationship between the virtual frame and the boundary line is determined by the relative position between the bottom edge of the virtual frame and the boundary line;
determining the pose of the target object within the predetermined area according to the relative position between the virtual frame and the boundary line in each frame of image;
determining, by an image recognition algorithm, the relative position between a virtual frame framing the object and a boundary line of the predetermined area in each frame of image includes: determining parameters of the virtual frame through a Yolo algorithm;
determining the relative position between the virtual frame of the target object and the boundary line of the predetermined area according to the parameters of the virtual frame and the parameters of the boundary line; wherein determining the relative position between the virtual frame of the target object and the boundary line of the predetermined area according to the parameters of the virtual frame and the parameters of the boundary line comprises: determining the trend of change in the length of the segment of the bottom edge of the virtual frame, intersected by the boundary line, that lies within the predetermined area or outside the predetermined area;
wherein determining the pose of the target object within the predetermined area according to the relative position between the virtual frame and the boundary line in each frame of image comprises: judging the trend of positional change of the target object relative to the predetermined area according to the trend of change in the length of the segment of the bottom edge of the virtual frame, intersected by the boundary line, that lies within the predetermined area or outside the predetermined area.
2. The method according to claim 1, wherein if a length of a line segment of a bottom edge of the virtual frame intersecting the boundary line in the predetermined area is gradually decreased as a whole, it is determined that the target is away from the predetermined area, if a length of a line segment of a bottom edge of the virtual frame intersecting the boundary line in the predetermined area is gradually increased as a whole, it is determined that the target is to enter the predetermined area, and if a length of a line segment of a bottom edge of the virtual frame intersecting the boundary line in the predetermined area is not changed as a whole, it is determined that the target is always held in the predetermined area.
3. The pose detection method of a target object in a predetermined area according to claim 1, wherein if the length of the segment of the bottom edge of the virtual frame, intersected by the boundary line, that lies outside the predetermined area gradually decreases as a whole, it is determined that the target object is to enter the predetermined area; if that length gradually increases as a whole, it is determined that the target object is to move away from the predetermined area; and if that length is unchanged as a whole, it is determined that the target object is always held in the predetermined area.
4. The method according to claim 1, wherein if the virtual frame and the boundary line do not intersect and are outside the boundary line, it is determined that the target does not enter the predetermined area.
5. A charging method for a target object staying in a predetermined area, characterized by comprising: the pose detection method of a target object in a predetermined area according to any one of claims 1 to 4; and
determining the fee required for the target object to stay in the predetermined area by taking the period during which the pose of the target object within the predetermined area remains unchanged as the charging period and applying the charging rule for that period.
6. A pose detection device of an object in a predetermined area for performing the pose detection method of an object in a predetermined area according to any of claims 1 to 4, characterized in that the device comprises:
the acquisition module is used for acquiring a plurality of images of the preset area;
a processing module, wherein the processing module is arranged to be communicatively connected to the acquisition module and is arranged to determine, by means of an image recognition algorithm, the relative position between a virtual frame framing the object in each frame of image and a boundary line of the predetermined area, wherein the relative positional relationship of the virtual frame and the boundary line is determined by the relative position of the bottom edge of the virtual frame and the boundary line;
an output module, wherein the output module is communicatively connected to the processing module, wherein the output module is configured to determine a pose of the object within the predetermined area based on a relative position between the virtual frame and the boundary line in each frame of image.
7. An electronic device, comprising:
one or more processors; a memory; and one or more computer programs, wherein the one or more computer programs are stored in the memory, the one or more computer programs comprising instructions, which when executed by the device, cause the device to perform the method of any of claims 1-4.
CN202110958929.5A 2021-08-20 2021-08-20 Pose detection device and method of target object in preset area and electronic equipment Active CN113706608B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110958929.5A CN113706608B (en) 2021-08-20 2021-08-20 Pose detection device and method of target object in preset area and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110958929.5A CN113706608B (en) 2021-08-20 2021-08-20 Pose detection device and method of target object in preset area and electronic equipment

Publications (2)

Publication Number Publication Date
CN113706608A CN113706608A (en) 2021-11-26
CN113706608B true CN113706608B (en) 2023-11-28

Family

ID=78653613

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110958929.5A Active CN113706608B (en) 2021-08-20 2021-08-20 Pose detection device and method of target object in preset area and electronic equipment

Country Status (1)

Country Link
CN (1) CN113706608B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116612194B (en) * 2023-07-20 2023-10-20 天津所托瑞安汽车科技有限公司 Position relation determining method, device, equipment and storage medium

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2013210942A (en) * 2012-03-30 2013-10-10 Fujitsu Ten Ltd Detection device and detection method
CN108052260A (en) * 2017-11-29 2018-05-18 努比亚技术有限公司 Mobile terminal operation response method, mobile terminal and readable storage medium storing program for executing
FR3080075A1 (en) * 2018-04-13 2019-10-18 Renault S.A.S. METHOD AND SYSTEM FOR ASSISTING THE DRIVING OF A VEHICLE
CN110688902A (en) * 2019-08-30 2020-01-14 智慧互通科技有限公司 Method and device for detecting vehicle area in parking space
CN111784857A (en) * 2020-06-22 2020-10-16 浙江大华技术股份有限公司 Parking space management method and device and computer storage medium
CN111957040A (en) * 2020-09-07 2020-11-20 网易(杭州)网络有限公司 Method and device for detecting shielding position, processor and electronic device
CN112053397A (en) * 2020-07-14 2020-12-08 北京迈格威科技有限公司 Image processing method, image processing device, electronic equipment and storage medium
CN113239912A (en) * 2021-07-13 2021-08-10 天津所托瑞安汽车科技有限公司 Method, device and storage medium for determining BSD image effective area

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9449236B2 (en) * 2013-11-04 2016-09-20 Xerox Corporation Method for object size calibration to aid vehicle detection for video-based on-street parking technology

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2013210942A (en) * 2012-03-30 2013-10-10 Fujitsu Ten Ltd Detection device and detection method
CN108052260A (en) * 2017-11-29 2018-05-18 努比亚技术有限公司 Mobile terminal operation response method, mobile terminal and readable storage medium storing program for executing
FR3080075A1 (en) * 2018-04-13 2019-10-18 Renault S.A.S. METHOD AND SYSTEM FOR ASSISTING THE DRIVING OF A VEHICLE
CN110688902A (en) * 2019-08-30 2020-01-14 智慧互通科技有限公司 Method and device for detecting vehicle area in parking space
CN111784857A (en) * 2020-06-22 2020-10-16 浙江大华技术股份有限公司 Parking space management method and device and computer storage medium
CN112053397A (en) * 2020-07-14 2020-12-08 北京迈格威科技有限公司 Image processing method, image processing device, electronic equipment and storage medium
CN111957040A (en) * 2020-09-07 2020-11-20 网易(杭州)网络有限公司 Method and device for detecting shielding position, processor and electronic device
CN113239912A (en) * 2021-07-13 2021-08-10 天津所托瑞安汽车科技有限公司 Method, device and storage medium for determining BSD image effective area

Also Published As

Publication number Publication date
CN113706608A (en) 2021-11-26

Similar Documents

Publication Publication Date Title
CN110430401B (en) Vehicle blind area early warning method, early warning device, MEC platform and storage medium
CN111339994B (en) Method and device for judging temporary illegal parking
CN111145369A (en) Switch scheduling method, vehicle charging method, industrial personal computer and vehicle charging system
CN113706608B (en) Pose detection device and method of target object in preset area and electronic equipment
US11482007B2 (en) Event-based vehicle pose estimation using monochromatic imaging
CN110991215A (en) Lane line detection method, lane line detection device, storage medium, and electronic apparatus
CN114332821A (en) Decision information acquisition method, device, terminal and storage medium
CN111582239A (en) Violation monitoring method and device
CN116853292A (en) Collision detection method and device for unmanned vehicle
CN113688717A (en) Image recognition method and device and electronic equipment
CN113103957B (en) Blind area monitoring method and device, electronic equipment and storage medium
CN113420714A (en) Collected image reporting method and device and electronic equipment
CN111724607B (en) Steering lamp use detection method and device, computer equipment and storage medium
CN107463886B (en) Double-flash identification and vehicle obstacle avoidance method and system
JP2020095623A (en) Image processing device and image processing method
KR101731789B1 (en) ADAS controlling method using road recognition and control system
JP2020091893A (en) Security device
JP2020095631A (en) Image processing device and image processing method
CN115909235A (en) Method and device for identifying road gap, computer equipment and storage medium
US11182627B2 (en) Image processing device and image processing method
CN117133079A (en) Control method, corresponding vehicle, electronic equipment and storage medium
JP7359541B2 (en) Image processing device and image processing method
CN112257485A (en) Object detection method and device, storage medium and electronic equipment
CN112706159A (en) Robot control method and device and robot
CN115376216B (en) Misuse-preventing vehicle ETC passing method, device and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant