CN111712827A - Method, device and system for adjusting observation field, storage medium and mobile device - Google Patents

Method, device and system for adjusting observation field, storage medium and mobile device

Info

Publication number
CN111712827A
CN111712827A
Authority
CN
China
Prior art keywords
view
visual field
field
range
current
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201980012196.7A
Other languages
Chinese (zh)
Inventor
封旭阳
张李亮
夏志强
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
SZ DJI Technology Co Ltd
Original Assignee
SZ DJI Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by SZ DJI Technology Co Ltd
Publication of CN111712827A

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/18Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N7/183Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a single remote source
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/25Determination of region of interest [ROI] or a volume of interest [VOI]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/56Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G06V20/584Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads of vehicle lights or traffic lights
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/56Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/588Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/61Control of cameras or camera modules based on recognised objects
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/69Control of means for changing angle of the field of view, e.g. optical zoom objectives or electronic zooming

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Signal Processing (AREA)
  • Studio Devices (AREA)
  • Image Analysis (AREA)
  • Traffic Control Systems (AREA)

Abstract

Embodiments of the invention provide a method, a device, and a system for adjusting an observation field of view, a storage medium, and a mobile device. The method is used in an automatic driving system of the mobile device and includes: acquiring motion information and an initial observation image of the mobile device (102); cropping the initial observation image with the current view range to obtain a current view image (104); determining a target view range according to the motion information of the mobile device and the current view image (106); and adjusting the observation field of view to the target view range (108). The technical solution provided by the embodiments of the invention can, to a certain extent, solve the safety and limited-applicability problems in target observation.

Description

Method, device and system for adjusting observation field, storage medium and mobile device
Technical Field
The invention relates to the technical field of intelligent transportation, and in particular to a method, a device, and a system for adjusting an observation field of view, a storage medium, and a mobile device.
Background
In automatic or unmanned driving, whether target detection can accurately find obstacles in the vehicle's direction of travel directly affects driving safety, and has therefore become a technical problem of great concern in the field.
In practice, traditional target detection based on visual images struggles to handle, efficiently and in time, detection tasks that involve both near and distant targets. For example, to detect a distant object, the camera of the vehicle imaging system typically keeps the same image resolution and directly narrows the lens field of view, so that an image of the distant object can be acquired in which the object still occupies enough pixels for further detection and analysis. However, this strategy sacrifices part of the field of view: targets in the sacrificed portion cannot be captured by the vehicle imaging system, and such an uncaptured target may well be an obstacle near the vehicle. Adjusting the field of view in this way therefore introduces, to a certain extent, a safety hazard during driving.
Conversely, if a wide field of view is maintained while distant targets must still be recognized, the computation load and algorithmic complexity of the automatic driving system increase, imposing higher requirements on its hardware and software; such a strategy is difficult to apply widely in actual target-detection products.
Disclosure of Invention
Embodiments of the invention provide a method, a device, and a system for adjusting an observation field of view, a storage medium, and a mobile device, aiming to solve the safety and limited-applicability problems in target observation.
In a first aspect, an embodiment of the present invention provides an adjustment method for an observation field of view, which is used in an automatic driving system of a mobile device, and includes:
acquiring motion information and an initial observation image of the mobile device;
cropping the initial observation image with the current view range to obtain a current view image;
determining a target view range according to the motion information of the mobile device and the current view image;
and adjusting the observation field of view to the target view range.
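The four steps above can be sketched as one iteration of a control loop. This is a minimal illustration only; the function names and argument shapes are assumptions, not part of the patent:

```python
def adjust_observation_field(motion_info, initial_image, current_fov,
                             crop_to_fov, determine_target_fov, apply_fov):
    """One iteration of the adjustment method: crop, decide, apply.

    The three callables stand in for the concrete crop, decision, and
    actuation routines, which the patent leaves implementation-defined.
    """
    # Crop the initial (full-view) image to the current view range
    current_view_image = crop_to_fov(initial_image, current_fov)
    # Determine the target view range from motion info + current view image
    target_fov = determine_target_fov(motion_info, current_view_image)
    # Adjust the observation field of view to the target view range
    apply_fov(target_fov)
    return target_fov
```

In a running system this loop would be driven once per frame or per control tick.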
In a second aspect, an embodiment of the present invention provides an adjusting device for an observation field of view, which is used in an automatic driving system of a mobile device, and includes:
a memory;
a processor; and
a computer program;
wherein the computer program is stored in the memory and configured to be executed by the processor to implement the method of the first aspect.
In a third aspect, an embodiment of the present invention provides an observation field adjustment system, including:
the image acquisition device is used for acquiring an initial observation image;
an adjustment device for the observation field of view, configured to perform the method according to the first aspect.
In a fourth aspect, an embodiment of the present invention provides a computer-readable storage medium, on which a computer program is stored,
the computer program is executed by a processor to implement the method as described in the first aspect.
In a fifth aspect, an embodiment of the present invention provides an automatic driving system, including:
the image acquisition device is used for acquiring an initial observation image;
an adjustment device for the observation field of view, configured to perform the method according to the first aspect;
and an automatic driving control device, configured to control the mobile device to drive automatically according to the target view range.
In a sixth aspect, an embodiment of the present invention provides a mobile device, including:
a mobile device main body;
the automatic driving system according to the fifth aspect.
In the technical solution provided by the embodiments of the invention, the initial observation image contains all objects currently observable, and a view image is obtained by cropping the initial observation image with a view range. By combining the current view image with the motion information of the mobile device, the view range is determined and adjusted in time according to the actual motion of the mobile device and the current observation result, realizing dynamic adjustment of the target view range. This reduces the degree to which the field of view is sacrificed during observation and the safety risk that such sacrifice brings, thereby solving, to a certain extent, the safety problem of existing target observation. Moreover, compared with a scheme that maintains a wide field of view while recognizing distant targets, the dynamic adjustment scheme provided by the embodiments of the invention places low requirements on the hardware and software of the automatic driving system, is more amenable to wide application in actual products, and thus also alleviates, to a certain extent, the limited-applicability problem of existing target observation.
Drawings
Fig. 1 is a schematic flow chart illustrating an adjusting method for an observation field according to an embodiment of the present invention;
FIG. 2 is a schematic flow chart illustrating another method for adjusting an observation field according to an embodiment of the present invention;
FIG. 3 is a schematic flow chart illustrating another method for adjusting an observation field according to an embodiment of the present invention;
FIG. 4 is a flowchart illustrating another method for adjusting an observation field according to an embodiment of the present invention;
FIG. 5 is a flowchart illustrating another method for adjusting an observation field according to an embodiment of the present invention;
FIG. 6 is a block diagram of an adjusting apparatus for observing a field of view according to an embodiment of the present invention;
fig. 7 is a schematic physical structure diagram of an adjusting apparatus for observing a field of view according to an embodiment of the present invention;
FIG. 8 is a schematic diagram illustrating an adjusting system for an observation field according to an embodiment of the present invention;
FIG. 9 is a schematic diagram of an embodiment of an autopilot system;
fig. 10 is a schematic structural diagram of a mobile device according to an embodiment of the present invention.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
The mobile device of the present invention may be a vehicle, a ship, or an unmanned aerial vehicle. A typical application scenario of this embodiment is target detection while a vehicle is driving or parking, and more specifically, target detection during driving or obstacle avoidance of an unmanned vehicle.
Target detection is an important technology by which an unmanned driving system perceives the external environment during automatic driving. The farther an object is from the camera in the real world, the smaller its image. The pixel size of objects that visual-image-based target detection can handle is limited; in other words, the distance at which it can "see" is limited. To see farther, i.e., to observe distant objects, one usually either increases the complexity of the detection algorithm model or reduces the field of view without changing the image resolution. However, increasing model complexity raises the computation load and algorithmic complexity, imposes high requirements on software and hardware, cannot meet the real-time requirement of target detection, and is thus limited in application; while reducing the lens field of view sacrifices part of the view, so that the mobile device cannot perceive obstacles in the sacrificed part while driving, creating a greater safety hazard.
The technical solution provided by the invention addresses these problems in the prior art with the following idea: determine and automatically adjust the image field of view according to the motion information of the mobile device and the detection result within the current view range, realizing adaptive switching of the image field of view.
The following describes the technical solutions of the present invention and how to solve the above technical problems with specific embodiments. The following several specific embodiments may be combined with each other, and details of the same or similar concepts or processes may not be repeated in some embodiments. Embodiments of the present invention will be described below with reference to the accompanying drawings.
Example one
The embodiment of the invention provides a method for adjusting an observation field of view, which can in particular be used in an automatic driving system of a vehicle. Referring to fig. 1, the method includes the following steps:
s102, obtaining motion information and an initial observation image of the mobile device.
On the one hand, the motion information of the mobile device describes its motion state and, in a specific implementation, may include but is not limited to: the travel speed of the mobile device and the acceleration of the mobile device.
To acquire the motion information, a speed-measuring device may be integrated into the execution body of the method (hereinafter, for convenience, the adjustment device of the observation field of view) so as to collect the motion information directly; alternatively, the motion information may be obtained by interacting with a controller of the mobile device, such as a main controller or a driving controller, to retrieve the motion information that the controller has collected.
On the other hand, the initial observation image is the complete image that the current image acquisition equipment of the mobile device can capture; it can in effect serve as a full-view-range image.
In a concrete implementation, the image acquisition device for capturing the initial observation image can reuse existing hardware. More specifically, it may be a device with a relatively high resolution, so that a usable resolution remains in each of the different view ranges.
In one possible design, the imaging field of view of the image capture device may be 120 DFOV. And/or, in one possible design, the imaging size of the image capture device is 3840 × 2160.
S104, cropping the initial observation image with the current view range to obtain a current view image.
In the embodiment of the present invention, a plurality of different view ranges are designed, and each view range corresponds to a unique field of view (FOV); in other words, the FOVs of any two view ranges differ.
A view image is obtained by cropping the initial observation image with a view range, that is, with the FOV corresponding to that view range. It follows that the view images obtained by cropping the initial observation image with different view ranges differ from one another.
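Under an ideal pinhole model, cropping to a narrower FOV amounts to taking a centered sub-window of the full image. A sketch of the crop-extent computation (treating the 120° figure as a horizontal FOV for simplicity, although the patent states it as a DFOV):

```python
import math

def crop_bounds(full_width_px, full_fov_deg, target_fov_deg):
    """Pixel extent of a centered crop covering target_fov_deg, assuming an
    ideal (distortion-free) pinhole camera of width full_width_px whose
    horizontal FOV is full_fov_deg."""
    # Focal length in pixels from the full image: w/2 = f * tan(FOV/2)
    f = (full_width_px / 2) / math.tan(math.radians(full_fov_deg) / 2)
    half_w = f * math.tan(math.radians(target_fov_deg) / 2)
    center = full_width_px / 2
    return int(round(center - half_w)), int(round(center + half_w))
```

For a 3840-pixel-wide image at 120°, a 60° view keeps the central span `crop_bounds(3840, 120, 60)` → `(1280, 2560)`, i.e. one third of the image width, since tan(30°)/tan(60°) = 1/3.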
In a specific implementation, view gears can be designed on this basis, and a correspondence between view gear and FOV established, so that each gear corresponds to a unique FOV and hence to a unique view range and view image. Specifically, a correspondence may be established in which the view gear increases as the FOV decreases; alternatively, one in which the view gear decreases as the FOV decreases (i.e., increases as the FOV increases).
Thanks to the gear design, subsequent adjustment toward a target view range maps onto switching between gears with preset FOV values. This is more convenient to implement, improves the timeliness of the adjustment scheme, and meets the timeliness requirements that each function places on observation while the mobile device is driving.
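The gear design can be as simple as a fixed lookup table. The gear and FOV values below are assumed for illustration; the patent does not specify them:

```python
# Assumed gear table: gear number rises as the FOV narrows (one of the two
# correspondence designs described above); values are illustrative only.
FOV_BY_GEAR = {1: 120.0, 2: 90.0, 3: 60.0, 4: 30.0}
GEAR_BY_FOV = {fov: gear for gear, fov in FOV_BY_GEAR.items()}

def fov_for_gear(gear):
    """Unique FOV preset for a view gear."""
    return FOV_BY_GEAR[gear]

def gear_for_fov(fov_deg):
    """Inverse lookup; well defined because no two gears share an FOV."""
    return GEAR_BY_FOV[fov_deg]
```

Because each gear holds a preset FOV, "adjust to the target view range" reduces to a single table lookup plus a gear switch.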
S106, determining a target view range according to the motion information of the mobile device and the current view image.
Specifically, a first view range corresponding to the motion information of the mobile device may be acquired, and an object detection result may be acquired for the current view image; the target view range is then determined from the first view range and the object detection result.
The first view range describes the minimum view range that the motion information of the mobile device can correspond to; in a concrete application scenario it may be embodied as a minimum field of view. In one possible design, if a correspondence in which the view gear increases as the FOV decreases has been pre-established, the maximum gear corresponding to the motion information is obtained; conversely, if the view gear decreases as the FOV decreases, the minimum gear corresponding to the motion information is obtained.
The object detection result of an image indicates whether the image contains an object, whether it contains an object of interest, and the size of the object of interest.
The object of interest may be preset, and there may be at least one object and at least one category; the embodiments of the present invention place no particular limit on this. In a concrete unmanned-driving scenario, taking a vehicle as an example, the object of interest may be a vehicle ahead or an obstacle in the lane.
The size of the object of interest may be embodied as the size of the region the object occupies in the image. In one possible design, if the region is framed by a rectangular box, its size may be expressed as the width of the box (the length, or the length of the diagonal, may equally represent the box's size). In another possible design, if the region is outlined by a circular box, its size may be expressed as the diameter (or radius) of the circle.
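The two region-size conventions described above can be captured in a small helper (a sketch; the dict-based region format is an assumption):

```python
def region_size(region):
    """Scalar size of an object-of-interest region, following the two designs
    described above: a rectangular box measured by its width, a circular box
    by its diameter."""
    if region["kind"] == "rect":
        return region["width"]
    if region["kind"] == "circle":
        return 2 * region["radius"]
    raise ValueError("unknown region kind: %r" % region["kind"])
```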
In other words, the object detection result of the current view image acquired by this step may include, but is not limited to: whether the current view image contains the object of interest and the size of the region where the object of interest is located.
In addition, in the embodiment of the present invention, the object detection result of the current view image may be produced by the execution body of the method (hereinafter, the observation-field adjustment device) performing object detection itself; alternatively, the detection result may be obtained by interacting with another object detection device, where the interaction may be active acquisition or passive reception.
S108, adjusting the observation field of view to the target view range.
Specifically, the field of view is adjusted from the current FOV to the FOV corresponding to the target field of view, and subsequent observation or other processing is performed with the adjusted FOV.
In a possible design, if the correspondence between view gear and FOV is pre-established, this step adjusts the observation field from the current view gear to the gear corresponding to the target view range.
More specifically, when this step is performed, the observation field of view may be gradually adjusted from the current field of view to the target field of view, or the observation field of view may be switched from the current field of view to the target field of view.
Through this scheme, dynamic adjustment of the target view range is realized, reducing the degree to which the field of view is sacrificed during observation and the resulting safety risk, and thereby solving, to a certain extent, the safety problem of existing target observation. Moreover, compared with a scheme that maintains a wide field of view while recognizing distant targets, the dynamic adjustment scheme provided by the embodiments of the invention places low requirements on the hardware and software of the automatic driving system, is more amenable to wide application in actual products, and alleviates, to a certain extent, the limited-applicability problem of existing target observation.
The following further describes a specific implementation manner of step S106 in the foregoing embodiments. Specifically, reference may be made to the implementation flow shown in fig. 2:
s1062, acquiring a first view range corresponding to the motion information of the mobile device.
One of the most important safety indicators while an autonomous vehicle drives is the time to collision with the closest object on the road ahead. While the vehicle travels at a given speed, the average braking distance corresponding to that speed can be obtained. The farthest object that the target detection algorithm can detect occupies only a limited number of pixels in the image, and for an imaging system of comparable resolution, the number of pixels an object occupies grows as the FOV narrows. The first view range can therefore be obtained by converting the minimum region size that the farthest target must occupy at that distance into a minimum FOV.
Besides executing the above flow in real time, in another implementation a correspondence between each view range (or view gear) and a motion-information range of the mobile device (such as a vehicle-speed band) may be established in advance based on the same algorithm. Executing this step then only requires determining which band the motion information falls into; the view range corresponding to that band is the first view range. Because the correspondence is pre-established, this greatly simplifies the data volume and processing steps compared with the real-time scheme, better supports real-time data processing, and helps meet the timeliness requirement of the observation process.
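Both variants of step S1062 can be sketched together: a geometric conversion from the required pixel size of the farthest target to a maximum admissible FOV (pinhole model), and the precomputed speed-band lookup the paragraph above describes. All numeric values here are assumptions for illustration:

```python
import math

OUT_WIDTH_PX = 3840       # output image width (matches the 3840x2160 example)
MIN_TARGET_PX = 32        # assumed minimum pixel width for reliable detection

def max_fov_for_target(target_width_m, distance_m,
                       out_width_px=OUT_WIDTH_PX, min_px=MIN_TARGET_PX):
    """Largest FOV (degrees) at which a target of the given physical width,
    observed at the given distance (e.g. the braking distance for the current
    speed), still spans at least min_px pixels.
    Pinhole model: pixels = f * W / Z with f = (out_width/2) / tan(FOV/2)."""
    tan_half = out_width_px * target_width_m / (2.0 * distance_m * min_px)
    return 2.0 * math.degrees(math.atan(tan_half))

# Precomputed speed-band -> first view gear lookup (bands and gears assumed):
SPEED_BANDS_KMH = [(0, 30, 1), (30, 60, 2), (60, 90, 3), (90, float("inf"), 4)]

def first_gear_for_speed(speed_kmh):
    """Table variant: find the band the current speed falls into."""
    for lo, hi, gear in SPEED_BANDS_KMH:
        if lo <= speed_kmh < hi:
            return gear
    raise ValueError("negative speed")
```

The lookup table is what would actually ship: the geometric conversion runs offline once per speed band, and the runtime path is a single comparison chain.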
S1064, acquiring an object detection result of the current view image.
In one implementation, object detection on the current view image may target all objects; when objects are found, it is further necessary to identify whether they include the object of interest and the size of the region where the object of interest is located. Identifying whether an object is the object of interest can be done with image recognition and similar techniques; alternatively, the result may be output to the user side and the object indicated by the returned operation information taken as the object of interest.
In another implementation, object detection on the current view image may target only the object of interest; the detection result may then include, but is not limited to: whether the current view image contains the object of interest, and the size of the region where it is located. That is, detection directly uses the preset object of interest, and no second identification pass is needed afterward, which simplifies processing and improves timeliness.
Further, in addition to the foregoing detection contents, the object detection result may further include: at least one of a location of the object in the image, an object class, and a class confidence.
S1066, determining a target adjustment condition according to the first view range, the object detection result, and preset view adjustment conditions.
This step compares the first view range and the object detection result against the preset view adjustment conditions, to determine which condition they satisfy; that condition is the target adjustment condition.
The view adjustment conditions in the embodiment of the present invention may include: a view expansion condition, a view reduction condition, and a view holding condition, which are respectively the conditions for enlarging the FOV, reducing the FOV, and keeping the FOV unchanged.
In one possible design, if a correspondence in which the view gear increases as the FOV decreases is pre-established, then the view expansion condition is also a gear-reduction condition, the view reduction condition a gear-increase condition, and the view holding condition a gear-holding condition. Conversely, in another design where the view gear decreases as the FOV decreases, the view expansion condition is a gear-increase condition and the view reduction condition a gear-reduction condition.
When determining the target adjustment condition, this step may specifically include two sub-steps:
first, determining, according to the current view image and the first view range, which view adjustment condition is currently satisfied;
then, determining the target adjustment condition from the satisfied condition.
Specifically, the embodiment of the present invention further provides a specific implementation manner of each of the foregoing visual field adjustment conditions, which is detailed in table 1.
TABLE 1
[Table 1, which enumerates the view adjustment conditions, appears only as an image in the original publication; its rows are restated in the text below.]
It should be noted that the relationship between quantities A and B in table 1 is denoted by "greater than", "less than", or "equal to"; taking "greater than" as an example, it means A is greater than B, and the other relations are read analogously and are not detailed again. A "-" in table 1 marks an item that need not be evaluated, or whose result has no effect on the adjustment decision.
Projecting the region of the largest object of interest into a specified view range means: crop the initial observation image with the specified view range to obtain the specified view image, project the largest object of interest into that image, and take its size there as the projected size.
The specified view range referred to in the embodiments of the present invention is a view range larger than the current one. With the pre-established correspondence in which the view gear increases as the FOV decreases, it may be the view range one (or more) gears below the current one; with the correspondence in which the gear increases with the FOV, it is the range one (or more) gears above.
The preset safety size indicates a minimum safety distance during travel of the mobile device; the minimum safety distance may be a preset fixed distance, and the preset safety size is the size of that minimum safety distance projected onto the current visual field image.
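As a rough illustration of this projection, the sketch below assumes the cropped view is resized to a fixed output resolution, so that an object's apparent size scales with the inverse ratio of the FOVs (a hypothetical small-angle approximation; the text does not specify the projection model, so the function name and units are assumptions):

```python
def projected_size(size_in_current: float, current_fov_deg: float,
                   specified_fov_deg: float) -> float:
    """Estimate the apparent size (e.g. in pixels) of an object after
    switching from the current FOV to a wider, specified FOV.

    Assumes the cropped view is resized to a fixed output resolution, so
    apparent size scales roughly with the inverse ratio of the FOVs
    (small-angle approximation; real lenses need the tangent ratio).
    """
    return size_in_current * (current_fov_deg / specified_fov_deg)


# Example: an object spanning 120 px at a 60-degree FOV appears
# about 80 px when the FOV is widened to 90 degrees.
print(projected_size(120.0, 60.0, 90.0))
```

Comparing this projected size against the preset safety size is what distinguishes expansion case 1.2 from maintaining case 3.2.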
Specifically, as shown in Table 1, the visual field expansion condition may include, but is not limited to, the following two cases:
1.1) the current field of view is smaller than the first field of view.
At this time, the current visual field range is too small to meet the requirement that the current motion information of the mobile device imposes on the visual field range, so the visual field needs to be expanded to suit the motion state of the mobile device.
1.2) the current visual field range is larger than the first visual field range, an interested object is detected in the current visual field image, and the size of the area where the largest interested object is projected into the specified visual field range is larger than a preset safety size; wherein the specified viewing range is greater than the current viewing range.
At this time, the current visual field range satisfies the minimum requirement that the motion of the mobile device imposes on the FOV, but the object of interest is relatively large; moreover, even after the FOV is further increased and the object becomes smaller, its size still exceeds the preset safety size, so obstacle analysis on the object remains feasible. The FOV can therefore be expanded to suit the obstacle detection requirement.
Specifically, as shown in Table 1, the visual field reduction condition may include, but is not limited to, the following two cases:
2.1) the current field of view is larger than the first field of view, and no object of interest is detected in the current field of view image.
At this time, the current FOV is relatively large and no obstacle is detected within it; priority should instead be given to obstacles in the vicinity of the mobile device, so the FOV can be appropriately reduced to meet that observation requirement.
2.2) the current visual field range is larger than the first visual field range, an interested object is detected in the current visual field image, and the size of the area where the largest interested object is located is smaller than a preset safety size.
At this time, the size of the object of interest in the current FOV image is smaller than the preset safety size, which satisfies the obstacle observation requirement; moreover, the current FOV is relatively large and leaves room for further reduction, so the FOV can be reduced to meet the requirement of observing obstacles at a distance.
Specifically, as shown in Table 1, the visual field maintaining condition may include, but is not limited to, the following three cases:
3.1) the current field of view is equal to the first field of view, and no object of interest is detected in the current field of view image.
At this time, the current visual field range already corresponds to the minimum FOV allowed by the motion information of the mobile device, and no object of interest appears within a sufficiently long distance, so the current visual field range is maintained without adjustment.
3.2) the current visual field range is larger than or equal to the first visual field range, an interested object is detected in the current visual field image, the size of the area where the maximum interested object is located is larger than a preset safety size, and the size of the area where the maximum interested object is located projected into the specified visual field range is smaller than the preset safety size; wherein the specified viewing range is greater than the current viewing range.
At this time, an object of interest exists near the mobile device, and its size after processing with the current FOV is larger than the preset safety size; however, further increasing the FOV would shrink the object to a size unfavorable for obstacle analysis. The existing visual field is therefore suitable and should not be adjusted again, and the current visual field range is maintained.
3.3) the current visual field range is larger than or equal to the first visual field range, the object of interest is detected in the current visual field image, and the size of the area where the largest object of interest is located is equal to a preset safety size.
At this time, the current visual field range is greater than or equal to the minimum FOV corresponding to the motion information of the mobile device, indicating that the object of interest is far enough from the mobile device; however, the size of the region of the detected largest object of interest equals the preset safety size, and further reducing the FOV would fail to meet the minimum requirement that the motion of the mobile device imposes on the FOV. The visual field range therefore does not need to be adjusted and is maintained.
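Cases 1.1-3.3 above can be sketched as a single classification function. The names, the scalar comparison of ranges, and the `None`-for-no-detection convention below are illustrative assumptions, not the patent's notation:

```python
from typing import Optional

EXPAND, REDUCE, MAINTAIN = "expand", "reduce", "maintain"

def classify_condition(current_range: int, first_range: int,
                       largest_size: Optional[float],
                       projected_size: Optional[float],
                       safe_size: float) -> str:
    """Decide which visual field adjustment condition holds (Table 1 sketch).

    largest_size: size of the region of the largest object of interest in
    the current view image, or None if no object was detected;
    projected_size: that size projected into the wider, specified range.
    """
    if current_range < first_range:
        return EXPAND                               # case 1.1
    if largest_size is None:
        # cases 2.1 / 3.1: no object of interest detected
        return REDUCE if current_range > first_range else MAINTAIN
    if current_range > first_range and largest_size < safe_size:
        return REDUCE                               # case 2.2
    if largest_size > safe_size and projected_size is not None:
        # case 1.2 (still safe when widened) vs case 3.2 (would shrink too far)
        return EXPAND if projected_size > safe_size else MAINTAIN
    return MAINTAIN                                 # case 3.3: size equals safe size
```

For example, with a current range above the first range, a detected object larger than the safety size, and a projected size still above the safety size, the function returns `"expand"` (case 1.2).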
Based on the comparisons between the current visual field image and the first visual field range, together with the above visual field adjustment conditions, the condition satisfied by the current motion and observation situation can be determined. Specifically, the comparison may be performed against each condition in Table 1 one by one to determine the satisfied condition; alternatively, the conditions may be checked in a fixed order, e.g., the expansion condition first, stopping as soon as a condition is satisfied, otherwise proceeding to the reduction condition, and then judging the remaining conditions in turn.
In determining the target adjustment condition, the final target adjustment condition may be determined in at least two ways:
A real-time processing scheme. That is, according to the first visual field range and the object detection result, the visual field adjustment condition satisfied by the current visual field range is taken as the target adjustment condition. This scheme enables real-time adjustment of the visual field range, ensuring that the adjusted range truly matches the motion state and observation state of the mobile device, with high observation accuracy and real-time performance.
Alternatively,
A delayed processing scheme. That is, according to the first visual field range and the object detection result, the visual field adjustment condition satisfied by the current visual field range is obtained, together with the conditions satisfied by at least one frame before the current frame; the number of times each condition is satisfied is then counted, and the proportion of the most frequently satisfied condition among all conditions is computed. If that proportion is greater than or equal to a preset proportion threshold, the most frequently satisfied condition is determined as the target adjustment condition.
The proportion threshold can be customized according to actual needs, e.g., preset to 50%. In a concrete implementation, the comparison is performed for each frame, and the number of times each visual field adjustment condition is satisfied is counted accordingly; for example, if the expansion condition is satisfied, its count is incremented by one, and the other conditions are handled similarly. Then, at each frame, the N-1 preceding frames plus the current frame (N being an integer greater than 1) are considered, and if the most frequently satisfied condition among these N frames occurs often enough (its proportion is greater than or equal to the preset proportion threshold), it is determined as the target adjustment condition and the subsequent adjustment is performed.
Compared with the real-time processing scheme, this scheme gives the observation and adjustment process a buffering period: the FOV is determined and switched through multi-frame observation and recognition, which avoids problems such as resource consumption caused by excessively frequent adjustments and yields higher accuracy.
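A minimal sketch of the delayed scheme, assuming a fixed sliding window of N frames and the illustrative 50% default threshold (class and method names are assumptions):

```python
from collections import Counter, deque
from typing import Optional

class DelayedSwitcher:
    """Buffer the per-frame adjustment condition over the last N frames and
    confirm a switch only when the most frequent condition reaches the
    ratio threshold."""

    def __init__(self, n_frames: int = 5, ratio_threshold: float = 0.5):
        self.window = deque(maxlen=n_frames)
        self.ratio_threshold = ratio_threshold

    def observe(self, condition: str) -> Optional[str]:
        """Record this frame's condition; return the target adjustment
        condition if the switch criterion is met, else None."""
        self.window.append(condition)
        if len(self.window) < self.window.maxlen:
            return None  # not enough frames observed yet
        top, count = Counter(self.window).most_common(1)[0]
        if count / len(self.window) >= self.ratio_threshold:
            return top
        return None
```

With `n_frames=4` and a 0.75 threshold, three "expand" observations out of four confirm the expansion, while an even split of conditions returns no switch and falls back to the next frame.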
S1068, determining the target visual field range according to the current visual field range and the target adjustment condition.
In this step, the target view range may be determined according to a preset adjustment step.
In one possible implementation, in a scenario where the correspondence between the visual field gear and the FOV is preset such that the gear increases as the FOV decreases, one gear (or several, not detailed again) may be used as the adjustment step. Then, if the target adjustment condition is determined to be the expansion condition, the gear corresponding to the current visual field range is decreased by one to obtain the target visual field gear; if it is the reduction condition, the gear is increased by one; and if it is the maintaining condition, the current visual field range is kept as the target visual field range. The scheme also applies to a scenario where the gear and the FOV increase in the same order, except that the target gear is determined with the increase and decrease reversed, which is not described again.
In another possible implementation, the FOV angle corresponding to the current visual field range may be directly increased or decreased by a preset angle, or maintained, to obtain the target FOV.
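Under the "same increasing order" correspondence (gear rises with FOV), the one-gear adjustment step might be sketched as follows; the gear table and angles below are made-up examples, not values from the patent:

```python
# Hypothetical gear table: gear index rises as the FOV (degrees) rises,
# i.e. the view range widens with the gear.
FOV_BY_GEAR = [30.0, 45.0, 60.0, 90.0, 120.0]

def next_gear(current_gear: int, condition: str) -> int:
    """Step the view gear by one according to the target adjustment
    condition, clamped to the bounds of the gear table."""
    if condition == "expand":
        current_gear += 1
    elif condition == "reduce":
        current_gear -= 1
    # "maintain" (or any other value) leaves the gear unchanged
    return max(0, min(current_gear, len(FOV_BY_GEAR) - 1))
```

Under the opposite correspondence described above (gear rises as FOV falls), the `+1` and `-1` would simply be swapped.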
To facilitate understanding of the foregoing schemes, the embodiment of the present invention provides two possible implementations of the foregoing design, please refer to fig. 3 and fig. 4. Fig. 3 and 4 illustrate a mobile device as a vehicle.
Fig. 3 is a schematic flow chart illustrating another method for adjusting an observation field according to an embodiment of the present invention. As shown in fig. 3, the method includes:
s301, acquiring an initial observation image acquired by the current frame.
And S302, acquiring a maximum view gear corresponding to the current speed by using the current speed of the vehicle.
That is, the minimum FOV corresponding to the current vehicle speed, i.e., the first field of view.
And S303, utilizing the current view gear to cut the initial observation image to obtain the current view image.
And S304, carrying out object detection on the current view image to obtain an object detection result.
S305, determining the view adjusting condition met by the current frame motion and the observation condition according to the object detection result and the maximum view gear.
S306, acquiring visual field adjusting conditions which are respectively met by continuous N-1 frames before the current frame.
S307, according to the visual field adjusting conditions respectively met by the continuous N frames, determining the visual field adjusting condition with the highest meeting frequency.
And S308, judging whether the switching condition is met according to the visual field adjusting condition with the highest meeting frequency.
That is, it is determined whether the proportion of the visual field adjustment condition with the highest number of times in each visual field adjustment condition is greater than or equal to a preset proportion threshold, if yes, the switching condition is satisfied, and S309 is executed; if not, the switching condition is not satisfied, and the process returns to S301 to continue the next frame processing.
S309, determining a target view gear according to the current view gear and the visual field adjustment condition satisfied with the highest frequency.
S310, adjusting the observation view from the current view gear to the target view gear.
Fig. 4 is a schematic flow chart illustrating another method for adjusting an observation field according to an embodiment of the present invention. In the scene shown in fig. 4, a correspondence relationship between the field of view position and the increasing order of the FOV is preset. Specifically, as shown in fig. 4, the method includes:
s401, acquiring a maximum view gear and a current view gear corresponding to the current vehicle speed and an object detection result of the current view image.
The specific implementation manner is as described above, and is not described herein again.
S402, judging whether the current view gear is smaller than the maximum view gear; if yes, executing S411; if not, executing S403.
S403, judging whether an object of interest exists in the object detection result; if yes, go to S404; if not, go to S405.
S404, judging whether the size of the area where the largest interesting object is located is smaller than a preset safety size; if yes, go to S405; if not, go to step S406.
S405, judging whether the current view gear is larger than the maximum view gear; if yes, go to step S412; if not, go to S413.
S406, judging whether the size of the area where the largest interesting object is projected to the specified view field is larger than a preset safety size; if yes, go to S411; if not, go to S413.
And S411, adding one to the current view gear to obtain a target view gear.
That is, if the visual field expansion condition is satisfied and the FOV is increased, the gear is increased by one.
And S412, subtracting one from the current view gear to obtain a target view gear.
That is, if the visual field reduction condition is satisfied and the FOV is decreased, the gear is decreased by one.
And S413, determining the current view gear as the target view gear.
And S414, adjusting the observation view from the current view gear to the target view gear.
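The Fig. 4 decision flow (S402-S413) can be sketched as below, assuming the view gear rises together with the FOV so that `max_gear` corresponds to the first (minimum required) field of view, and using `None` to mark "no object of interest detected" (both conventions are assumptions of this sketch):

```python
from typing import Optional

def decide_target_gear(current_gear: int, max_gear: int,
                       largest_size: Optional[float],
                       projected_size: Optional[float],
                       safe_size: float) -> int:
    """Return the target view gear following the S402-S413 branches."""
    if current_gear < max_gear:           # S402: below the required range
        return current_gear + 1           # S411: expand by one gear
    if largest_size is None or largest_size < safe_size:   # S403 / S404
        if current_gear > max_gear:       # S405: room left to reduce
            return current_gear - 1       # S412: reduce by one gear
        return current_gear               # S413: keep the current gear
    if projected_size is not None and projected_size > safe_size:  # S406
        return current_gear + 1           # S411: still safe to widen
    return current_gear                   # S413: keep the current gear
```

For instance, with the gear at the maximum, a detected object larger than the safety size, and a projected size still above the safety size, the sketch widens by one gear (S406 → S411).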
By the scheme, the observation visual field can be adjusted in real time. Based on the adjustment of the observation visual field, the embodiment of the invention further provides an application scene of the scheme. Referring to the flow shown in fig. 5, the method may further include the following steps:
and S110, utilizing the target view range to cut the initial observation image to obtain a target view image.
And S112, carrying out object detection on the target view image to obtain an object detection result of the target view image.
That is, by adjusting the observation visual field in real time, the target visual field range for the next frame can be determined at the current frame, so that the next frame observes objects with a visual field range that better matches the motion and observation requirements of the mobile device. This helps improve the accuracy with which the mobile device detects obstacles while driving, and in turn improves driving safety.
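A minimal stand-in for the cutting step, assuming the view range maps to a centered crop window (the gear-to-crop-size mapping is not specified in the text and is left abstract here):

```python
def center_crop(image, crop_h, crop_w):
    """Center-crop a crop_h x crop_w window out of an image given as a
    list of pixel rows - a minimal stand-in for cutting the initial
    observation image with the target view range."""
    h, w = len(image), len(image[0])
    top, left = (h - crop_h) // 2, (w - crop_w) // 2
    return [row[left:left + crop_w] for row in image[top:top + crop_h]]

frame = [[0] * 8 for _ in range(6)]   # a dummy 6 x 8 "image"
cropped = center_crop(frame, 4, 4)
print(len(cropped), len(cropped[0]))  # -> 4 4
```

Object detection would then run on `cropped` (the target view image) to produce the next object detection result.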
It is to be understood that some or all of the steps or operations in the above-described embodiments are merely examples, and the embodiments of the present invention may perform other operations or variations of the various operations. Further, the various steps may be performed in a different order than presented in the above-described embodiments, and possibly not all of the operations in the above-described embodiments need be performed.
Example two
Based on the method for adjusting the observation field provided in the first embodiment, the embodiment of the present invention further provides an embodiment of an apparatus for implementing the steps and the method in the first embodiment of the method.
Referring to fig. 6, an adjusting apparatus 600 for observing a visual field according to an embodiment of the present invention includes:
an obtaining module 61, configured to obtain motion information and an initial observation image of the mobile device;
a cutting module 62, configured to perform cutting processing on the initial observation image by using the current view range to obtain a current view image;
a determining module 63, configured to determine a target view range according to the motion information of the mobile device and the current view image;
and an adjusting module 64 for adjusting the observation field of view to the target field of view.
In one possible design, the determining module 63 is specifically configured to:
acquiring a first visual field range corresponding to the motion information of the mobile device;
acquiring an object detection result of the current view image, wherein the object detection result comprises: whether the current view field image contains the interested object and the size of the region where the interested object is located;
and determining the target visual field range according to the first visual field range and the object detection result.
The determining module 63 is specifically configured to:
determining a target adjusting condition according to the first visual field range, the object detection result and a preset visual field adjusting condition;
and determining the target visual field range according to the current visual field range and the target adjusting condition.
The visual field adjustment condition according to the embodiment of the present invention may include: a visual field enlarging condition, a visual field reducing condition, and a visual field maintaining condition.
Wherein the visual field expansion condition includes:
the current field of view is less than the first field of view;
or,
the current visual field range is larger than the first visual field range, an interested object is detected in the current visual field image, and the size of the area where the largest interested object is projected into the specified visual field range is larger than the preset safety size; wherein the specified viewing range is greater than the current viewing range.
Wherein the visual field reduction condition includes:
the current field of view range is greater than the first field of view range, and no object of interest is detected in the current field of view image;
or,
the current visual field range is larger than the first visual field range, an interested object is detected in the current visual field image, and the size of the area where the largest interested object is located is smaller than a preset safety size.
Wherein the visual field maintaining condition includes:
the current field of view range is equal to the first field of view range, and no object of interest is detected in the current field of view image;
or,
the current visual field range is larger than or equal to the first visual field range, an interested object is detected in the current visual field image, the size of the area where the largest interested object is located is larger than a preset safety size, and the size of the area where the largest interested object is located projected into the specified visual field range is smaller than the preset safety size; wherein the specified horizon is greater than the current horizon;
or,
the current visual field range is larger than or equal to the first visual field range, an object of interest is detected in the current visual field image, and the size of the region where the largest object of interest is located is equal to a preset safety size.
Based on the foregoing design, in a possible implementation manner, the determining module 63 is specifically configured to:
and acquiring a visual field adjusting condition met by the current visual field range according to the first visual field range and the object detection result to serve as the target adjusting condition.
In another possible implementation manner, the determining module 63 is specifically configured to:
acquiring a visual field adjusting condition met by the current visual field range according to the first visual field range and the object detection result;
acquiring a visual field adjusting condition met by at least one frame before a current frame;
counting the satisfying times of each visual field adjusting condition;
acquiring the proportion of the visual field adjusting condition which meets the highest frequency in each visual field adjusting condition;
and if the proportion is greater than or equal to a preset proportion threshold value, determining the visual field adjusting condition with the highest number of times as the target adjusting condition.
In another possible design, the adjusting module 64 is specifically configured to:
gradually adjusting the observation visual field from the current visual field range to the target visual field range, or,
switching the observation visual field from the current visual field range directly to the target visual field range.
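The gradual variant might step the FOV toward the target in fixed increments, while the switching variant jumps directly; the sketch below uses an assumed step size purely for illustration:

```python
def gradual_fovs(current_fov: float, target_fov: float, step: float = 5.0):
    """Yield the intermediate FOV angles when stepping gradually from the
    current range toward the target range (step size is illustrative)."""
    fov = current_fov
    while abs(target_fov - fov) > step:
        fov += step if target_fov > fov else -step
        yield fov
    yield target_fov  # final step lands exactly on the target

# Direct switching is simply the degenerate case of one jump to target_fov.
```

Gradual stepping smooths the visual transition between ranges, at the cost of a few frames of delay before the target range is fully in effect.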
The motion information of the mobile device according to the embodiment of the present invention includes: a travel speed of the mobile device, an acceleration of the mobile device.
In an embodiment of the invention, any field of view range corresponds to a unique field angle FOV.
In addition, in the embodiment of the present invention, the cropping module 62 is further configured to crop the initial observation image using the target view range to obtain a target view image;
the detection module (not shown in fig. 6) is further configured to: and carrying out object detection on the target view image to obtain an object detection result of the target view image.
The device 600 for adjusting the observation field of view shown in fig. 6 can be used to implement the technical solution of the above method embodiment; its implementation principle and technical effects may be further understood with reference to the related description in the method embodiment. Optionally, the device 600 may be a processor in a mobile device, a cloud server, or a terminal device.
It should be understood that the division of the modules of the device 600 shown in fig. 6 is merely a logical division; in actual implementation they may be wholly or partially integrated into one physical entity or physically separated. These modules may all be implemented as software invoked by a processing element, or entirely in hardware, or partly as software invoked by a processing element and partly in hardware. For example, the adjusting module 64 may be a separately installed processing element, or may be integrated into a chip of the device 600, or may be stored in the memory of the device 600 in the form of program code that one of the processing elements of the device 600 calls to execute the functions of the module. The other modules are implemented similarly. In addition, all or some of these modules may be integrated together or implemented independently. The processing element described here may be an integrated circuit with signal processing capability. In implementation, each step of the above method, or each of the above modules, may be completed by an integrated logic circuit of hardware in a processor element or by instructions in the form of software.
For example, the above modules may be one or more integrated circuits configured to implement the above methods, such as one or more Application Specific Integrated Circuits (ASICs), one or more digital signal processors (DSPs), or one or more Field Programmable Gate Arrays (FPGAs). For another example, when one of the above modules is implemented by a processing element scheduling program code, the processing element may be a general-purpose processor, such as a Central Processing Unit (CPU) or another processor capable of calling programs. As another example, these modules may be integrated together and implemented in the form of a system-on-a-chip (SOC).
Further, an embodiment of the present invention provides an adjusting apparatus for an observation field, referring to fig. 7, the adjusting apparatus 600 for an observation field includes:
a memory 610;
a processor 620; and
a computer program;
wherein the computer program is stored in the memory 610 and configured to be executed by the processor 620 to implement the method as described in any of the implementations of the embodiments above.
The number of the processors 620 in the adjustment apparatus 600 for observing a field of view may be one or more, and the processors 620 may also be referred to as processing units, which may implement a certain control function. The processor 620 may be a general purpose processor, a special purpose processor, or the like. In an alternative design, the processor 620 may also store instructions that can be executed by the processor 620 to enable the device 600 for adjusting the observation field of view to perform the method described in the above method embodiment.
In yet another possible design, the device 600 for adjusting the observation field of view may comprise a circuit, which may implement the functions of transmitting or receiving or communicating in the foregoing method embodiments.
Optionally, the number of the memories 610 in the device 600 for adjusting the observation field of view may be one or more, and the memories 610 have instructions or intermediate data stored thereon, and the instructions may be executed on the processor 620, so that the device 600 for adjusting the observation field of view performs the method described in the above method embodiments. Optionally, other related data may also be stored in the memory 610. Optionally, instructions and/or data may also be stored in processor 620. The processor 620 and the memory 610 may be provided separately or may be integrated together.
In addition, as shown in fig. 7, a transceiver 630 is further disposed in the adjustment apparatus 600 for observing a visual field, wherein the transceiver 630 may be referred to as a transceiver unit, a transceiver circuit, or a transceiver, etc. for performing data transmission or communication with a test device or other terminal devices, which is not described herein again.
As shown in fig. 7, the memory 610, the processor 620, and the transceiver 630 are connected by a bus and communicate.
If the device 600 for adjusting the observation field is used to implement the method corresponding to fig. 2, for example, the transceiver 630 may deliver the object under test to each test terminal and may also receive test operation data fed back by each test terminal, while the processor 620 performs the corresponding determination or control operations; optionally, corresponding instructions may also be stored in the memory 610. The specific processing of each component may refer to the related description of the previous embodiments.
Furthermore, the embodiment of the present invention provides a readable storage medium, on which a computer program is stored, where the computer program is executed by a processor to implement the method according to any one of the implementation manners of the embodiment.
Also, an embodiment of the present invention provides an observation field adjustment system, please refer to fig. 8, where the observation field adjustment system 800 includes:
an image acquisition device 810 for acquiring an initial observation image;
an adjusting device 600 for observing a field of view, configured to perform the method according to any one of the implementation manners of the embodiment one.
In addition, an embodiment of the present invention provides an automatic driving system, please refer to fig. 9, where the automatic driving system 900 includes:
an image acquisition device 810 for acquiring an initial observation image;
an observation field of view adjustment apparatus 600 for performing the method according to any one of the embodiments;
and an automatic driving control device 910, configured to control the mobile device to automatically drive according to the target view range.
In addition, an embodiment of the present invention provides a mobile device, please refer to fig. 10, where the mobile device 1000 includes:
a mobile device body 1010;
an autopilot system 900.
Since each module in this embodiment can execute the method shown in the first embodiment, reference may be made to the related description of the first embodiment for a part of this embodiment that is not described in detail.
Furthermore, it should be noted that although the terms "first", "second", etc. may be used in the embodiments of the present invention to describe various elements, these elements should not be limited by these terms, which are only used to distinguish one element from another. For example, a first element could be termed a second element and, similarly, a second element could be termed a first element without changing the meaning of the description, so long as all occurrences of the "first element" are renamed consistently and all occurrences of the "second element" are renamed consistently. The first element and the second element are both elements, but they may not be the same element.
The terms used in the embodiments of the present invention are used for describing the embodiments only and are not used to limit the claims. As used in the description of the embodiments and the claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. Similarly, the term "and/or" as used in the embodiments of the present invention is meant to encompass any and all possible combinations of one or more of the associated listed items. Furthermore, the terms "comprises" and/or "comprising", when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
The above description of the technology may refer to the accompanying drawings, which form a part hereof, and in which is shown by way of illustration embodiments in which the described embodiments may be practiced. These embodiments, while described in sufficient detail to enable those skilled in the art to practice them, are non-limiting; other embodiments may be utilized and changes may be made without departing from the scope of the described embodiments. For example, the order of operations described in a flowchart is non-limiting, and thus the order of two or more operations illustrated in and described in accordance with the flowchart may be altered in accordance with several embodiments. As another example, in several embodiments, one or more operations illustrated in and described with respect to the flowcharts are optional or may be eliminated. Additionally, certain steps or functions may be added to the disclosed embodiments, or two or more steps may be permuted in order. All such variations are considered to be encompassed by the disclosed embodiments and the claims.
Additionally, terminology is used in the foregoing description of the technology to provide a thorough understanding of the described embodiments. However, not all of this detail is required to implement the described embodiments. Accordingly, the foregoing description of the embodiments has been presented for purposes of illustration and description. The embodiments presented in the foregoing description, and the examples disclosed in accordance with them, are provided solely to add context and aid in the understanding of the described embodiments. The above description is not intended to be exhaustive or to limit the described embodiments to the precise forms disclosed. Many modifications, alternative uses, and variations are possible in light of the above teaching. In some instances, well-known process steps have not been described in detail in order to avoid unnecessarily obscuring the described embodiments.
Finally, it should be noted that the above embodiments are only used to illustrate the technical solutions of the present invention, not to limit them. Although the invention has been described in detail with reference to the foregoing embodiments, it will be understood by those skilled in the art that the technical solutions described in the foregoing embodiments may still be modified, or some or all of their technical features may be equivalently replaced, and such modifications or substitutions do not cause the essence of the corresponding technical solutions to depart from the scope of the technical solutions of the embodiments of the present invention.

Claims (20)

1. A method for adjusting an observation field of view, applied to an automatic driving system of a mobile device, the method comprising:
acquiring motion information of the mobile device and an initial observation image;
cropping the initial observation image using a current field-of-view range to obtain a current field-of-view image;
determining a target field-of-view range according to the motion information of the mobile device and the current field-of-view image;
and adjusting the observation field of view to the target field-of-view range.
2. The method of claim 1, wherein determining the target field-of-view range according to the motion information of the mobile device and the current field-of-view image comprises:
acquiring a first field-of-view range corresponding to the motion information of the mobile device;
acquiring an object detection result of the current field-of-view image, wherein the object detection result comprises: whether the current field-of-view image contains an object of interest, and the size of the region where the object of interest is located;
and determining the target field-of-view range according to the first field-of-view range and the object detection result.
3. The method of claim 2, wherein determining the target field-of-view range according to the first field-of-view range and the object detection result comprises:
determining a target adjustment condition according to the first field-of-view range, the object detection result, and preset field-of-view adjustment conditions;
and determining the target field-of-view range according to the current field-of-view range and the target adjustment condition.
4. The method of claim 3, wherein the field-of-view adjustment conditions comprise: a field-of-view expansion condition, a field-of-view reduction condition, and a field-of-view maintenance condition.
5. The method of claim 4, wherein the field-of-view expansion condition comprises:
the current field-of-view range is smaller than the first field-of-view range;
or,
the current field-of-view range is larger than the first field-of-view range, an object of interest is detected in the current field-of-view image, and the size of the region of the largest object of interest, when projected into a specified field-of-view range, is larger than a preset safety size; wherein the specified field-of-view range is larger than the current field-of-view range.
6. The method of claim 4, wherein the field-of-view reduction condition comprises:
the current field-of-view range is larger than the first field-of-view range, and no object of interest is detected in the current field-of-view image;
or,
the current field-of-view range is larger than the first field-of-view range, an object of interest is detected in the current field-of-view image, and the size of the region where the largest object of interest is located is smaller than a preset safety size.
7. The method of claim 4, wherein the field-of-view maintenance condition comprises:
the current field-of-view range is equal to the first field-of-view range, and no object of interest is detected in the current field-of-view image;
or,
the current field-of-view range is larger than or equal to the first field-of-view range, an object of interest is detected in the current field-of-view image, the size of the region where the largest object of interest is located is larger than a preset safety size, and the size of that region, when projected into a specified field-of-view range, is smaller than the preset safety size; wherein the specified field-of-view range is larger than the current field-of-view range;
or,
the current field-of-view range is larger than or equal to the first field-of-view range, an object of interest is detected in the current field-of-view image, and the size of the region where the largest object of interest is located is smaller than or equal to the preset safety size.
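The expansion, reduction, and maintenance conditions of claims 4-7 can be sketched as follows. This is a hypothetical Python illustration: the function name, the scalar representation of a field-of-view range, and the branch ordering are assumptions, since the claims do not specify how overlapping conditions are prioritized.

```python
def pick_adjustment(current_fov, first_fov, largest_obj_size, safe_size,
                    projected_size=None):
    """Pick "expand", "reduce", or "keep" per the conditions of claims 5-7.

    current_fov      -- current field-of-view range (scalar, e.g. degrees)
    first_fov        -- field-of-view range derived from the motion information
    largest_obj_size -- region size of the largest detected object of interest,
                        or None if no object of interest was detected
    projected_size   -- that region's size when projected into the wider,
                        "specified" field-of-view range
    """
    if current_fov < first_fov:
        return "expand"                       # claim 5, first branch
    if largest_obj_size is None:              # no object of interest detected
        if current_fov > first_fov:
            return "reduce"                   # claim 6, first branch
        return "keep"                         # claim 7, first branch
    if current_fov > first_fov and largest_obj_size < safe_size:
        return "reduce"                       # claim 6, second branch
    if (largest_obj_size > safe_size and projected_size is not None
            and projected_size > safe_size):
        return "expand"                       # claim 5, second branch
    return "keep"                             # claim 7, remaining branches
```

One plausible reading of the scheme: the view widens when the device moves faster (larger first range) or when a nearby object would still loom large in a wider view, and narrows only when nothing of interest justifies the wider crop.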
8. The method according to any one of claims 3-7, wherein determining the target adjustment condition according to the first field-of-view range, the object detection result, and the preset field-of-view adjustment conditions comprises:
acquiring the field-of-view adjustment condition satisfied by the current field-of-view range according to the first field-of-view range and the object detection result, as the target adjustment condition.
9. The method according to any one of claims 3-7, wherein determining the target adjustment condition according to the first field-of-view range, the object detection result, and the preset field-of-view adjustment conditions comprises:
acquiring the field-of-view adjustment condition satisfied by the current field-of-view range according to the first field-of-view range and the object detection result;
acquiring the field-of-view adjustment condition satisfied by at least one frame preceding the current frame;
counting the number of times each field-of-view adjustment condition is satisfied;
acquiring the proportion accounted for by the most frequently satisfied field-of-view adjustment condition;
and if the proportion is greater than or equal to a preset proportion threshold, determining the most frequently satisfied field-of-view adjustment condition as the target adjustment condition.
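The temporal smoothing in claim 9 amounts to majority voting over a sliding window of frames. A minimal sketch follows; the function name, the window representation, and the below-threshold fallback to the current frame's condition are assumptions not stated in the claim.

```python
from collections import Counter

def smooth_condition(per_frame_conditions, ratio_threshold=0.6):
    """Vote over the conditions satisfied by the current frame and at
    least one preceding frame (newest last), as in claim 9.

    Returns the most frequently satisfied condition if its share of the
    window meets the threshold; otherwise falls back to the current
    frame's own condition (an assumed tie-breaking policy).
    """
    counts = Counter(per_frame_conditions)
    condition, n = counts.most_common(1)[0]
    if n / len(per_frame_conditions) >= ratio_threshold:
        return condition
    return per_frame_conditions[-1]
```

Voting over several frames suppresses flicker: a single spurious detection cannot toggle the field of view unless the same condition recurs in most of the window.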
10. The method according to any one of claims 1-7, wherein adjusting the observation field of view to the target field-of-view range comprises:
gradually adjusting the observation field of view from the current field-of-view range to the target field-of-view range; or,
directly switching the observation field of view from the current field-of-view range to the target field-of-view range.
11. The method of any one of claims 1-7, wherein the motion information of the mobile device comprises: a travel speed of the mobile device and an acceleration of the mobile device.
12. The method of any one of claims 1-7, wherein each field-of-view range corresponds to a unique field of view (FOV) angle.
13. The method according to any one of claims 1-7, further comprising:
cropping the initial observation image using the target field-of-view range to obtain a target field-of-view image;
and performing object detection on the target field-of-view image to obtain an object detection result of the target field-of-view image.
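The cropping step of claims 1 and 13 can be illustrated with a center crop whose size is proportional to the desired field-of-view range. This is a simplified sketch: the linear pixels-per-degree mapping and the 120-degree full field of view (echoing claim 16) are assumptions, and a real wide-angle lens would need distortion-aware reprojection rather than a plain rectangular crop.

```python
def crop_to_fov(image, fov_deg, full_fov_deg=120.0):
    """Center-crop a frame so its angular coverage approximates fov_deg.

    image: 2-D list of pixels (rows) captured at full_fov_deg.
    Assumes a linear mapping between pixels and degrees, which is only
    an approximation for wide-angle optics.
    """
    h, w = len(image), len(image[0])
    scale = fov_deg / full_fov_deg          # fraction of the frame to keep
    ch, cw = int(h * scale), int(w * scale)
    top, left = (h - ch) // 2, (w - cw) // 2
    return [row[left:left + cw] for row in image[top:top + ch]]
```

Because the crop is computed from the full initial observation image each time, switching the target range (claim 10) never requires moving the camera: the system simply re-crops the next frame.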
14. An apparatus for adjusting an observation field of view, applied to an automatic driving system of a mobile device, comprising:
a memory for storing a computer program;
a processor;
wherein the computer program is stored in the memory and configured to be executed by the processor to implement the method of any one of claims 1-13.
15. A system for adjusting an observation field of view, comprising:
an image acquisition device for acquiring an initial observation image;
and an apparatus for adjusting the observation field of view, configured to perform the method according to any one of claims 1-13.
16. The system of claim 15, wherein the image acquisition device has a diagonal field of view (DFOV) of 120 degrees.
17. A computer-readable storage medium having stored thereon a computer program,
wherein the computer program, when executed by a processor, implements the method according to any one of claims 1-13.
18. An automatic driving system, comprising:
an image acquisition device for acquiring an initial observation image;
an apparatus for adjusting the observation field of view, configured to perform the method according to any one of claims 1-13;
and an automatic driving control device for controlling a mobile device to drive automatically according to the target field-of-view range.
19. A mobile device, comprising:
a mobile device main body;
and the automatic driving system of claim 18.
20. The mobile device of claim 19, wherein the mobile device comprises at least one of a vehicle, a vessel, or an unmanned aerial vehicle.
CN201980012196.7A 2019-04-30 2019-04-30 Method, device and system for adjusting observation field, storage medium and mobile device Pending CN111712827A (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2019/085261 WO2020220289A1 (en) 2019-04-30 2019-04-30 Method, apparatus and system for adjusting field of view of observation, and storage medium and mobile apparatus

Publications (1)

Publication Number Publication Date
CN111712827A true CN111712827A (en) 2020-09-25

Family

ID=72536811

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201980012196.7A Pending CN111712827A (en) 2019-04-30 2019-04-30 Method, device and system for adjusting observation field, storage medium and mobile device

Country Status (3)

Country Link
EP (1) EP3817374A1 (en)
CN (1) CN111712827A (en)
WO (1) WO2020220289A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112907617B (en) * 2021-01-29 2024-02-20 深圳壹秘科技有限公司 Video processing method and device

Citations (4)

Publication number Priority date Publication date Assignee Title
US20140214271A1 (en) * 2013-01-31 2014-07-31 Electronics And Telecommunications Research Institute Apparatus and method for detecting obstacle adaptively to vehicle speed
US20140277940A1 (en) * 2013-03-15 2014-09-18 Gentex Corporation Display system and method thereof
CN108765490A (en) * 2018-04-04 2018-11-06 科大讯飞股份有限公司 Panorama view adjusting method and device, storage medium and electronic equipment
CN109389073A (en) * 2018-09-29 2019-02-26 北京工业大学 The method and device of detection pedestrian area is determined by vehicle-mounted camera

Family Cites Families (2)

Publication number Priority date Publication date Assignee Title
MY187205A (en) * 2015-10-22 2021-09-10 Nissan Motor Display control method and display control device
CN113163119A (en) * 2017-05-24 2021-07-23 深圳市大疆创新科技有限公司 Shooting control method and device

Also Published As

Publication number Publication date
EP3817374A1 (en) 2021-05-05
WO2020220289A1 (en) 2020-11-05

Similar Documents

Publication Publication Date Title
JP7345504B2 (en) Association of LIDAR data and image data
US20240192320A1 (en) Object detection and detection confidence suitable for autonomous driving
EP3362982B1 (en) Systems and methods for producing an image visualization
CN113196007B (en) Camera system applied to vehicle
EP3474111B1 (en) Target tracking method by an unmanned aerial vehicle
EP3792660B1 (en) Method, apparatus and system for measuring distance
JP2019096072A (en) Object detection device, object detection method and program
WO2018120040A1 (en) Obstacle detection method and device
JP2018032402A (en) System for occlusion adjustment for in-vehicle augmented reality systems
WO2013080745A1 (en) Object detection device
EP3979196A1 (en) Image processing method and apparatus for target detection
US20210078597A1 (en) Method and apparatus for determining an orientation of a target object, method and apparatus for controlling intelligent driving control, and device
US20220414917A1 (en) Method and apparatus for obtaining 3d information of vehicle
CN114255443A (en) Monocular positioning method, device, equipment and storage medium for traffic vehicle
CN111712827A (en) Method, device and system for adjusting observation field, storage medium and mobile device
CN116095473A (en) Lens automatic focusing method, device, electronic equipment and computer storage medium
CN111862226B (en) Hardware design for camera calibration and image preprocessing in a vehicle
CN113256709A (en) Target detection method, target detection device, computer equipment and storage medium
CN111260538B (en) Positioning and vehicle-mounted terminal based on long-baseline binocular fisheye camera
US20210334545A1 (en) Communication Method and Communications Apparatus
WO2021232222A1 (en) Ranging method and apparatus
CN108416305B (en) Pose estimation method and device for continuous road segmentation object and terminal
CN113696896B (en) Road surface information processing method, device, electronic equipment and storage medium
CN114407916B (en) Vehicle control and model training method and device, vehicle, equipment and storage medium
US20240196105A1 (en) Fallback mechanism for auto white balancing

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20200925