CN116980540A - Low-illumination image processing method and device for pod and panoramic pod system - Google Patents

Low-illumination image processing method and device for pod and panoramic pod system

Info

Publication number
CN116980540A
CN116980540A (Application No. CN202310927806.4A)
Authority
CN
China
Prior art keywords
image
low
panoramic
nacelle
cameras
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310927806.4A
Other languages
Chinese (zh)
Inventor
梁立正
王明真
余翔宇
张阳
邢侃侃
李国光
于庆冰
***
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hubei Space Intelligent Technology Co ltd
Original Assignee
Hubei Space Intelligent Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hubei Space Intelligent Technology Co ltd filed Critical Hubei Space Intelligent Technology Co ltd
Priority to CN202310927806.4A
Publication of CN116980540A
Legal status: Pending

Classifications

    • H04N 5/2624 — Studio circuits for obtaining an image composed of whole input images, e.g. split screen
    • G06T 3/403 — Edge-driven scaling; edge-based scaling
    • G06T 3/4038 — Image mosaicing, e.g. composing plane images from plane sub-images
    • G06T 3/4046 — Scaling of whole images or parts thereof using neural networks
    • G06T 3/4053 — Scaling based on super-resolution, i.e. the output image resolution being higher than the sensor resolution
    • H04N 23/61 — Control of cameras or camera modules based on recognised objects
    • H04N 23/698 — Control of cameras or camera modules for achieving an enlarged field of view, e.g. panoramic image capture
    • H04N 23/71 — Circuitry for evaluating the brightness variation
    • H04N 5/2628 — Alteration of picture size, shape, position or orientation, e.g. zooming, rotation, rolling, perspective, translation

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Image Processing (AREA)

Abstract

The application discloses a low-illumination image processing method and device for a nacelle, and a panoramic nacelle system. The method comprises the following steps: acquiring images at different angles through a plurality of cameras carried by the nacelle; judging whether the brightness of an image is below a preset threshold and, if so, applying low-illumination enhancement to the image, otherwise applying super-resolution enhancement; identifying target information in the images; stitching the images from all camera channels into a panoramic image; and cropping the panoramic image according to the viewing range of the ground control display device and transmitting it back. By enhancing low-illumination images, the nacelle system can accurately identify targets under low-light conditions and adapt to operation at any time of day; super-resolution enhancement makes the identified targets and their positions more accurate; and cropping the stitched panorama in real time so that only the portion within the viewing range of the ground control display device is transmitted back improves the real-time performance and stability of the panoramic nacelle system.

Description

Low-illumination image processing method and device for pod and panoramic pod system
Technical Field
The application relates to the technical field of visual image processing, in particular to a low-illumination image processing method and device for a nacelle and a panoramic nacelle system.
Background
When targets are recognised in images acquired by the nacelle, the image quality directly affects the accuracy of the recognition result. When the ambient illuminance is low, image details become unclear, information is lost, and overall quality degrades severely; image enhancement is then required to improve quality for subsequent processing and target recognition. Low-illumination image enhancement addresses the low brightness, low contrast and noise of images captured in low-light environments, restoring brightness and improving the visual effect. With the development of deep learning, image enhancement techniques have been widely applied. However, existing low-illumination enhancement methods focus only on restoring brightness and neglect improvements to image quality, such as detail enhancement. Therefore, to recover a bright, high-quality, high-resolution image from a low-illumination one, super-resolution enhancement is needed in addition to low-illumination enhancement, so that object contours are sharpened and resolution is raised while brightness is restored. Moreover, few existing nacelle systems combine low-illumination enhancement, super-resolution enhancement, target recognition and panorama stitching.
In addition, after panoramic stitching of the images acquired by the nacelle, the resulting panoramic image is large, which burdens communication transmission, and overly slow transmission undermines real-time performance. Yet because the ground display device is of limited size, only one viewing angle of the panorama can be seen at a time, so transmitting the entire panoramic image to the ground display device wastes communication resources unnecessarily.
Disclosure of Invention
In order to solve the above technical problems, the application provides a low-illumination image processing scheme for a nacelle that makes the nacelle system full-scenario and multi-functional and meets its general-purpose requirements under complex field conditions or complex task demands.
Specifically, the technical scheme of the application is as follows:
according to an aspect of the present application, there is provided a low-illuminance image processing method for a nacelle, including the steps of:
s100: images of different angles are acquired through a plurality of cameras carried by the nacelle;
s200: judging whether the brightness of the image is lower than a preset threshold, if so, executing S300, otherwise, executing S400;
s300: performing low-illumination enhancement processing on the image, and returning to S200;
s400: performing super-resolution enhancement processing on the image;
s500: identifying target information in the image;
s600: splicing images corresponding to all paths of cameras into panoramic images;
s700: and cutting out the panoramic image according to the view range of the ground control display equipment and returning the panoramic image.
Further, in S300, the low-illumination enhancement processing enhances images whose brightness is below the preset threshold using the SMG-LLIE model.
Further, in S400, the super-resolution enhancement processing enhances target edges in the image using the EDSR deep network model.
Further, identifying the target information in the image in S500 specifically comprises: performing target recognition on each image with the deep network model YOLOV8, and determining the position coordinates and recognition box of each target in the image.
Further, stitching the images corresponding to the cameras into the panoramic image in S600 specifically comprises: mapping all pixel points in each camera channel's image from planar pixel coordinates to hemispherical longitude-latitude coordinates to obtain the stitched panoramic image.
Further, the step S700 of cropping the panoramic image according to the view range of the ground control display device and returning specifically includes:
and reading the panoramic image range to be checked by the ground control display equipment, calculating to obtain a visual angle, cutting the spliced panoramic image in real time according to the visual angle, and returning.
According to another aspect of the present application, there is provided a low-illuminance image processing apparatus for a nacelle, comprising:
the image acquisition module is used for acquiring multiple paths of images acquired by the pod camera;
the brightness comparison module is used for judging whether the brightness of the image is lower than a preset threshold value;
the low-illumination enhancement module is used for carrying out low-illumination enhancement processing on the image with the brightness lower than a preset threshold value;
the super-resolution enhancement module is used for performing super-resolution enhancement processing on the image;
the target identification module is used for identifying target information in the image;
the panoramic stitching module is used for stitching images corresponding to all paths of cameras into panoramic images;
and the image clipping module is used for clipping the panoramic image according to the viewing range of the ground control display equipment and returning the panoramic image.
According to a third aspect of the present application there is provided a panoramic pod system comprising multiple camera channels and a low-light image processing device for pods as described above.
According to a fourth aspect of the present application there is provided an electronic device comprising a memory and a processor, the memory having stored thereon a computer program loaded and executed by the processor to implement a low-light image processing method for a nacelle as described above.
According to a fifth aspect of the present application, there is provided a computer readable storage medium storing a computer program for implementing the low-light image processing method for a nacelle as described above when executed by a processor.
Compared with the prior art, the application has at least one of the following beneficial effects:
1. the low-illumination image processing method of the application enhances low-illumination images, so that the nacelle system can accurately identify targets under low-light conditions at night and adapt to operation at any time of day.
2. The method uses a deep network model for super-resolution enhancement; a nacelle system applying it achieves better recognition of distant targets, with more accurate target identification and positioning.
3. By adopting multiple camera channels and panorama stitching, the application gives the panoramic pod system an ultra-large viewing range that can meet the application requirements of most scenarios.
4. By cropping the stitched panoramic image in real time and transmitting back only the portion within the viewing range of the ground control display device, the application avoids the transmission burden of the oversized full panorama, reduces the network bandwidth requirement, and improves the real-time performance and stability of the panoramic pod system, allowing it to operate in harsher application environments.
5. The panoramic pod system is multi-functional, offers a large, accurate and long-range sensing and recognition range with highly real-time data return, can meet more complex field conditions or task demands, and is highly integrated, avoiding complicated wiring steps.
Drawings
The above features, technical features, advantages and implementation of the present application will be further described in the following description of preferred embodiments with reference to the accompanying drawings in a clear and easily understood manner.
Fig. 1 is a flowchart of a low-light image processing method for a nacelle according to an embodiment of the application.
Detailed Description
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the specific embodiments of the present application are described below with reference to the accompanying drawings. It is evident that the drawings in the following description are only examples of the application, from which a person skilled in the art could obtain other drawings and embodiments without inventive effort.
For simplicity of the drawing, only the parts relevant to the application are schematically shown in each drawing, and they do not represent the actual structure thereof as a product. Additionally, in order to simplify the drawing for ease of understanding, components having the same structure or function in some of the drawings are shown schematically with only one of them, or only one of them is labeled. Herein, "a" means not only "only this one" but also "more than one" case.
It should be further understood that the term "and/or" as used in the present specification and the appended claims refers to any and all possible combinations of one or more of the associated listed items, and includes such combinations.
In this context, it should be noted that the terms "mounted," "connected," and "coupled" are to be construed broadly, and may indicate, unless explicitly stated or limited otherwise, a fixed, detachable or integral connection; a mechanical or electrical connection; a direct connection or an indirect connection through an intermediate medium; or communication between two elements. The specific meaning of the above terms in the present application will be understood in specific cases by those of ordinary skill in the art.
In addition, in the description of the present application, the terms "first," "second," and the like are used merely to distinguish between descriptions and are not to be construed as indicating or implying relative importance.
The application provides a low-illumination image processing method for a nacelle, comprising: acquiring image data through multiple camera channels; judging whether brightness is below a threshold and enhancing darker images with a low-illumination enhancement deep network; feeding the brightness-enhanced images into a super-resolution enhancement deep network to sharpen target contours; feeding the contour- and brightness-enhanced images into a recognition deep network to obtain the position coordinates and recognition boxes of targets; mapping the processed planar images onto hemispherical longitude-latitude coordinates according to preset key points to obtain a stitched panoramic image; and finally reading, over the autonomous network, the panoramic image range the ground control display device wishes to view, calculating the viewing angle, cropping the panorama accordingly, and transmitting it back to the ground control display device over the autonomous network.
In one embodiment, referring to fig. 1 of the specification, the application provides a low-illumination image processing method for a nacelle, which comprises the following steps:
step one, acquiring images of different angles through a plurality of cameras carried by the nacelle.
Image data from multiple angles within a hemispherical range are acquired through several cameras; seven or nine camera channels may be used. This embodiment uses nine: one camera at the top of the hemisphere, four at its mid-section, and four at its horizontal rim, so as to capture image data at all angles within the hemisphere. The nine cameras are each connected to the edge computing unit over a USB interface, and the acquired image data are transmitted to the edge computing unit for subsequent processing.
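The nine-channel arrangement above can be sketched as a set of pointing directions. The even 90-degree azimuth spacing and the 45-degree elevation of the mid-section ring are assumptions for illustration; the patent states only the one-four-four placement:

```python
def nine_camera_layout(mid_elevation=45.0):
    """Assumed pointing directions (azimuth deg, elevation deg) for the
    nine-camera arrangement: one zenith camera, four on a mid-section ring
    (elevation assumed 45 deg), four on the horizon ring."""
    directions = [(0.0, 90.0)]                     # camera at the hemisphere top
    for elevation in (mid_elevation, 0.0):         # mid-section ring, then horizontal ring
        directions += [(az, elevation) for az in (0.0, 90.0, 180.0, 270.0)]
    return directions
```

Together the nine directions tile the hemisphere so that every viewing angle falls inside at least one camera's field of view.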
Judging whether the brightness of the image is lower than a preset threshold value, if yes, performing low-illumination enhancement processing on the image, otherwise, entering the next step.
Preset logic in the edge computing unit judges whether the brightness of the image data from each camera channel is below a preset threshold, the threshold being determined by experimental testing. If the image data from a channel fall below the threshold, they are fed into a low-illumination enhancement deep network for enhancement. The low-illumination enhancement network used in this embodiment is the supervised-learning SMG-LLIE algorithm; its workflow is briefly described below:
(1) Establishing a data set: either a public low-light dataset (such as LOL or SID) or a self-collected custom low-light dataset can be used. Here the SMG-LLIE model is trained and validated on a self-collected custom dataset.
(2) Model training: the SMG-LLIE model is trained on the custom low-light dataset. During training, a pre-trained model can be used for transfer learning to accelerate training and improve model performance.
(3) Model verification and optimization: the model's performance is evaluated on the validation set and metrics such as error are computed. If necessary, the model is optimized, for example by adjusting the network architecture, loss function or optimizer.
(4) Deployment: the trained SMG-LLIE model is deployed on the edge computing unit.
(5) After deployment, the image data of each camera channel are read in real time, and the SMG-LLIE model performs low-illumination enhancement on any image whose brightness is below the preset threshold.
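The brightness test gating this step can be realised, for example, with a mean-luma measure. The Rec. 601 weights and the default threshold below are illustrative choices; the patent only says the threshold is determined experimentally:

```python
def mean_luma(pixels):
    """Mean luma of an iterable of (R, G, B) tuples, using Rec. 601 weights."""
    total = n = 0
    for r, g, b in pixels:
        total += 0.299 * r + 0.587 * g + 0.114 * b
        n += 1
    return total / n

def needs_low_light_enhancement(pixels, threshold=40.0):
    # threshold is a placeholder; per the description it is tuned by experiment
    return mean_luma(pixels) < threshold
```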
And thirdly, performing super-resolution enhancement processing on the image.
After low-illumination enhancement, once the brightness of the image data for every camera channel reaches the preset threshold, each channel's images are fed into the super-resolution enhancement deep network, which sharpens the edge information of targets in the image and facilitates the subsequent target recognition.
The super-resolution enhancement network is used similarly to the low-illumination enhancement network, differing in its training dataset, network architecture and final task. This embodiment uses the EDSR algorithm, trained on the DIV2K dataset.
And step four, identifying target information in the image.
After super-resolution enhancement, the target edge information in the image data of all camera channels is sharpened. The images of each channel are then fed into the target recognition deep network, which outputs the target information for each channel's images, including the position coordinates and recognition boxes of target persons or objects.
The target recognition deep network is likewise used similarly to the low-illumination enhancement network, differing in its training dataset, architecture and task. This embodiment uses YOLOV8, trained on a custom dataset; the main recognition targets are persons fallen into the water and unmanned aerial vehicles.
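In production this step would run a YOLOV8 model (for example through the `ultralytics` package). The sketch below shows only the surrounding logic implied by the description, turning raw detections into per-channel target records holding a position coordinate and a recognition box; the tuple layout of `detections` is a hypothetical stand-in for real detector output:

```python
def targets_from_detections(detections, channel_id):
    """detections: iterable of (label, confidence, (x1, y1, x2, y2)) tuples,
    a stand-in for real YOLOV8 output on one camera channel's image."""
    targets = []
    for label, conf, (x1, y1, x2, y2) in detections:
        targets.append({
            "channel": channel_id,
            "label": label,                              # e.g. person-in-water, drone
            "confidence": conf,
            "box": (x1, y1, x2, y2),                     # recognition box
            "position": ((x1 + x2) / 2, (y1 + y2) / 2),  # box centre as position coordinate
        })
    return targets
```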
And fifthly, splicing the images corresponding to the cameras to form a panoramic image.
After target recognition, key points are preset as needed in the planar image of each camera channel, and the hemispherical longitude-latitude coordinates of those key points are determined by matching their planar pixel coordinates across the channels. From the planar pixel coordinates and hemispherical longitude-latitude coordinates of the key points, a mapping rule between each channel's image and the hemispherical panorama is determined, and the remaining pixels of that channel's image are mapped onto the hemisphere one by one according to the rule. Once all pixels of every channel's image have been mapped from planar pixel coordinates to hemispherical longitude-latitude coordinates, panorama stitching is complete, and a hemispherical panoramic image is output for the ground control display device to call up and view.
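One simple instance of such a plane-to-hemisphere mapping, assuming an equirectangular-style relation in which image x maps linearly to longitude and image y to latitude, is shown below. The patent derives the actual rule from matched key points, which this sketch does not do; the linear relation and angle conventions are assumptions:

```python
def pixel_to_lon_lat(x, y, width, height, lon_span=360.0, lat_span=90.0):
    """Map pixel (x, y) in a width-by-height image to hemisphere coordinates:
    longitude in [0, lon_span) degrees and latitude in [0, lat_span] degrees,
    where latitude 0 is the horizon and lat_span the zenith. Row 0 is assumed
    to be the top of the image, i.e. the zenith."""
    lon = (x / width) * lon_span
    lat = (1.0 - y / height) * lat_span
    return lon, lat
```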
And step six, cutting out the panoramic image according to the view range of the ground control display equipment and returning the panoramic image.
Because the ground display device is of limited size and can show only one viewing angle of the panorama at a time, before transmission to the ground the panoramic image range that the ground control display device requests is read first, the viewing angle is calculated from it, and the stitched panorama is then cropped in real time according to that angle. Only the portion of the panorama within the device's viewing range is transmitted back, which avoids the transmission burden of the oversized full panorama, avoids unnecessary waste of communication resources, and improves the real-time performance and stability of the whole nacelle system.
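The crop-and-return step can be sketched as converting the requested view (centre azimuth plus horizontal field of view) into a column window on an equirectangular panorama, with wrap-around at 360 degrees. The function names and the linear angle-to-column relation are illustrative assumptions:

```python
def crop_panorama(panorama, center_az, fov, lon_span=360.0):
    """panorama: list of rows (each a list of pixels) spanning lon_span degrees.
    Returns the columns covering [center_az - fov/2, center_az + fov/2),
    wrapping around the seam at lon_span degrees."""
    width = len(panorama[0])
    start = int(((center_az - fov / 2) % lon_span) / lon_span * width)
    ncols = max(1, int(fov / lon_span * width))
    cols = [(start + i) % width for i in range(ncols)]
    return [[row[c] for c in cols] for row in panorama]
```

Only this window, rather than the whole panorama, would then be sent over the downlink.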
In one embodiment, the present application provides a low-light image processing apparatus for a nacelle, comprising:
the image acquisition module is connected with the multipath cameras carried by the nacelle and is used for acquiring multipath images acquired by the nacelle cameras;
the brightness comparison module is used for judging whether the brightness of the image is lower than a preset threshold value;
the low-illumination enhancement module is used for carrying out low-illumination enhancement processing on the image with the brightness lower than a preset threshold value;
the super-resolution enhancement module is used for performing super-resolution enhancement processing on the image;
the target identification module is used for identifying target information in the image;
the panoramic stitching module is used for stitching images corresponding to all paths of cameras into panoramic images;
and the image cutting module is in communication connection with the ground control display equipment through the autonomous network equipment and is used for cutting the panoramic image according to the viewing range of the ground control display equipment and returning the panoramic image.
In one embodiment, the panoramic pod system provided by the application comprises multiple camera channels and the low-illumination image processing device for the nacelle described above. Each camera is connected to the low-illumination image processing device through a USB interface; the device communicates with the ground control display equipment through autonomous network equipment and is deployed on an edge computing unit, an NVIDIA Jetson edge computing unit in this embodiment.
In one embodiment, the application provides an electronic device comprising a memory and a processor, the memory having stored thereon a computer program loaded and executed by the processor to implement a low-light image processing method for a pod as described above, the processor being a CPU, controller, microcontroller, microprocessor, or other data processing chip.
In one embodiment, the application provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the low-illumination image processing method for a nacelle described above. The aspects of the application, in essence or in the part contributing to the prior art, may be embodied as a software product stored in a storage medium and comprising instructions that cause a computer device (a personal computer, server or network device, etc.) to perform all or part of the steps of the methods described in the method embodiments of the application. The computer-readable storage medium includes a USB disk, removable hard disk, magnetic disk, optical disk, computer memory, Read-Only Memory (ROM), Random Access Memory (RAM), and other media capable of carrying computer program code.
In the foregoing embodiments, the descriptions of the embodiments are focused on, and the parts of a certain embodiment that are not described or depicted in detail may be referred to in the related descriptions of other embodiments.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
In the present disclosure, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to it. Without further limitation, an element introduced by the phrase "comprising a..." does not exclude the presence of other identical elements in the process, method, article or apparatus that comprises it.
It should be noted that the above embodiments can be freely combined as needed. The foregoing is merely a preferred embodiment of the present application and it should be noted that modifications and adaptations to those skilled in the art may be made without departing from the principles of the present application, which are intended to be comprehended within the scope of the present application.

Claims (10)

1. A low-illumination image processing method for a pod, comprising the steps of:
S100: acquiring images from different angles through a plurality of cameras carried by the pod;
S200: judging whether the brightness of the image is lower than a preset threshold; if so, executing S300, otherwise executing S400;
S300: performing low-illumination enhancement processing on the image, and returning to S200;
S400: performing super-resolution enhancement processing on the image;
S500: identifying target information in the image;
S600: stitching the images corresponding to all camera channels into a panoramic image;
S700: cropping the panoramic image according to the viewing range of the ground control display device and returning the cropped image.
2. The low-illumination image processing method for a pod according to claim 1, wherein the low-illumination enhancement processing of the image in S300 is enhancement processing, using an SMG-LLIE model, of images whose brightness is lower than the preset threshold.
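The SMG-LLIE model of claim 2 is a learned network whose weights are not part of the disclosure, so it cannot be reproduced here. As a stand-in, a simple gamma-curve brightening shows where such an enhancement step sits in the S300 slot; this is plainly not the patented model.

```python
import numpy as np

def brighten_gamma(image, gamma=0.5):
    """Gamma-curve brightening as a placeholder for the SMG-LLIE model.

    gamma < 1 lifts dark pixels more than bright ones. This is NOT the
    patented enhancement network, only a stand-in for step S300.
    """
    norm = image.astype(np.float32) / 255.0
    return (np.power(norm, gamma) * 255.0).astype(np.uint8)

dark = np.full((2, 2, 3), 16, dtype=np.uint8)
enhanced = brighten_gamma(dark)
```

Any enhancement plugged in here must preserve image shape so the re-test at S200 and the downstream S400 step see the same geometry.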
3. The low-illumination image processing method for a pod according to claim 1, wherein the super-resolution enhancement processing of the image in S400 uses an EDSR deep network model to enhance target edges in the image.
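The EDSR network of claim 3 is likewise a trained model not disclosed in the claims. The edge-enhancement effect it is credited with can be illustrated by classic unsharp masking, used here purely as a placeholder for step S400; the 3x3 box blur and the `amount` parameter are assumptions of this sketch.

```python
import numpy as np

def unsharp_mask(gray, amount=1.0):
    """Sharpen edges via unsharp masking: out = img + amount * (img - blur).

    A 3x3 box blur stands in for a proper low-pass filter. This placeholder
    does not reproduce the EDSR deep network named in the patent.
    """
    img = gray.astype(np.float32)
    pad = np.pad(img, 1, mode="edge")
    h, w = img.shape
    blur = sum(pad[i:i + h, j:j + w]
               for i in range(3) for j in range(3)) / 9.0
    return np.clip(img + amount * (img - blur), 0, 255).astype(np.uint8)

# a hard vertical edge: left half 0, right half 200
gray = np.zeros((4, 4), dtype=np.uint8)
gray[:, 2:] = 200
sharp = unsharp_mask(gray)
```

After sharpening, the bright side of the edge is pushed up and the dark side down, which is the qualitative effect the claim attributes to the S400 step.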
4. The low-illumination image processing method for a pod according to claim 1, wherein identifying the target information in the image in S500 specifically comprises: performing target recognition on each image with the YOLOv8 deep network model, and determining the position coordinates and recognition frame of each target in the image.
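Running YOLOv8 itself requires model weights, so only the output side of claim 4 is sketched here: stamping a target's recognition frame onto the image at the detector's position coordinates. The box coordinates below are hypothetical, not real detection output.

```python
import numpy as np

def draw_box(image, x1, y1, x2, y2, color=(255, 0, 0)):
    """Draw a 1-pixel recognition frame at box coordinates (x1, y1, x2, y2).

    In the S500 step these coordinates would come from a detector such as
    YOLOv8; the values used below are hypothetical placeholders.
    """
    out = image.copy()                 # leave the source frame untouched
    out[y1, x1:x2 + 1] = color         # top edge
    out[y2, x1:x2 + 1] = color         # bottom edge
    out[y1:y2 + 1, x1] = color         # left edge
    out[y1:y2 + 1, x2] = color         # right edge
    return out

frame = np.zeros((10, 10, 3), dtype=np.uint8)
boxed = draw_box(frame, 2, 2, 7, 7)
```

Copying before drawing matters here: the unannotated frame is still needed by the stitching step S600.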
5. The low-illumination image processing method for a pod according to claim 1, wherein stitching the images corresponding to all camera channels into a panoramic image in S600 specifically comprises: mapping all pixels in the image from each camera channel from plane pixel coordinates to hemispherical longitude-latitude coordinates to obtain a stitched panoramic image.
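The plane-to-hemisphere mapping of claim 5 can be sketched with an equidistant approximation in which angle varies linearly across the sensor. The camera yaw and fields of view are assumed known from pod calibration; a real stitcher would use the full intrinsic/extrinsic camera model rather than this linear shortcut.

```python
def pixel_to_lonlat(u, v, width, height, yaw_deg, fov_h_deg, fov_v_deg):
    """Map a plane pixel (u, v) to hemisphere longitude/latitude in degrees.

    Equidistant approximation: viewing angle is assumed linear in pixel
    position. yaw_deg and the two fields of view are assumed to come from
    pod calibration; they are not specified in the claim.
    """
    lon = yaw_deg + (u / width - 0.5) * fov_h_deg
    lat = (0.5 - v / height) * fov_v_deg
    return lon, lat

# centre pixel of a camera yawed 90 degrees maps to (90, 0)
lon, lat = pixel_to_lonlat(320, 240, 640, 480,
                           yaw_deg=90.0, fov_h_deg=60.0, fov_v_deg=45.0)
print(lon, lat)  # 90.0 0.0
```

Once every channel's pixels carry longitude-latitude coordinates, accumulating them into a shared equirectangular grid yields the stitched panorama.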
6. The low-illumination image processing method for a pod according to claim 1, wherein S700 comprises:
reading the panoramic image range to be viewed by the ground control display device, calculating the corresponding viewing angle, cropping the stitched panoramic image in real time according to the viewing angle, and returning the result.
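The real-time crop of claim 6 amounts to cutting a longitude window out of the panorama, taking care to wrap across the 360-degree seam. This sketch assumes an equirectangular panorama whose columns span 0-360 degrees left to right; the viewing angle would come from the ground control display request.

```python
import numpy as np

def crop_panorama(pano, center_lon_deg, fov_deg):
    """Cut the longitude window [center - fov/2, center + fov/2] out of an
    equirectangular panorama, wrapping around the 360-degree seam.

    Assumes columns span 0..360 degrees of longitude; center_lon_deg and
    fov_deg would be derived from the ground station's viewing request.
    """
    h, w = pano.shape[:2]
    px_per_deg = w / 360.0
    start = int(round((center_lon_deg - fov_deg / 2) * px_per_deg)) % w
    width = int(round(fov_deg * px_per_deg))
    cols = (start + np.arange(width)) % w   # modulo handles the seam wrap
    return pano[:, cols]

pano = np.arange(360).reshape(1, 360)       # 1 pixel per degree of longitude
view = crop_panorama(pano, center_lon_deg=0, fov_deg=60)
```

A 60-degree view centred on longitude 0 straddles the seam, pulling columns 330-359 followed by 0-29, which is exactly the wrap-around a naive slice would miss.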
7. A low-illumination image processing device for a pod, comprising:
an image acquisition module for acquiring multiple channels of images captured by the pod cameras;
a brightness comparison module for judging whether the brightness of an image is lower than a preset threshold;
a low-illumination enhancement module for performing low-illumination enhancement processing on images whose brightness is lower than the preset threshold;
a super-resolution enhancement module for performing super-resolution enhancement processing on the image;
a target identification module for identifying target information in the image;
a panoramic stitching module for stitching the images corresponding to all camera channels into a panoramic image;
and an image cropping module for cropping the panoramic image according to the viewing range of the ground control display device and returning the cropped image.
8. A panoramic pod system, comprising a plurality of cameras and the low-illumination image processing device for a pod of claim 7.
9. An electronic device comprising a memory and a processor, the memory storing a computer program that, when loaded and executed by the processor, implements the low-illumination image processing method for a pod of any one of claims 1 to 6.
10. A computer-readable storage medium storing a computer program which, when executed by a processor, implements the low-illumination image processing method for a pod of any one of claims 1 to 6.
CN202310927806.4A 2023-07-27 2023-07-27 Low-illumination image processing method and device for pod and panoramic pod system Pending CN116980540A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310927806.4A CN116980540A (en) 2023-07-27 2023-07-27 Low-illumination image processing method and device for pod and panoramic pod system


Publications (1)

Publication Number Publication Date
CN116980540A true CN116980540A (en) 2023-10-31

Family

ID=88470789

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310927806.4A Pending CN116980540A (en) 2023-07-27 2023-07-27 Low-illumination image processing method and device for pod and panoramic pod system

Country Status (1)

Country Link
CN (1) CN116980540A (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1985266A (en) * 2004-07-26 2007-06-20 奥普提克斯晶硅有限公司 Panoramic vision system and method
JP2014150352A (en) * 2013-01-31 2014-08-21 Nippon Telegr & Teleph Corp <Ntt> Panorama video information reproduction method, panorama video information reproduction system and program
CN113079325A (en) * 2021-03-18 2021-07-06 北京拙河科技有限公司 Method, apparatus, medium, and device for imaging billions of pixels under dim light conditions
CN115272441A (en) * 2022-06-14 2022-11-01 浙江未来技术研究院(嘉兴) Unstructured high-resolution panoramic depth solving method and sensing device
CN115278068A (en) * 2022-07-20 2022-11-01 重庆长安汽车股份有限公司 Weak light enhancement method and device for vehicle-mounted 360-degree panoramic image system
CN116051428A (en) * 2023-03-31 2023-05-02 南京大学 Deep learning-based combined denoising and superdivision low-illumination image enhancement method



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination