CN118301286A - Image display method, terminal and computer readable storage medium - Google Patents


Info

Publication number
CN118301286A
Authority
CN
China
Prior art keywords
image
panoramic
information
target
display device
Prior art date
Legal status
Pending
Application number
CN202410318200.5A
Other languages
Chinese (zh)
Inventor
崔婵婕
李乾坤
Current Assignee
Zhejiang Dahua Technology Co Ltd
Original Assignee
Zhejiang Dahua Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Zhejiang Dahua Technology Co Ltd filed Critical Zhejiang Dahua Technology Co Ltd
Priority to CN202410318200.5A priority Critical patent/CN118301286A/en
Publication of CN118301286A publication Critical patent/CN118301286A/en
Pending legal-status Critical Current

Landscapes

  • Processing Or Creating Images (AREA)

Abstract

The application provides an image display method, a terminal and a computer readable storage medium. The image display method is suitable for a server, the server is in communication connection with display equipment, and the display equipment comprises a display device. The image display method comprises: performing target detection on an acquired panoramic image of a target area to obtain detection information corresponding to the panoramic image; determining three-dimensional coordinate information of the intersection point of a sight vector of the display device on the surface of a spherical panoramic model based on acquired current visual field information of the display device and the spherical panoramic model of the target area, wherein the spherical panoramic model is rendered based on the panoramic image with the detection information; determining a target image containing the intersection point based on the three-dimensional coordinate information of the intersection point; and sending the target image to the display device for rendering and display. According to the application, by sending the target image to the display device for rendering and display, the monitoring space can be viewed in all directions while the target detection result is viewed, improving the viewing experience.

Description

Image display method, terminal and computer readable storage medium
Technical Field
The present invention relates to the field of image technologies, and in particular, to an image display method, a terminal, and a computer readable storage medium.
Background
The monitoring system is one of the most widely applied systems in security, and video monitoring is its mainstream form. Video monitoring has received more and more attention due to characteristics such as vivid images and rich content, and video technology has developed rapidly. Monitoring cameras come in numerous types, such as fixedly mounted cameras, rotatable and zoomable dome cameras, 360-degree panoramic cameras, fisheye cameras, and the like. With the development of deep learning, algorithms such as target detection, target tracking, semantic segmentation and change detection are widely applied in the security monitoring field to scenes such as intrusion target detection, illegal building detection and traffic violation detection. The performance of these algorithms is closely related to the computing capacity of the graphics card, while front-end monitoring cameras and monitoring display equipment are lightweight devices whose configured graphics cards generally have insufficient computing capacity.
Disclosure of Invention
The invention mainly provides an image display method, a terminal and a computer readable storage medium, solving the problem in the prior art that a front-end monitoring camera and monitoring display equipment cannot process images while viewing the monitoring space in all directions.
In order to solve the technical problems, the first technical scheme adopted by the invention is as follows: there is provided an image display method, the image display method being adapted to a server, the server being in communication with a display device, the display device comprising a display means, the image display method comprising:
Performing target detection on the panoramic image of the obtained target area to obtain detection information corresponding to the panoramic image;
determining three-dimensional coordinate information of an intersection point of a sight vector of the display device on the surface of the spherical panoramic model based on the acquired current visual field information of the display device and the spherical panoramic model of the target area; the sphere panoramic model is generated by rendering based on the panoramic image with the detection information;
Determining a target image containing the intersection point based on the three-dimensional coordinate information of the intersection point;
and sending the target image to a display device for rendering and displaying.
Wherein determining three-dimensional coordinate information of an intersection of a line-of-sight vector of the display device on a surface of the sphere panoramic model based on the acquired current field-of-view information of the display device and the sphere panoramic model of the target area, comprises:
Determining a ray equation corresponding to the current visual field information based on the current visual field information of the display device; the current visual field information of the display device comprises visual field angle information, position information and orientation information corresponding to left eyes and right eyes respectively;
Determining a spherical equation of the spherical panoramic model based on attribute information of the spherical panoramic model; the attribute information of the sphere panoramic model comprises sphere radius;
three-dimensional coordinate information of an intersection of a line-of-sight vector of the display device on a surface of the sphere panoramic model is determined based on the ray equation and the spherical equation.
Wherein determining three-dimensional coordinate information of an intersection of a line-of-sight vector of the display device on a surface of the sphere panoramic model based on the ray equation and the spherical equation, comprises:
determining three-dimensional coordinate information of a candidate intersection point of a sight line vector of the display device on the surface of the spherical panoramic model based on the ray equation and the spherical equation;
determining horizontal angle information and vertical angle information corresponding to each candidate intersection point based on three-dimensional coordinate information of the candidate intersection point on the surface of the spherical panoramic model;
And selecting a candidate intersection point with the horizontal angle information in the horizontal display view angle range and the vertical angle information in the vertical display view angle range as an intersection point.
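The candidate-intersection selection described above can be sketched as follows. This is a hypothetical Python illustration, not from the patent: the angle conventions are assumptions, with the horizontal angle taken as the azimuth in [0, 2π) and the vertical angle as the elevation in [-π/2, π/2].

```python
import math

def point_to_angles(x, y, z):
    """Convert a point on the sphere surface to a (horizontal, vertical)
    angle pair in radians (assumed azimuth/elevation convention)."""
    h = math.atan2(z, x) % (2 * math.pi)   # horizontal angle, 0..2*pi
    v = math.atan2(y, math.hypot(x, z))    # vertical angle, -pi/2..pi/2
    return h, v

def select_intersection(candidates, h_range, v_range):
    """Keep only candidates whose angles fall inside the horizontal and
    vertical display view-angle ranges; return the first match."""
    for x, y, z in candidates:
        h, v = point_to_angles(x, y, z)
        if h_range[0] <= h <= h_range[1] and v_range[0] <= v <= v_range[1]:
            return (x, y, z)
    return None
```

Of the two candidate points a sight ray produces, this filter discards the one that lies outside the currently displayed portion of the sphere.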
Wherein determining and rendering a target image containing the intersection point based on the three-dimensional coordinate information of the intersection point includes:
determining pixel information of the intersection point in the panoramic image based on the three-dimensional coordinate information of the intersection point;
determining control parameters for displaying a target image containing the intersection point based on the pixel information of the intersection point in the panoramic image;
and adjusting the target acquisition device based on the control parameters, and acquiring the target image displayed by the target acquisition device.
Wherein the pixel information includes pixel point coordinates;
determining pixel information of the intersection in the panoramic image based on the three-dimensional coordinate information of the intersection, comprising:
Determining horizontal angle information and vertical angle information corresponding to the intersection points based on three-dimensional coordinate information of the intersection points on the surface of the spherical panoramic model;
and determining pixel point coordinates of the intersection point on the panoramic image of the target area based on the horizontal angle information and the vertical angle information corresponding to the intersection point, the horizontal display view angle range and the vertical display view angle range and the resolution of the panoramic image.
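The mapping from the intersection point's angles to pixel coordinates can be illustrated with a minimal sketch. This is an assumption rather than the patent's exact formula: it treats the panorama as an equirectangular image whose width spans the horizontal display view angle range and whose height spans the vertical range.

```python
import math

def angles_to_pixel(h_angle, v_angle, h_range, v_range, width, height):
    """Map a (horizontal, vertical) angle pair on the sphere surface to
    pixel coordinates on an equirectangular panorama of the given
    resolution. h_range/v_range are (min, max) view-angle ranges in
    radians. Returns integer (u, v), clamped to the image bounds."""
    h_min, h_max = h_range
    v_min, v_max = v_range
    u = (h_angle - h_min) / (h_max - h_min) * (width - 1)
    v = (v_angle - v_min) / (v_max - v_min) * (height - 1)
    u = min(max(int(round(u)), 0), width - 1)
    v = min(max(int(round(v)), 0), height - 1)
    return u, v
```

For a full panorama the ranges would be (0, 2π) horizontally and (-π/2, π/2) vertically; the patent's ranges come from the panorama acquisition device's field of view.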
Wherein determining the control parameters for displaying the target image containing the intersection point based on the pixel information of the intersection point in the panoramic image includes:
searching in a preset relation table based on the pixel point coordinates of the intersection point on the panoramic image, wherein the preset relation table comprises a plurality of preset coordinates, and each preset coordinate is provided with a corresponding preset operation parameter;
and taking the preset operation parameter corresponding to the preset coordinate matched with the pixel point coordinates as the control parameter of the target acquisition device for acquiring the target image, wherein the control parameters comprise a yaw angle and a pitch angle, and the target image is an enlarged image of a local area in the panoramic image with the intersection point as the center point.
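The relation-table lookup can be sketched as follows. The matching rule is not specified in the patent, so this hypothetical sketch matches the nearest preset coordinate within a pixel tolerance:

```python
def lookup_control_params(px, py, preset_table, tol=20):
    """Return the pan/tilt control parameters of the preset coordinate
    closest to pixel (px, py), or None if no preset lies within `tol`
    pixels. preset_table: {(x, y): {"yaw": deg, "pitch": deg}, ...}."""
    best, best_d2 = None, tol * tol
    for (x, y), params in preset_table.items():
        d2 = (x - px) ** 2 + (y - py) ** 2
        if d2 <= best_d2:
            best, best_d2 = params, d2
    return best
```

The returned yaw/pitch pair would then be sent to the target acquisition device as its control command.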
Wherein the display device further comprises a handle;
adjusting the target acquisition device based on the control parameter, and acquiring a target image displayed by the target acquisition device, comprising:
the handle is used for adjusting the rotation of the target acquisition device based on the control parameters, and a target image is displayed;
and performing magnification adjustment and/or focusing on the target image through the handle.
In order to solve the technical problems, a second technical scheme adopted by the invention is as follows: there is provided an image display method, the image display method being adapted to a display device, the display device being communicatively connected to a server, the display device including a display means, the image display method comprising:
receiving and rendering the target image sent by the server;
Displaying the rendered target image and the spherical panoramic model; the target image is acquired by the image display method described above.
The image display method further comprises the following steps:
receiving a panoramic image and detection information corresponding to the panoramic image, which are sent by a server;
And carrying out three-dimensional rendering on the panoramic image with the detection information to obtain a spherical panoramic model.
The method for three-dimensionally rendering the panoramic image with the detection information to obtain a spherical panoramic model comprises the following steps:
Pre-constructing an initial sphere model, wherein the surface of the initial sphere model is provided with a plurality of vertex data and texture data corresponding to the vertex data;
A sphere panoramic model is generated based on the vertex data, texture data, and the panoramic image with the detection information on the initial sphere model.
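Generating the vertex data and matching texture data of the initial sphere model can be sketched as a standard UV-sphere construction. The tessellation is an assumption, as the patent does not specify how the sphere surface is subdivided:

```python
import math

def build_sphere_mesh(radius=1.0, stacks=16, slices=32):
    """Generate vertex positions and parallel UV texture coordinates for
    a UV sphere, suitable for mapping an equirectangular panorama onto
    the sphere surface. Returns (vertices, texcoords)."""
    vertices, texcoords = [], []
    for i in range(stacks + 1):
        phi = math.pi * i / stacks            # polar angle, 0..pi
        for j in range(slices + 1):
            theta = 2 * math.pi * j / slices  # azimuth, 0..2*pi
            x = radius * math.sin(phi) * math.cos(theta)
            y = radius * math.cos(phi)
            z = radius * math.sin(phi) * math.sin(theta)
            vertices.append((x, y, z))
            texcoords.append((j / slices, i / stacks))
    return vertices, texcoords
```

Rendering then binds the panoramic image (with its detection overlays) as the texture sampled at these UV coordinates.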
Wherein, carry out three-dimensional rendering to the panoramic image that has detection information, obtain spheroid panoramic model, still include:
Determining a horizontal display view angle range and a vertical display view angle range corresponding to the spherical panoramic model based on current visual field information of a panoramic acquisition device for acquiring panoramic images of a target area;
and displaying the images of the surface of the spherical panoramic model in the horizontal display view angle range and the vertical display view angle range.
In order to solve the technical problems, a third technical scheme adopted by the invention is as follows: there is provided an image display apparatus including a server in communication with a display device including a display apparatus, the image display apparatus including:
The detection module is used for carrying out target detection on the panoramic image of the acquired target area to obtain detection information corresponding to the panoramic image;
The analysis module is used for determining three-dimensional coordinate information of an intersection point of a sight vector of the display device on the surface of the spherical panoramic model based on the acquired current visual field information of the display device and the spherical panoramic model of the target area; the sphere panoramic model is generated by rendering based on the panoramic image with the detection information;
A determining module for determining a target image containing the intersection point based on the three-dimensional coordinate information of the intersection point;
And the sending module is used for sending the target image to the display device for rendering and displaying.
In order to solve the technical problems, a fourth technical scheme adopted by the invention is as follows: there is provided an image display apparatus including a display device communicatively connected to a server, the display device including the display apparatus, the image display apparatus including:
the rendering module is used for receiving the target image sent by the server and rendering the target image;
the display module is used for displaying the rendered target image and the spherical panoramic model; the target image is acquired by the image display method described above.
In order to solve the technical problems, a fifth technical scheme adopted by the invention is as follows: there is provided a terminal comprising a memory, a processor and a computer program stored in the memory and running on the processor, the processor being adapted to execute program data to carry out the steps of the image display method as described above.
In order to solve the technical problems, a sixth technical scheme adopted by the invention is as follows: there is provided a computer readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps in the image display method as described above.
The beneficial effects of the application are as follows: different from the prior art, an image display method, a terminal and a computer readable storage medium are provided. The image display method is suitable for a server, the server is in communication connection with display equipment, and the display equipment comprises a display device. The image display method comprises: performing target detection on an acquired panoramic image of a target area to obtain detection information corresponding to the panoramic image; determining three-dimensional coordinate information of the intersection point of a sight vector of the display device on the surface of a spherical panoramic model based on acquired current visual field information of the display device and the spherical panoramic model of the target area, wherein the spherical panoramic model is rendered based on the panoramic image with the detection information; determining a target image containing the intersection point based on the three-dimensional coordinate information of the intersection point; and sending the target image to the display device for rendering and display. According to the method, the server is in communication connection with the display equipment; after the server performs target detection on the acquired panoramic image, the three-dimensional coordinate information of the intersection point of the sight vector of the display device on the surface of the spherical panoramic model is determined based on the current visual field information of the display device and the spherical panoramic model of the target area, the target image is further determined, and the target image is sent to the display device for rendering and display, so that the target detection result can be viewed while the monitoring space is viewed in all directions, improving the viewing experience.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings required for the description of the embodiments will be briefly described below, and it is apparent that the drawings in the following description are only some embodiments of the present invention, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flowchart of an embodiment of an image display method according to the present invention;
FIG. 2 is a flowchart illustrating a step S2 of the image display method of FIG. 1 according to an embodiment;
FIG. 3 is a flowchart illustrating an embodiment of step S23 in the image display method of FIG. 2;
FIG. 4 is a flowchart illustrating a step S3 of the image display method of FIG. 1 according to an embodiment;
FIG. 5 is a flowchart illustrating an embodiment of step S31 in the image display method of FIG. 4;
FIG. 6 is a flowchart illustrating an embodiment of step S32 in the image display method of FIG. 4;
FIG. 7 is a flowchart of another embodiment of an image display method according to the present invention;
FIG. 8 is a flowchart of a method for obtaining a spherical panorama model according to an embodiment of the present invention;
FIG. 9 is a flowchart illustrating a step S502 of the method for obtaining a spherical panorama model provided in FIG. 8 according to an embodiment;
FIG. 10 is a schematic diagram of a frame of an embodiment of an image display device according to the present invention;
FIG. 11 is a schematic diagram of a frame of another embodiment of an image display device provided by the present invention;
FIG. 12 is a schematic diagram of a frame of an embodiment of a terminal provided by the present invention;
FIG. 13 is a schematic diagram of a computer-readable storage medium according to an embodiment of the present invention.
Detailed Description
The following describes embodiments of the present application in detail with reference to the drawings.
In the following description, for purposes of explanation and not limitation, specific details are set forth such as the particular system architecture, interfaces, techniques, etc., in order to provide a thorough understanding of the present application.
The term "and/or" is herein merely an association relationship describing an associated object, meaning that there may be three relationships, e.g., a and/or B, may represent: a exists alone, A and B exist together, and B exists alone. In addition, the character "/" herein generally indicates that the front and rear associated objects are an "or" relationship. Further, "a plurality" herein means two or more than two.
In order to enable those skilled in the art to better understand the technical scheme of the present invention, a method for displaying an image provided by the present invention is described in further detail below with reference to the accompanying drawings and the detailed description.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used herein is for the purpose of describing embodiments of the application only and is not intended to be limiting of the application.
With the rapid development of computer, network, image processing and transmission technologies, virtual reality (VR) technology is becoming mature and its field of application is expanding. However, no monitoring device combined with virtual reality technology exists at present, so the application provides an image display method that combines a VR head-mounted device with security equipment to view real-scene monitoring in real time.
Referring to fig. 1, fig. 1 is a flowchart illustrating an embodiment of an image display method according to the present invention.
In this embodiment, an image display method is provided, where the image display method is applicable to a server, and the server is communicatively connected to a display device, and the display device includes a display apparatus, and the image display method includes the following steps.
The image display method provided by the embodiment can be suitable for monitoring the field space, and the method is executed by a server. The server in this embodiment at least includes a computing graphics card.
The image display method provided by the embodiment of the application is described below by taking a server as an execution subject.
S1: and carrying out target detection on the panoramic image of the acquired target area to obtain detection information corresponding to the panoramic image.
S2: determining three-dimensional coordinate information of an intersection point of a sight vector of the display device on the surface of the spherical panoramic model based on the acquired current visual field information of the display device and the spherical panoramic model of the target area; the sphere panoramic model is rendered based on the panoramic image with the detection information.
S3: and determining and rendering a target image containing the intersection point based on the three-dimensional coordinate information of the intersection point.
S4: and sending the rendered target image and the spherical panoramic model to a display device for display.
By the image display method provided by this embodiment, a user can watch, in real time in the display device, the images acquired by remote monitoring panoramic cameras, dome cameras, fisheye cameras and the like, can superimpose the detection results of the panoramic image in the display device, and can control all directions of the target area in real time through the display device. This embodiment is suitable for rendering and display on any front-end display equipment, and is especially suitable for scenes in which a panoramic camera and dome cameras are installed together, with no limit on the number of dome cameras. The image display method of this embodiment is suitable for watching large-scale games, and has great advantages for real-time monitoring in person-dense scenes with numerous monitoring devices, such as railway stations, airports and malls. Meanwhile, the intelligent algorithm module of the server helps users quickly locate key areas, greatly improving monitoring efficiency.
In one embodiment, an image display system is provided, the image display system including a camera device, a server, and a display device, the camera device and the display device being communicatively connected to the server to enable information interaction. The camera device comprises a panorama acquisition device and a target acquisition device, and the display device comprises a display device and a handle. Wherein the number of the handles is two.
The server has two functions: on one hand, it is responsible for connecting the camera equipment to acquire a real-time pixel stream, decoding the stream into images, running a deep learning algorithm on the images to obtain corresponding algorithm results, and re-encoding the decoded images together with the algorithm results for stream pushing; on the other hand, it is responsible for receiving the state information of the display device and the handle sent by the display equipment and, after calibration, sending a control command for the target acquisition device to the target acquisition device.
The display equipment has two functions, on one hand, the display equipment is responsible for receiving the pixel stream pushed by the server, decoding the pixel stream to obtain an image and an algorithm result, and rendering the image and the algorithm result in real time; on the other hand, the pose of the VR head display and the state of the handle are required to be transmitted to the server in real time through socket communication.
The camera equipment has two functions: on one hand, acquiring real-time images and transmitting the real-time pixel stream to the server through an interface; on the other hand, receiving server commands to maneuver the target acquisition device to change its pose.
In this embodiment, a socket is used to establish a server, and the display device and the camera device are respectively bound with the server, so that communication connection between the display device and the server and communication connection between the server and the camera device are further realized.
In this embodiment, the display device is a VR device. Interaction with the camera equipment may use data development interfaces such as the NetSDK and PlaySDK; the adopted display device is a PICO Neo3, and interaction with the display device uses the standard display device development interface, the OpenXR SDK. The camera equipment login function is mainly realized through the CLIENT_LoginWithHighLevelSecurity interface in the NetSDK; the IP address, port number, user name and password of the equipment are required for login, and a login ID is returned after login succeeds. Devices with the same IP address only need to log in once and return their respective video streams from different channel numbers.
A structure Message is defined for information transmission according to the types of information transmitted between the VR device and the server. The field message_type represents the information transmission type and mainly comprises keys, the joystick state, and the head-mounted-display view. The joystick state corresponds to an XrVector2f value consisting of x and y components, and the head-mounted-display view corresponds to an XrQuaternionf and an XrVector3f, which respectively define the orientations and positions of the left eye and the right eye.
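The Message structure can be sketched in Python as follows. This is a hypothetical illustration: the patent names only message_type, keys, the joystick state and the head-display view, so the field layout and the JSON serialization (chosen here purely for readability) are assumptions rather than the actual wire format.

```python
from dataclasses import dataclass
import json

@dataclass
class Pose:
    """Orientation as a unit quaternion (x, y, z, w) plus a position (x, y, z)."""
    orientation: tuple = (0.0, 0.0, 0.0, 1.0)
    position: tuple = (0.0, 0.0, 0.0)

@dataclass
class Message:
    message_type: str                # "keys", "joystick" or "view" (assumed tags)
    keys: int = 0                    # bitmask of pressed handle keys
    joystick: tuple = (0.0, 0.0)     # joystick x/y deflection
    views: tuple = (Pose(), Pose())  # left-eye and right-eye poses

    def to_json(self) -> str:
        """Serialize for socket transmission (JSON used here for readability)."""
        return json.dumps({
            "message_type": self.message_type,
            "keys": self.keys,
            "joystick": list(self.joystick),
            "views": [{"orientation": list(p.orientation),
                       "position": list(p.position)} for p in self.views],
        })
```

On the server side, the received views would feed the sight-ray computation of step S2, while keys and joystick state drive the handle-based camera control.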
In this embodiment, the panorama acquisition device may be a bullet (box) camera, and the target acquisition device may be a dome camera. The panorama acquisition device and the target acquisition device are communicatively connected to the server, so that the server can receive the images acquired by each and process them based on a preset algorithm loaded on the server. The handle is communicatively connected to the target acquisition device, so that the handle can control the size and center position of the image displayed by the target acquisition device.
Specifically, in step S1, the panoramic image of the obtained target area is subjected to target detection, and specific embodiments of obtaining detection information corresponding to the panoramic image are as follows.
And acquiring the panoramic image of the target area through a panoramic acquisition device, and sending the acquired panoramic image to a server by the panoramic acquisition device.
In a specific embodiment, real-time video data acquisition is performed on the spatial position of the target area through the panorama acquisition device to obtain a panorama image of the target area. Wherein, panorama acquisition device can be 360 degrees panoramic camera. The panorama acquisition device transmits real-time video data to the server through a PlaySDK video playing function. And setting a decoding callback function and a decoding mode when the server receives the panoramic image transmitted by the panoramic acquisition device. Since the server has a graphics card, each frame of panoramic image in the video data can be decoded by the hardware of the server, and the output panoramic image is in YUV420 format. The resolution of the panoramic image is (Width, height).
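Since the decoded output is in YUV420 format, the per-frame buffer size follows directly from the plane layout. This is a general property of the I420 pixel format rather than something specific to the patent:

```python
def yuv420_frame_bytes(width, height):
    """Byte size of one YUV420 (I420) frame: a full-resolution Y plane
    plus quarter-resolution U and V chroma planes (1.5 bytes/pixel)."""
    return width * height * 3 // 2
```

For example, a 1920x1080 panoramic frame occupies about 3.1 MB before encoding, which is why the server re-encodes frames before pushing the stream.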
In an embodiment, pictures of the target area are acquired by image acquisition devices installed at different positions in the target area and sent to the server. Each frame contains at least one marker, and each marker is provided with a plumb line; the pictures are spliced according to the plumb lines arranged on the markers, so that a panoramic image of the target area is obtained. Using plumb lines can reduce splicing errors. This is only an example of shooting images to be spliced; in practical application, a corresponding video acquisition device and shooting angle are selected.
After the panoramic image is received by the server, detecting the panoramic image based on a preset algorithm loaded on the server, and obtaining detection information corresponding to the panoramic image. The preset algorithm may be a semantic segmentation algorithm, a target detection algorithm, etc., and the detection information may be a detection frame containing a preset target.
The server encodes the panoramic image using FFmpeg and packages the detection information into the encoded frames of the panoramic image. The information is sent by the server to the display equipment in RTSP format, and the encoding mode is hardware encoding.
The server sends the coded frames of the panoramic image to the display device, and the display device decodes the coded frames of the panoramic image to obtain the panoramic image and detection information corresponding to the panoramic image.
The display device performs three-dimensional rendering based on the detection information of the panoramic image to obtain a spherical panoramic model.
The server provided in the embodiment reduces the computational power dependence of the whole scheme on the front-end camera equipment and the VR equipment while increasing the monitoring force. Meanwhile, the server is expandable, and if multiple intelligent algorithm effects are required to be overlapped, only the server group is required to be expanded.
Specifically, the specific embodiment of determining three-dimensional coordinate information of an intersection of the line-of-sight vector of the display device on the surface of the spherical panorama model based on the acquired current field-of-view information of the display device and the spherical panorama model of the target area in step S2 is as follows.
In this embodiment, the sphere panoramic model is generated by rendering by the display device based on the panoramic image having the detection information.
In an embodiment, determining three-dimensional coordinate information of an intersection of a line-of-sight vector on a surface of a sphere panoramic model specifically includes the following steps.
Referring to fig. 2, fig. 2 is a flowchart of an embodiment of step S2 in the image display method provided in fig. 1.
S21: and determining a ray equation corresponding to the sight line based on the current visual field information of the display device.
Specifically, the current field of view information of the display device includes field angle information, position information, and orientation information corresponding to the left eye and the right eye, respectively. The view angle information consists of four angle components, namely an upper angle component, a lower angle component, a left angle component and a right angle component; the position information consists of three components of x, y and z; the orientation information consists of x, y, z, w four components, which are unit quaternions.
The start position of the line of sight is determined as the midpoint of the left-eye position information and the right-eye position information:

Position = (View[0].Position + View[1].Position) / 2 (Equation 1)

wherein View[0].Position represents the position information of the left eye and View[1].Position represents the position information of the right eye.
The direction information of the line of sight is determined from the orientation information of the left eye and the orientation information of the right eye, wherein View[0].Orientation represents the orientation information of the left eye and View[1].Orientation represents the orientation information of the right eye.
Specifically, the ray equation P(t) corresponding to the line of sight is determined from the position information and the orientation information corresponding to the line of sight:

P(t) = Position + D · t (Equation 5)

wherein D is the direction vector of the line of sight, with three components dirX, dirY and dirZ, and t is the ray parameter.
S22: determining a spherical equation of the spherical panoramic model based on attribute information of the spherical panoramic model; the attribute information of the sphere panorama model includes a sphere radius.
Specifically, a spherical equation for the spherical panoramic model is determined based on the radius of the spherical panoramic model.
x² + y² + z² = R² (Equation 6)
S23: three-dimensional coordinate information of an intersection of a line-of-sight vector of the display device on a surface of the sphere panoramic model is determined based on the ray equation and the spherical equation.
Referring to fig. 3, fig. 3 is a flowchart illustrating an embodiment of step S23 in the image display method provided in fig. 2.
S231: three-dimensional coordinate information of a candidate intersection of a line-of-sight vector of the display device on a surface of the sphere panoramic model is determined based on the ray equation and the spherical equation.
Specifically, substituting the ray equation into the spherical equation yields the quadratic equation a*t² + b*t + c = 0, whose solutions t = (-b ± √(b² - 4ac)) / (2a) give the three-dimensional coordinate information of the two candidate intersection points of the line-of-sight vector of the display device on the surface of the spherical panoramic model, wherein a = dirX*dirX + dirY*dirY + dirZ*dirZ; b = 2*(dirX*position.x + dirY*position.y + dirZ*position.z); c = position.x*position.x + position.y*position.y + position.z*position.z - R².
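The quadratic substitution of step S231 can be sketched as follows, using the a, b, c definitions above; the function name and the filtering of negative t values (points behind the viewer) are illustrative choices.

```python
import math

def ray_sphere_intersections(position, direction, radius):
    """Solve |Position + D*t|^2 = R^2 and return the candidate intersection
    points on the sphere surface that lie along the ray (t >= 0)."""
    px, py, pz = position
    dx, dy, dz = direction
    a = dx * dx + dy * dy + dz * dz
    b = 2 * (dx * px + dy * py + dz * pz)
    c = px * px + py * py + pz * pz - radius * radius
    disc = b * b - 4 * a * c
    if disc < 0:
        return []  # the sight line misses the sphere entirely
    ts = [(-b - math.sqrt(disc)) / (2 * a), (-b + math.sqrt(disc)) / (2 * a)]
    # Keep only points in front of the viewer along the sight direction.
    return [(px + dx * t, py + dy * t, pz + dz * t) for t in ts if t >= 0]
```

When the viewer sits inside the sphere (the usual VR case), one root is negative and one positive, so exactly one forward intersection remains.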
S232: and determining horizontal angle information and vertical angle information corresponding to each candidate intersection point based on the three-dimensional coordinate information of the candidate intersection point on the surface of the spherical panoramic model.
Specifically, the horizontal angle information and the vertical angle information corresponding to each candidate intersection point are determined from the three-dimensional coordinate information of the candidate intersection point on the surface of the spherical panoramic model using Equation 7 and Equation 8, so that the horizontal angle information and the vertical angle information corresponding to the two candidate intersection points can be calculated.
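A hedged sketch of steps S232 and S233: since Equations 7 and 8 are not reproduced in this text, a standard y-up spherical convention (azimuth via atan2, polar angle via acos) is assumed here, and the helper names are illustrative.

```python
import math

def sphere_angles(point, radius):
    """Map a 3-D point on the sphere surface to (hAngle, vAngle) in degrees,
    hAngle in [0, 360), vAngle in [0, 180]. Assumes a y-up convention
    (a stand-in for Equations 7 and 8, which this text does not reproduce)."""
    x, y, z = point
    h = math.degrees(math.atan2(z, x)) % 360.0
    v = math.degrees(math.acos(max(-1.0, min(1.0, y / radius))))
    return h, v

def in_display_range(h, v, h_range, v_range):
    """Step S233: keep a candidate intersection only if its angles fall inside
    both the horizontal and the vertical display view angle ranges."""
    return h_range[0] <= h <= h_range[1] and v_range[0] <= v <= v_range[1]
```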
S233: and selecting a candidate intersection point with the horizontal angle information in the horizontal display view angle range and the vertical angle information in the vertical display view angle range as an intersection point.
Specifically, in response to the horizontal angle information corresponding to the candidate intersection being within the horizontal display view angle range of the spherical panoramic model and the vertical angle information corresponding to the candidate intersection being within the vertical display view angle range of the spherical panoramic model, the candidate intersection is retained.
And deleting the candidate intersection point in response to the horizontal angle information corresponding to the candidate intersection point not being in the horizontal display view angle range of the spherical panoramic model and/or the vertical angle information corresponding to the candidate intersection point not being in the vertical display view angle range of the spherical panoramic model.
And obtaining three-dimensional coordinate information of the intersection point of the sight line and the surface of the spherical panoramic model through the steps.
Specifically, the specific embodiment of determining and rendering the target image including the intersection point based on the three-dimensional coordinate information of the intersection point in step S3 is as follows.
Referring to fig. 4, fig. 4 is a flowchart illustrating an embodiment of step S3 in the image display method provided in fig. 1.
S31: pixel information of the intersection in the panoramic image is determined based on the three-dimensional coordinate information of the intersection.
Referring to fig. 5, fig. 5 is a flowchart illustrating an embodiment of step S31 in the image display method provided in fig. 4.
S311: and determining horizontal angle information and vertical angle information corresponding to the intersection point based on the three-dimensional coordinate information of the intersection point on the surface of the spherical panoramic model.
Specifically, the horizontal angle information and the vertical angle information corresponding to the retained intersection point can be calculated by Equation 7 and Equation 8 above.
S312: and determining pixel point coordinates of the intersection point on the panoramic image of the target area based on the horizontal angle information and the vertical angle information corresponding to the intersection point, the horizontal display view angle range and the vertical display view angle range and the resolution of the panoramic image.
Specifically, the pixel point coordinates (Pix_x, Pix_y) of the intersection point on the panoramic image of the target area are determined based on the horizontal angle information and the vertical angle information corresponding to the intersection point, the horizontal display view angle range and the vertical display view angle range, and the resolution of the panoramic image.
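A hedged sketch of step S312: the exact mapping formula is not reproduced in this text, so a linear mapping of the display view angle ranges onto the image resolution is assumed here; the function name and rounding behaviour are illustrative.

```python
def angles_to_pixel(h_angle, v_angle, h_range, v_range, width, height):
    """Map (hAngle, vAngle) on the sphere surface to pixel coordinates
    (Pix_x, Pix_y) on the panoramic image. Assumes the display view angle
    ranges map linearly onto the image resolution (an assumption, since the
    text does not reproduce the formula)."""
    h_left, h_right = h_range
    v_top, v_bot = v_range
    pix_x = (h_angle - h_left) / (h_right - h_left) * (width - 1)
    pix_y = (v_angle - v_top) / (v_bot - v_top) * (height - 1)
    return round(pix_x), round(pix_y)
```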
S32: control parameters for displaying a target image including the cross point are determined based on pixel information of the cross point in the panoramic image.
Referring to fig. 6, fig. 6 is a flowchart illustrating an embodiment of step S32 in the image display method provided in fig. 4.
S321: searching in a preset relation table based on pixel point coordinates of the intersection points on the panoramic image; the preset relation table comprises a plurality of preset coordinates, and each preset coordinate is provided with a corresponding preset operation parameter.
Specifically, a preset relation table is pre-built, wherein the preset relation table comprises a plurality of preset coordinates, and each preset coordinate has a corresponding preset operation parameter. The preset coordinates represent coordinates of any pixel point in the panoramic image.
In this embodiment, each pixel point in the panoramic image acquired by the panoramic acquisition device has a corresponding preset coordinate, the preset coordinate is converted into a PTZ coordinate on the spherical panoramic model by using a pre-established target conversion relationship, and the pitch angle and heading angle of the target acquisition device are adjusted according to the converted PTZ coordinate, so that the target image acquired by the target acquisition device can take the pixel point corresponding to the preset coordinate as a center point. The pitch angle and the course angle of the target acquisition device corresponding to each preset coordinate are used as preset operation parameters of the preset coordinates.
Through the preset relation table constructed above, the course angle and the pitch angle of rotation required by taking the pixel point as a central point in the picture of the target acquisition device can be determined according to the coordinate position of each pixel point in the panoramic image.
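The preset relation table described above can be sketched as follows, assuming a hypothetical calibration function supplies the heading/pitch pair for each sampled pixel coordinate; the nearest-neighbour matching is one plausible reading of "the preset coordinate matched with the pixel point coordinate", and all names are illustrative.

```python
def build_preset_table(width, height, step, calibrate):
    """Pre-build the preset relation table: sample pixel coordinates of the
    panoramic image every `step` pixels and store the preset operation
    parameters (heading, pitch) supplied by a calibration function.
    `calibrate(x, y) -> (pan_deg, tilt_deg)` is a hypothetical stand-in for
    the pre-established panoramic-to-PTZ conversion relationship."""
    return {(x, y): calibrate(x, y)
            for x in range(0, width, step)
            for y in range(0, height, step)}

def lookup_control_params(table, pix_x, pix_y):
    """Step S321/S322: return the preset operation parameters of the preset
    coordinate closest to the given intersection pixel coordinate."""
    key = min(table, key=lambda k: (k[0] - pix_x) ** 2 + (k[1] - pix_y) ** 2)
    return table[key]
```

A table lookup of this kind replaces a per-request geometric conversion, which is why the text notes that it improves response speed.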
S322: taking a preset operation parameter corresponding to a preset coordinate matched with the pixel point coordinate as a control parameter of a target acquisition device for acquiring a target image, wherein the control parameter comprises a yaw angle and a pitch angle; the target image is an enlarged image of a local area in the panoramic image with the intersection as a center point.
Specifically, the pixel point coordinates are compared with the preset coordinates in the preset relation table to determine the preset coordinate matched with the pixel point coordinates, which improves the response speed.
And taking the preset operation parameter corresponding to the preset coordinate as a control parameter of a target acquisition device for acquiring the target image, so that the center point of the target image displayed in the target acquisition device regulated based on the control parameter is the pixel point coordinate.
In an embodiment, the preset operating parameters further include adjusting magnification and focal length.
S33: and adjusting the target acquisition device based on the control parameters, and acquiring a target image displayed by the target acquisition device.
Specifically, the server sends the control parameters to the target acquisition device so that the target acquisition device rotates to a corresponding angle based on the control parameters to display the target image.
And adjusting the target acquisition device based on the control parameters so as to enable the target acquisition device to rotate based on the yaw angle and the pitch angle, and enabling the center point of the target image displayed in the target acquisition device to be an intersection point.
And according to the coordinate position of the intersection in the panoramic image and the calibration results of the panoramic acquisition device and the target acquisition device, the target acquisition device is mobilized to rotate to the position of the focusing intersection of the target image.
In an embodiment, the display device further comprises a handle. The handle consists of a trigger key, a side key, a rocker, an APP/return key, a HOME key, an A/X key and a B/Y key.
In one embodiment, the OpenXR SDK may obtain the press and release state of each key, the press and release state and press value of the trigger key, the press and release state of the rocker, the rocker position, and the like. The A/X key and the B/Y key control the zoom function of the target acquisition device: pressing the B/Y key starts the target acquisition device (dome camera) increasing its magnification and releasing the B/Y key stops it; pressing the A/X key starts the target acquisition device decreasing its magnification and releasing the A/X key stops it.
Specifically, the rotation function of the target acquisition device is controlled by the rocker position. The rocker position obtained in real time consists of an x value in the horizontal direction and a y value in the vertical direction, and the previous direction state and the current direction state of the target acquisition device are recorded. In the rocker coordinate system, the positive x-axis points right and the positive y-axis points up, corresponding respectively to rightward and upward rotation of the target acquisition device. When the touch state of the rocker is acquired, the current direction state is assigned according to the current x and y values: when both x and y are equal to 0, the state is the default state and the target acquisition device is controlled to stop moving; the other states are defined as up, down, left, right, upper left, lower left, upper right and lower right according to the values of x and y. If the current direction state is consistent with the previous direction state, no operation is performed; if it is inconsistent, the motion-stop function of the previous state is called first, and then the motion-start function of the current state is called.
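The rocker state machine above can be sketched as follows; the direction names and the recorded command list (a stand-in for the real PTZ SDK start/stop calls) are illustrative.

```python
def stick_direction(x, y, dead_zone=0.0):
    """Classify the rocker (x, y) values into one of nine direction states."""
    if abs(x) <= dead_zone and abs(y) <= dead_zone:
        return "stop"  # default state: both axes at rest
    horiz = "right" if x > dead_zone else "left" if x < -dead_zone else ""
    vert = "up" if y > dead_zone else "down" if y < -dead_zone else ""
    return (vert + "-" + horiz).strip("-") or "stop"

class RockerController:
    """Issue stop/start commands only when the direction state changes."""
    def __init__(self):
        self.state = "stop"
        self.commands = []  # stand-in for real PTZ SDK calls

    def update(self, x, y):
        new_state = stick_direction(x, y)
        if new_state == self.state:
            return  # unchanged direction: no operation
        if self.state != "stop":
            self.commands.append("stop " + self.state)   # stop previous motion
        if new_state != "stop":
            self.commands.append("start " + new_state)   # start current motion
        self.state = new_state
```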
The display state of the target image is controlled by the trigger key of the handle. In the initial state, the target image is displayed in the upper right corner of the current field of view by default. When the trigger key is pressed, the display position of the target image gradually moves from the upper right corner to the center of the field of view within a given number of frames, and the rendering plane gradually enlarges; when the trigger key is pressed again, the display position gradually moves from the center of the field of view back to the upper right corner, and the rendering plane gradually shrinks. Specifically, the positions and scaling scales of the upper-right-corner pose and the center pose are predefined, and trigger key presses are recorded and accumulated in real time; when the number of presses is odd, the current drawing position and scaling scale are calculated from the two poses and the current frame number, and are transmitted to the rendering layer for drawing.
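The trigger-key animation can be sketched as follows; the corner pose, center pose and frame count are assumed placeholder values, not taken from the patent.

```python
# Assumed poses and frame count (placeholders, not from the patent).
CORNER = {"pos": (0.8, 0.8), "scale": 0.25}  # upper-right-corner pose
CENTER = {"pos": (0.0, 0.0), "scale": 1.0}   # center-of-view pose
FRAMES = 30                                  # animation length in frames

def draw_pose(press_count, frames_since_press):
    """Return the current drawing position and scaling scale for the
    rendering layer: odd press counts animate corner -> center, even
    press counts animate center -> corner."""
    t = min(frames_since_press, FRAMES) / FRAMES
    src, dst = (CORNER, CENTER) if press_count % 2 == 1 else (CENTER, CORNER)
    lerp = lambda a, b: a + (b - a) * t
    pos = tuple(lerp(a, b) for a, b in zip(src["pos"], dst["pos"]))
    return {"pos": pos, "scale": lerp(src["scale"], dst["scale"])}
```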
The control function of the target acquisition device is implemented mainly through the CLIENT_DHPTZControlEx interface in NetSDK: CLIENT_NET_API BOOL CALL_METHOD CLIENT_DHPTZControlEx(LLONG lLoginID, int nChannelID, DWORD dwPTZCommand, LONG lParam1, LONG lParam2, LONG lParam3, BOOL dwStop), wherein lLoginID represents the login ID of the target acquisition device, nChannelID represents the channel number of the target acquisition device, dwPTZCommand is the control command type of the target acquisition device, lParam1, lParam2 and lParam3 represent the parameters required by the different control commands, and dwStop represents the motion state of the target acquisition device.
The linkage function of the panoramic acquisition device and the target acquisition device is realized by transmitting dwPTZCommand as DH_EXTPTZ_EXACTGOTO, wherein lParam1 transmits the parameter P, lParam2 transmits the parameter T and lParam3 transmits the parameter Z. The linkage function of the panoramic acquisition device and the target acquisition device is gun-ball linkage, a mechanism for joint control and cooperative work of a gun-type (bullet) camera and a spherical (dome) camera through intelligent technology. This linkage function enables the two different types of image capturing apparatus to cooperate with each other according to preset rules or real-time trigger events. In linkage, once the spherical camera detects a specific behavior or abnormal event (such as intrusion detection or people counting), the system automatically adjusts the direction and focal length of the gun-type camera to accurately aim at and track the target so as to acquire a clearer and more detailed picture.
The rotation function of the target acquisition device is realized by transmitting dwPTZCommand as DH_PTZ_UP_CONTROL, DH_PTZ_DOWN_CONTROL, DH_PTZ_LEFT_CONTROL, DH_PTZ_RIGHT_CONTROL, DH_EXTPTZ_LEFTTOP, DH_EXTPTZ_LEFTDOWN, DH_EXTPTZ_RIGHTTOP and DH_EXTPTZ_RIGHTDOWN, which rotate the device upward, downward, leftward, rightward, toward the upper left, toward the lower left, toward the upper right and toward the lower right respectively. Rotation in a single direction only needs the parameter lParam2 to represent the running speed, while rotation in a composite direction additionally needs lParam1 to represent the running speed; for the dwStop parameter, 0 indicates starting the movement and 1 indicates stopping the movement.
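The dwPTZCommand dispatch above can be sketched as follows. The constant names are those given in the text, but here they are plain strings rather than the real NetSDK numeric constants, and ptz_rotate_args merely assembles the argument tuple that would be handed to CLIENT_DHPTZControlEx; it does not call the SDK.

```python
# Command-name tables taken from the text; values here are placeholder
# strings, not the real NetSDK numeric constants.
SINGLE_DIRECTION = {
    "up": "DH_PTZ_UP_CONTROL", "down": "DH_PTZ_DOWN_CONTROL",
    "left": "DH_PTZ_LEFT_CONTROL", "right": "DH_PTZ_RIGHT_CONTROL",
}
COMPOSITE_DIRECTION = {
    "up-left": "DH_EXTPTZ_LEFTTOP", "down-left": "DH_EXTPTZ_LEFTDOWN",
    "up-right": "DH_EXTPTZ_RIGHTTOP", "down-right": "DH_EXTPTZ_RIGHTDOWN",
}

def ptz_rotate_args(direction, speed, stop=False):
    """Build the (dwPTZCommand, lParam1, lParam2, lParam3, dwStop) tuple for
    a rotation command: single directions pass the speed in lParam2 only,
    composite directions also pass it in lParam1; dwStop 0 starts, 1 stops."""
    if direction in SINGLE_DIRECTION:
        return (SINGLE_DIRECTION[direction], 0, speed, 0, int(stop))
    if direction in COMPOSITE_DIRECTION:
        return (COMPOSITE_DIRECTION[direction], speed, speed, 0, int(stop))
    raise ValueError("unknown direction: " + direction)
```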
In one embodiment, the target image is displayed by adjusting the rotation of the target acquisition device by the handle based on the control parameter.
In one embodiment, magnification adjustment and/or focusing of the target image is performed by a handle.
Specifically, zoom-in and zoom-out of the target acquisition device are realized by transmitting dwPTZCommand as DH_PTZ_ZOOM_ADD_CONTROL and DH_PTZ_ZOOM_DEC_CONTROL respectively, with the parameter lParam2 representing the zoom speed; for dwStop, the parameter 0 indicates starting and the parameter 1 indicates stopping.
Different keys of the handle provided in this embodiment can respectively control the zoom and focus of the target acquisition device, and the rocker of the handle can control the rotation of the dome camera.
The image display method provided in this embodiment is applicable to a server, where the server is communicatively connected with a display device and the display device includes a display apparatus. The image display method includes: performing target detection on the acquired panoramic image of the target area to obtain detection information corresponding to the panoramic image; determining three-dimensional coordinate information of the intersection point of the line-of-sight vector of the display apparatus on the surface of the spherical panoramic model based on the acquired current field-of-view information of the display apparatus and the spherical panoramic model of the target area, wherein the spherical panoramic model is rendered based on the panoramic image with the detection information; determining a target image containing the intersection point based on the three-dimensional coordinate information of the intersection point; and sending the target image to the display apparatus for rendering and display. In this method, after the server performs target detection on the acquired panoramic image, the three-dimensional coordinate information of the intersection point of the line-of-sight vector of the display apparatus on the surface of the spherical panoramic model is determined based on the current field-of-view information of the display apparatus and the spherical panoramic model of the target area, the target image is then determined, and the target image is sent to the display apparatus for rendering and display, so that the target detection result can be viewed while the monitored space is viewed in all directions, improving the viewing experience.
Referring to fig. 7, fig. 7 is a flowchart illustrating an image display method according to another embodiment of the invention.
In this embodiment, an image display method is provided, the image display method is applicable to a display device, a server is communicatively connected to the display device, the display device includes a display apparatus, and the image display method includes the following steps.
The image display method provided by the embodiment can be suitable for monitoring the field space, and the method is executed by a display device, wherein the display device can be realized in a software and/or hardware mode, and the display device can be VR glasses, VR helmets, handheld display devices and the like worn by a user.
The image display method provided by the embodiment of the application is described below taking a display device as an execution subject as an example.
S5: and receiving the target image sent by the server and rendering.
S6: and displaying the rendered target image and the spherical panoramic model.
Specifically, the target image and the sphere panoramic model in the present embodiment are transmitted through the server in the image display method in the above-described embodiment.
In one embodiment, the image display method further includes the following steps.
Referring to fig. 8, fig. 8 is a flowchart illustrating an embodiment of a method for obtaining a spherical panorama model in an image display method according to the present invention.
S501: and receiving the panoramic image and detection information corresponding to the panoramic image sent by the server.
Specifically, after receiving the encoded frame of the panoramic image sent by the server, the display device decodes the encoded frame of the panoramic image to obtain the panoramic image and detection information of the panoramic image.
S502: and carrying out three-dimensional rendering on the panoramic image with the detection information to obtain a spherical panoramic model.
Referring to fig. 9, fig. 9 is a flowchart illustrating an embodiment of step S502 in the method for obtaining a spherical panorama model provided in fig. 8.
S5021: an initial sphere model is built in advance, and the surface of the initial sphere model is provided with a plurality of vertex data and texture data corresponding to the vertex data.
Specifically, the size information of the display screen of the display device is obtained, and then a virtual initial sphere model is established according to the size information, wherein the radius R of the initial sphere model can be smaller than or equal to the size information, so that the display screen of the display device can completely display the initial sphere model.
In order to provide a panoramic effect that can be viewed from any angle, up, down, left or right, the panoramic image generally needs to be attached to the surface of an initial sphere model and rendered within the display range during panoramic image rendering.
In this embodiment, a 360-degree panoramic rendering method is used to render the panoramic image onto the initial sphere model; in particular, the image rendering is performed with OpenGL ES.
In an embodiment, the surface of the initial sphere model is divided into a plurality of triangular patches, and a series of vertex coordinates are obtained by means of equal longitude and latitude division.
Specifically, the vertex coordinates of each degree (hAngle, vAngle) in the initial sphere model are determined. The value range of the horizontal direction angle hAngle of the initial sphere model is 0-360 degrees, and the value range of the vertical direction angle vAngle is 0-180 degrees.
The three-dimensional coordinates (x, y, z) of each vertex on the sphere surface of the initial sphere model are then calculated from the angles (hAngle, vAngle).
Panoramic images are typically two-dimensional images in which the positional information of each pixel in the image can be represented by its coordinates. Since the panoramic image is rendered and displayed in a manner of being attached to the surface of the initial sphere model, the vertex of each initial sphere model can correspond to a pixel point in the panoramic image, and the coordinate of the pixel point is generally called as the texture coordinate corresponding to the vertex of the corresponding sphere model.
The coordinates (tex_x, tex_y) corresponding to each vertex on the sphere surface of the initial sphere model are calculated in the panoramic image based on the following formulas, and used as the texture coordinates corresponding to each vertex.

tex_x = 1 - hAngle/360 (Equation 14)

tex_y = 1 - vAngle/180 (Equation 15)
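Step S5021's vertex and texture generation can be sketched as follows. Equations 9 to 13 for the vertex coordinates are not reproduced in this text, so a standard y-up spherical parameterisation is assumed for (x, y, z), while the texture coordinates follow Equations 14 and 15; the function name and step size are illustrative.

```python
import math

def sphere_vertices(radius, step=1):
    """Generate (vertex, texture) pairs for the initial sphere model by equal
    longitude/latitude division: hAngle in [0, 360), vAngle in [0, 180].
    The vertex formula assumes a y-up spherical parameterisation (an
    assumption; the original equations are not reproduced in this text)."""
    verts = []
    for h in range(0, 360, step):
        for v in range(0, 181, step):
            hr, vr = math.radians(h), math.radians(v)
            x = radius * math.sin(vr) * math.cos(hr)
            y = radius * math.cos(vr)
            z = radius * math.sin(vr) * math.sin(hr)
            tex = (1 - h / 360.0, 1 - v / 180.0)  # Equations 14 and 15
            verts.append(((x, y, z), tex))
    return verts
```

At render time each vertex would carry its texture coordinate into the shader, so the sampler reads the matching pixel of the panoramic image for every triangular patch.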
S5022: a sphere panoramic model is generated based on the vertex data, texture data, and the panoramic image with the detection information on the initial sphere model.
Specifically, the panoramic image is mapped as a texture on the surface of the initial sphere model such that a partial image of the panoramic image is stored within each triangular patch.
Through the steps, the display equipment can render the panoramic image acquired by the panoramic acquisition device to the surface of the initial spherical model, so that the spherical panoramic model with the image content rendered on the surface is obtained. When moving along the surface of the sphere panoramic model, color information is sampled from the panoramic image according to texture coordinates of the current vertex, so that the panoramic image is correctly displayed on the surface of the sphere.
In an embodiment, the method for obtaining the spherical panorama model of the target area further comprises the following steps.
In panoramic image presentation, it is generally not the entire panoramic image but the panoramic image within the display range of the spherical panoramic model that is shown to the user. This display range may be determined by the viewer's line-of-sight direction, a preset angle of view and the sphere model radius (which can be obtained from the sphere model data in step S5021). The preset angles of view include the angle of view fov_h in the horizontal direction and the angle of view fov_v in the vertical direction.
S5023: and determining a horizontal display view angle range and a vertical display view angle range corresponding to the spherical panoramic model based on the current visual field information of the panoramic acquisition device for acquiring the panoramic image of the target area.
Specifically, the current field-of-view information of the panoramic acquisition device includes its horizontal field angle fov_h, vertical field angle fov_v and downtilt angle P_down. The initial angle and the termination angle of the drawing position of the panoramic image on the three-dimensionally rendered sphere surface are determined according to the horizontal field angle, the vertical field angle and the downtilt angle of the panoramic acquisition device.
The horizontal display view angle range and the vertical display view angle range corresponding to the spherical panoramic model are then calculated using the following formulas.
hAngle_right = 360 - hAngle_left (Equation 17)
S5024: and displaying the images of the surface of the spherical panoramic model in the horizontal display view angle range and the vertical display view angle range.
In one embodiment, the panoramic image is texture-mapped to the surface within the horizontal display field angle range (hAngle_left, hAngle_right) and the vertical display field angle range (vAngle_top, vAngle_bot) of the initial sphere model, such that a partial image of the panoramic image is stored within each triangular patch inside these ranges.
Specifically, the specific embodiment of receiving and rendering the target image sent by the server in step S5 is as follows.
And the display device receives the target image sent by the server and then performs two-dimensional rendering on the target image.
Specifically, performing plane rendering on the target image to obtain a rendered target image.
In one embodiment, an initial planar model is obtained, the initial planar model including a plurality of vertex data, each vertex data having texture data. The target image is a two-dimensional image in which the positional information of each pixel point in the image can be represented by its coordinates. Since the target image is rendered and displayed in a manner of being attached to the surface of the initial plane model, the vertex of each initial plane model can correspond to a pixel point in the target image, and the coordinate of the pixel point is generally called as texture coordinate corresponding to the vertex of the corresponding initial plane model.
The target image is mapped as a texture on the surface of the initial planar model such that a partial image of the target image is stored within each triangular patch.
Through the steps, the display equipment can render the target image to the surface of the initial plane model, so that the plane model with the image content rendered on the surface is obtained.
Specifically, the specific embodiment of displaying the rendered target image and the sphere panoramic model in step S6 is as follows.
In this embodiment, OpenGL ES is used to perform three-dimensional rendering of the panoramic image and two-dimensional rendering of the target image, so that the rendered target image and the spherical panoramic model are displayed on the display device. The display device displays the rendered target image suspended over the surface of the spherical panoramic model, and the detection information in the panoramic image is displayed superimposed on the surface of the spherical panoramic model in the form of rectangular frames, polygonal frames and the like.
The image display method provided in this embodiment is applicable to a display device, where the display device is communicatively connected with a server and includes a display apparatus. The image display method includes: receiving and rendering the target image sent by the server; and displaying the rendered target image and the spherical panoramic model, wherein the target image is acquired by the image display method described above. In this method, the server performs target detection on the acquired panoramic image, the display apparatus then three-dimensionally renders the panoramic image with the detection information, and the acquired target image is two-dimensionally rendered and then displayed, so that camera pixels can be transmitted to the VR device in real time for real-time display, and various intelligent algorithms can be loaded on the server to detect key monitoring information, improving the viewing experience.
Referring to fig. 10, fig. 10 is a schematic diagram of a frame of an image display device according to an embodiment of the invention. The present embodiment provides an image display apparatus 60, the image display apparatus 60 includes a server, the server is in communication connection with a display device, the display device includes a display apparatus, and the image display apparatus 60 includes a detection module 61, an analysis module 62, a determination module 63, and a transmission module 64.
The detection module 61 is configured to perform target detection on the obtained panoramic image of the target area, so as to obtain detection information corresponding to the panoramic image.
The analysis module 62 is configured to determine three-dimensional coordinate information of an intersection of a line-of-sight vector of the display device on a surface of the spherical panoramic model based on the acquired current field-of-view information of the display device and the spherical panoramic model of the target area; the sphere panoramic model is rendered based on the panoramic image with the detection information.
The determining module 63 is configured to determine a target image including the intersection point based on the three-dimensional coordinate information of the intersection point.
The sending module 64 is configured to send the target image to a display device for rendering and displaying.
According to this apparatus, the server is communicatively connected with the display device; after the server performs target detection on the acquired panoramic image, the three-dimensional coordinate information of the intersection point of the line-of-sight vector of the display device on the surface of the spherical panoramic model is determined based on the current field-of-view information of the display device and the spherical panoramic model of the target area, the target image is then determined, and the target image is sent to the display device for rendering and display, so that the target detection result can be viewed while the monitored space is viewed in all directions, improving the viewing experience.
Referring to fig. 11, fig. 11 is a schematic frame diagram of an image display apparatus according to another embodiment of the invention.
The present embodiment provides an image display apparatus 70. The image display apparatus 70 includes a display device communicatively connected with a server, the display device includes a display apparatus, and the image display apparatus 70 includes a rendering module 71 and a display module 72.
The rendering module 71 is configured to receive and render the target image sent by the server.
The display module 72 is used for displaying the rendered target image and the sphere panoramic model; the target image is acquired by the image display method described above.
According to the image display apparatus provided by this embodiment, the server is communicatively connected with the display device. After the server performs target detection on the obtained panoramic image, the display apparatus performs three-dimensional rendering on the panoramic image with the detection information, and the obtained target image is displayed after two-dimensional rendering. In this way, camera pixels can be transmitted to the VR device for real-time display, various intelligent algorithms can be loaded on the server to detect key monitoring information, and the viewing experience is improved.
Referring to fig. 12, fig. 12 is a schematic frame diagram of a terminal according to an embodiment of the invention. The terminal 80 includes a memory 81 and a processor 82 coupled to each other, the processor 82 being configured to execute program instructions stored in the memory 81 to implement the steps of any of the image display method embodiments described above. In one specific implementation scenario, the terminal 80 may include, but is not limited to, a microcomputer or a server; the terminal 80 may also include a mobile device such as a notebook computer or a tablet computer, which is not limited herein.
In particular, the processor 82 is configured to control itself and the memory 81 to implement the steps of any of the image display method embodiments described above. The processor 82 may also be referred to as a CPU (Central Processing Unit). The processor 82 may be an integrated circuit chip having signal processing capabilities. The processor 82 may also be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like. In addition, the processor 82 may be jointly implemented by a plurality of integrated circuit chips.
Referring to fig. 13, fig. 13 is a schematic frame diagram of an embodiment of a computer-readable storage medium according to the present invention. The computer-readable storage medium 90 stores program instructions 901 executable by a processor, the program instructions 901 being for implementing the steps of any one of the image display method embodiments described above.
In some embodiments, functions or modules included in an apparatus provided by the embodiments of the present disclosure may be used to perform a method described in the foregoing method embodiments, and specific implementations thereof may refer to descriptions of the foregoing method embodiments, which are not repeated herein for brevity.
The foregoing description of the various embodiments focuses on the differences between them; for the parts that are the same as or similar to one another, reference may be made between the embodiments, and details are not repeated herein for the sake of brevity.
In the several embodiments provided in the present application, it should be understood that the disclosed method and apparatus may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative. For instance, the division of modules or units is merely a logical functional division, and there may be other divisions in actual implementation; for example, units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the mutual coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in electrical, mechanical or other forms.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to execute all or part of the steps of the methods of the embodiments of the present application. The aforementioned storage medium includes: a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, or other media capable of storing program code.
The foregoing is only embodiments of the present invention and is not intended to limit the patent scope of the present invention. Any equivalent structure or equivalent process transformation made using the content of the present specification and the accompanying drawings, or any direct or indirect application in other related technical fields, shall likewise be included in the patent protection scope of the present invention.

Claims (10)

1. An image display method, wherein the image display method is applied to a server, the server is in communication connection with a display device, the display device comprises a display device, and the image display method comprises:
performing target detection on the panoramic image of the obtained target area to obtain detection information corresponding to the panoramic image;
determining three-dimensional coordinate information of an intersection point of a sight vector of the display device on the surface of the spherical panoramic model based on the acquired current visual field information of the display device and the spherical panoramic model of the target area; the sphere panoramic model is generated by rendering based on the panoramic image with the detection information;
determining a target image containing the intersection point based on the three-dimensional coordinate information of the intersection point;
and sending the target image to the display device for rendering and displaying.
2. The image display method according to claim 1, wherein,
The determining three-dimensional coordinate information of an intersection point of a line-of-sight vector of the display device on a surface of the sphere panoramic model based on the acquired current field-of-view information of the display device and the sphere panoramic model of the target area includes:
determining a ray equation corresponding to current visual field information based on the current visual field information of the display device; the current visual field information of the display device comprises visual field angle information, position information and orientation information corresponding to left eyes and right eyes respectively;
determining a spherical equation of the spherical panoramic model based on attribute information of the spherical panoramic model; the attribute information of the sphere panoramic model comprises a sphere radius; and
determining three-dimensional coordinate information of an intersection point of the line-of-sight vector of the display device on the surface of the sphere panoramic model based on the ray equation and the spherical equation.
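By way of illustration only (not part of the claimed subject matter), the ray–sphere intersection recited in claim 2 can be sketched as the standard quadratic solution of a normalized ray equation against a sphere equation. The function name and the choice of the far root (the exit point, appropriate for a viewer inside the panorama sphere) are illustrative assumptions, not text from the patent.

```python
import math

def ray_sphere_intersection(origin, direction, radius):
    """Solve |origin + t * direction|^2 = radius^2 for t >= 0 and return
    the 3-D intersection point on the sphere surface, or None on a miss."""
    ox, oy, oz = origin
    dx, dy, dz = direction
    norm = math.sqrt(dx * dx + dy * dy + dz * dz)
    dx, dy, dz = dx / norm, dy / norm, dz / norm   # normalize line-of-sight vector
    # Quadratic t^2 + 2*b*t + c = 0 (leading coefficient 1 after normalizing)
    b = ox * dx + oy * dy + oz * dz
    c = ox * ox + oy * oy + oz * oz - radius * radius
    disc = b * b - c
    if disc < 0:
        return None                                # sight line misses the sphere
    t = -b + math.sqrt(disc)                       # far root: exit point for a viewer inside
    if t < 0:
        return None
    return (ox + t * dx, oy + t * dy, oz + t * dz)

# A viewer at the sphere centre looking along +z hits the surface at (0, 0, radius).
point = ray_sphere_intersection((0.0, 0.0, 0.0), (0.0, 0.0, 1.0), 5.0)
```

When the display apparatus sits at the model centre, as in a typical VR panorama, every gaze direction yields exactly one surface point at distance `radius`.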
3. The image display method according to claim 1 or 2, wherein,
The determining a target image including the intersection point based on the three-dimensional coordinate information of the intersection point includes:
determining pixel information of the intersection in the panoramic image based on the three-dimensional coordinate information of the intersection;
determining control parameters for displaying a target image containing the intersection point based on the pixel information of the intersection point in the panoramic image;
and adjusting a target acquisition device based on the control parameters, and acquiring the target image displayed by the target acquisition device.
4. The image display method according to claim 3, wherein the pixel information includes pixel point coordinates;
The determining pixel information of the intersection in the panoramic image based on the three-dimensional coordinate information of the intersection includes:
determining horizontal angle information and vertical angle information corresponding to the intersection point based on the three-dimensional coordinate information of the intersection point on the surface of the spherical panoramic model; and
determining pixel point coordinates of the intersection on the panoramic image of the target area based on the horizontal angle information and the vertical angle information corresponding to the intersection, a horizontal display view angle range and a vertical display view angle range, and the resolution of the panoramic image.
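For illustration only, the angle-to-pixel mapping of claim 4 can be sketched for the common equirectangular case: the horizontal angle (azimuth) and vertical angle (elevation) are derived from the 3-D surface coordinates and scaled by the panorama resolution over the view-angle ranges. The 360°/180° ranges, the axis convention (y up), and the function name are illustrative assumptions.

```python
import math

def intersection_to_pixel(x, y, z, width, height,
                          h_fov=360.0, v_fov=180.0):
    """Map a 3-D point on the sphere surface to (u, v) pixel coordinates
    of an equirectangular panoramic image."""
    r = math.sqrt(x * x + y * y + z * z)
    azimuth = math.degrees(math.atan2(x, z)) % 360.0    # horizontal angle, 0..360
    elevation = math.degrees(math.asin(y / r)) + 90.0   # vertical angle, 0..180

    u = int(azimuth / h_fov * (width - 1))
    v = int((1.0 - elevation / v_fov) * (height - 1))   # row 0 = top of the image
    return u, v

# A point straight ahead (0, 0, 1) lands on the horizontal mid-row, left column.
pixel = intersection_to_pixel(0.0, 0.0, 1.0, width=3600, height=1800)
```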
5. The image display method according to claim 3, wherein the display device further comprises a handle;
The determining, based on pixel information of the intersection in the panoramic image, a control parameter for displaying a target image including the intersection includes:
searching in a preset relation table based on pixel point coordinates of the intersection points on the panoramic image; the preset relation table comprises a plurality of preset coordinates, and each preset coordinate has a corresponding preset operation parameter;
taking the preset operation parameters corresponding to the preset coordinates matched with the pixel point coordinates as control parameters of a target acquisition device for acquiring the target image, wherein the control parameters comprise a yaw angle and a pitch angle; the target image is an enlarged image of a local area in the panoramic image with the intersection as a center point;
the adjusting the target acquisition device based on the control parameter, and acquiring the target image displayed by the target acquisition device, includes:
adjusting the rotation of the target acquisition device based on the control parameters through the handle, and displaying the target image;
and performing magnification adjustment and/or focusing on the target image through the handle.
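As an illustrative sketch of the preset relation table of claim 5 (the table layout, tolerance-based matching, and parameter names `yaw`/`pitch` are assumptions for this example, not details disclosed in the patent), a lookup could map the intersection's pixel coordinates to the control parameters of the target acquisition device:

```python
def lookup_control_parameters(pixel, table, tolerance=50):
    """Return the pan/tilt control parameters of the preset coordinate
    closest to `pixel`, or None when no preset lies within `tolerance`
    pixels (squared-distance comparison avoids a sqrt)."""
    u, v = pixel
    best, best_d2 = None, tolerance ** 2
    for (pu, pv), params in table.items():
        d2 = (pu - u) ** 2 + (pv - v) ** 2
        if d2 <= best_d2:
            best, best_d2 = params, d2
    return best

# Hypothetical preset relation table: pixel coordinate -> yaw/pitch in degrees.
preset_table = {
    (100, 200): {"yaw": -30.0, "pitch": 10.0},
    (900, 450): {"yaw": 45.0, "pitch": -5.0},
}
params = lookup_control_parameters((910, 460), preset_table)
```

The returned yaw and pitch would then drive the rotation of the target acquisition device, with magnification and focusing adjusted separately through the handle.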
6. An image display method, wherein the image display method is applied to a display device, the display device is in communication connection with a server, the display device comprises the display apparatus, and the image display method comprises:
receiving a target image sent by the server and rendering the target image; and
displaying the rendered target image and the spherical panoramic model; the target image is acquired by the image display method according to any one of claims 1 to 5.
7. The image display method according to claim 6, wherein,
The image display method further includes:
receiving a panoramic image sent by the server and detection information corresponding to the panoramic image;
and performing three-dimensional rendering on the panoramic image with the detection information to obtain the spherical panoramic model.
8. The image display method according to claim 7, wherein,
The three-dimensional rendering of the panoramic image with the detection information to obtain the sphere panoramic model comprises the following steps:
pre-constructing an initial sphere model, wherein the surface of the initial sphere model is provided with a plurality of vertex data and texture data corresponding to the vertex data;
generating the sphere panoramic model based on each of the vertex data, the texture data, and the panoramic image with the detection information on the initial sphere model;
determining a horizontal display view angle range and a vertical display view angle range corresponding to the spherical panoramic model based on current visual field information of a panoramic acquisition device for acquiring the panoramic image of the target area;
and displaying the images of the surface of the spherical panoramic model in the horizontal display view angle range and the vertical display view angle range.
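For illustration only, the initial sphere model of claim 8 — vertex data paired with texture data — can be sketched as a latitude/longitude mesh whose per-vertex (u, v) coordinates sample the panoramic image. The stack/slice tessellation and the function name are illustrative assumptions.

```python
import math

def build_sphere_mesh(radius, stacks, slices):
    """Generate vertex positions and matching (u, v) texture coordinates
    for a latitude/longitude sphere, as used to wrap a panoramic image."""
    vertices, uvs = [], []
    for i in range(stacks + 1):
        theta = math.pi * i / stacks            # polar angle, 0..pi (top to bottom)
        for j in range(slices + 1):
            phi = 2.0 * math.pi * j / slices    # azimuth, 0..2*pi
            x = radius * math.sin(theta) * math.cos(phi)
            y = radius * math.cos(theta)
            z = radius * math.sin(theta) * math.sin(phi)
            vertices.append((x, y, z))
            # Each vertex samples the panorama at the matching (u, v).
            uvs.append((j / slices, i / stacks))
    return vertices, uvs

verts, uvs = build_sphere_mesh(radius=1.0, stacks=32, slices=64)
```

A renderer would upload `verts` and `uvs` together and bind the panoramic image (with its detection overlays) as the texture, yielding the sphere panoramic model displayed within the view-angle ranges.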
9. A terminal, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor is configured to execute the computer program to implement the steps of the image display method according to any one of claims 1 to 8.
10. A computer-readable storage medium, on which a computer program is stored, which when executed by a processor implements the steps in the image display method according to any one of claims 1 to 8.
CN202410318200.5A 2024-03-19 2024-03-19 Image display method, terminal and computer readable storage medium Pending CN118301286A (en)

Publication: CN118301286A, published 2024-07-05


