CN114416244A - Information display method and device, electronic equipment and storage medium

Info

Publication number
CN114416244A
Authority
CN
China
Prior art keywords
acquisition
display
coordinate
point
interface
Prior art date
Legal status
Granted
Application number
CN202111670093.5A
Other languages
Chinese (zh)
Other versions
CN114416244B (en)
Inventor
Inventor not announced
Current Assignee
Beijing Chengshi Wanglin Information Technology Co Ltd
Original Assignee
Beijing Chengshi Wanglin Information Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Chengshi Wanglin Information Technology Co Ltd
Priority to CN202111670093.5A
Publication of CN114416244A
Application granted
Publication of CN114416244B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44 Arrangements for executing specific programs
    • G06F 9/451 Execution arrangements for user interfaces

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

Embodiments of the invention provide an information display method and apparatus, an electronic device and a storage medium. In the method, during image acquisition, the spatial acquisition point at which the terminal is located is obtained and mapped to corresponding coordinates, and the corresponding display acquisition point, acquisition track and floor plan are then obtained through calculation.

Description

Information display method and device, electronic equipment and storage medium
Technical Field
The present invention relates to the field of data processing technologies, and in particular, to an information display method, an information display apparatus, an electronic device, and a computer-readable storage medium.
Background
With the development of network technology, viewing houses online has become an important way for people to find housing. For online house viewing, an individual landlord or a broker needs to photograph the physical space of a target house source and upload the corresponding pictures to a platform, so that house-seeking users can search the platform for house sources matching their needs. To preserve the authenticity and sense of space of the house source display, the physical space of the target house source can be captured in a panoramic acquisition mode to obtain a corresponding panoramic image, so that house-seeking users can browse the corresponding house source information through the panoramic image. In a related acquisition process, only the real-time image stream captured by the terminal camera is displayed in the acquisition interface, and image acquisition is performed according to the user's acquisition operations; the user cannot obtain further real-time information such as the acquisition track or the floor plan during the process. Even if the terminal can display the acquisition track, the floor plan and the like, the large amount of associated data causes the interface to stall during drawing, which affects the user's acquisition flow.
Disclosure of Invention
The embodiment of the invention provides an information display method, an information display apparatus, an electronic device and a computer-readable storage medium, so as to solve or partially solve the problems in the related art that the interface easily stalls and the acquisition flow is affected when specific house source information is drawn during image acquisition.
The embodiment of the invention discloses an information display method, which provides a graphical user interface through a preset terminal, wherein the content displayed on the graphical user interface at least comprises an acquisition interface, and the method comprises the following steps:
responding to a first acquisition operation, determining an Nth space acquisition point corresponding to the first acquisition operation, and acquiring a first acquisition coordinate and a first space attribute corresponding to the Nth space acquisition point;
displaying an Nth display acquisition point corresponding to the Nth spatial acquisition point and a current floor plan in the acquisition interface according to the first acquisition coordinate and the first spatial attribute;
responding to the movement of a preset terminal, and acquiring a real-time position coordinate of the preset terminal;
displaying an acquisition track connected with the Nth display acquisition point in the acquisition interface according to the real-time position coordinate;
responding to a second acquisition operation, determining an (N + 1) th space acquisition point corresponding to the second acquisition operation, and acquiring a second acquisition coordinate and a second space attribute corresponding to the (N + 1) th space acquisition point;
displaying an (N+1)th display acquisition point corresponding to the (N+1)th spatial acquisition point on the acquisition track in the acquisition interface according to the second acquisition coordinate and the second spatial attribute, and updating the current floor plan into a target floor plan corresponding to the Nth display acquisition point and the (N+1)th display acquisition point.
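For illustration only, the following Python sketch strings the above steps together; the helper names (map_to_coordinate, recognize_attribute and the ui callbacks) are hypothetical and do not form part of this disclosure.

# Minimal sketch of the claimed flow; the helper names are hypothetical.
def run_acquisition(ui, map_to_coordinate, recognize_attribute, capture_points):
    display_points, track = [], []
    for spatial_point in capture_points:
        # Each acquisition operation: map the spatial acquisition point to an
        # acquisition coordinate and recognize the spatial attribute of the space.
        coordinate = map_to_coordinate(spatial_point)
        attribute = recognize_attribute(spatial_point)
        display_points.append(ui.show_display_point(coordinate))
        # Display or update the floor plan from the display points gathered so far.
        ui.update_floor_plan(display_points, attribute)
        # Between acquisitions, real-time position coordinates extend the
        # acquisition track connected to the latest display acquisition point.
        for real_time_coordinate in ui.real_time_coordinates():
            track.append(real_time_coordinate)
            ui.show_track(track, display_points[-1])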
Optionally, the obtaining a first spatial attribute corresponding to the nth spatial acquisition point includes:
based on the Nth space acquisition point, acquiring an image of a first target space where the preset terminal is located, and acquiring a first target image corresponding to the Nth space acquisition point;
and identifying the first target image to obtain a first space attribute of the first target space.
Optionally, the displaying, according to the first acquisition coordinate and the first spatial attribute, an nth display acquisition point corresponding to the nth spatial acquisition point and a current floor plan in the acquisition interface includes:
acquiring a reference position point positioned on the acquisition interface and a reference coordinate of the reference position point;
acquiring a track scaling coefficient and a dragging scaling coefficient aiming at the first acquisition coordinate;
calculating a first display coordinate of the Nth space acquisition point on the graphical user interface by adopting the reference coordinate, the first acquisition coordinate, the track zoom coefficient and the dragging zoom coefficient;
displaying an Nth display and acquisition point corresponding to the Nth spatial acquisition point in the acquisition interface based on the first display coordinate;
and displaying the current floor plan corresponding to the Nth spatial acquisition point in the acquisition interface according to the first display coordinate and the first spatial attribute.
Optionally, the first spatial attribute at least comprises a first contour parameter of the first target space corresponding to the first target image, and the displaying, in the acquisition interface, of the current floor plan corresponding to the Nth spatial acquisition point according to the first display coordinate and the first spatial attribute comprises:
determining a plurality of first vertexes corresponding to the Nth display acquisition point in the acquisition interface by adopting the first contour parameter and the first display coordinate;
acquiring vertex display coordinates of the first vertexes;
and generating the current floor plan corresponding to the Nth display acquisition point in the acquisition interface by adopting the vertex display coordinates of the first vertexes and the first display coordinate.
Optionally, the first spatial attribute further comprises wall objects of the first target space and the position of each wall object in the first target space, wherein the wall objects at least comprise a first wall object and a second wall object, and the second wall object comprises a door body object; the generating of the current floor plan corresponding to the Nth display acquisition point in the acquisition interface by adopting the vertex display coordinates of the first vertexes and the first display coordinate comprises:
generating a first floor plan corresponding to the Nth display acquisition point by adopting the vertex display coordinates of the first vertexes and the first display coordinate;
matching the first wall objects and the second wall object with the first floor plan based on the first position of each first wall object in the first target space and the second position of the at least one second wall object in the first target space, and adjusting the display of the first floor plan to obtain a second floor plan;
and displaying the second floor plan in the acquisition interface.
Optionally, the acquisition interface includes a position indication identifier, where the position indication identifier is used to indicate an acquisition direction of the preset terminal; the displaying, according to the real-time position coordinates, a collection trajectory connecting the nth display collection point in the collection interface includes:
calculating the real-time display coordinate of the position indication identifier on the acquisition interface by adopting the reference coordinate, the real-time position coordinate, the track scaling coefficient and the dragging scaling coefficient;
and displaying the position indication identifier in the acquisition interface according to the real-time display coordinate, and displaying an acquisition track connecting the position indication identifier and the Nth display acquisition point.
Optionally, the method further comprises:
responding to the movement of the preset terminal, and acquiring a coordinate distance between the real-time position coordinate and the reference coordinate;
if the coordinate distance is larger than a preset distance threshold, calculating a distance ratio between the coordinate distance and the preset distance threshold;
and zooming the current floor plan or the target floor plan in real time by adopting the distance ratio.
Optionally, the displaying, according to the second acquisition coordinate and the second spatial attribute, an N +1 th display acquisition point corresponding to the N +1 th spatial acquisition point on the acquisition track in the acquisition interface includes:
calculating a second display coordinate of the (N + 1) th space acquisition point on the graphical user interface by adopting the reference coordinate, the second acquisition coordinate, the track zoom coefficient and the dragging zoom coefficient;
displaying the (N + 1) th display and acquisition point corresponding to the (N + 1) th spatial acquisition point on the acquisition track in the acquisition interface based on the second display coordinate.
Optionally, the second spatial attribute at least comprises a second contour parameter of a second target space corresponding to a second target image, and the updating of the current floor plan into a target floor plan corresponding to the Nth display acquisition point and the (N+1)th display acquisition point comprises:
determining a plurality of second vertexes corresponding to the (N+1)th display acquisition point in the acquisition interface by adopting the second display coordinate and the second contour parameter;
acquiring vertex display coordinates of the second vertexes;
and updating the current floor plan into a target floor plan corresponding to the Nth display acquisition point and the (N+1)th display acquisition point by adopting the vertex display coordinates of the first vertexes, the vertex display coordinates of the second vertexes, the first display coordinate and the second display coordinate.
Optionally, the method further comprises:
responding to dragging operation aiming at the acquisition interface, and determining touch information corresponding to the dragging operation, wherein the touch information comprises a starting coordinate and an ending coordinate;
calculating an offset parameter corresponding to the dragging operation by adopting the starting coordinate and the ending coordinate;
and adjusting the display positions of the Nth display acquisition point and the current floor plan on the acquisition interface, or the display positions of the Nth display acquisition point, the target acquisition point, the acquisition track and the target floor plan on the acquisition interface, by adopting the reference coordinate and the offset parameter.
Optionally, the method further comprises:
responding to a zooming operation aiming at the acquisition interface, and determining a zoom ratio corresponding to the zooming operation;
and zooming the current floor plan, or zooming the acquisition track and the target floor plan, by adopting the zoom ratio.
The embodiment of the invention also discloses an information display device, which provides a graphical user interface through a preset terminal, wherein the content displayed on the graphical user interface at least comprises an acquisition interface, and the device comprises:
the first information acquisition module is used for responding to a first acquisition operation, determining an Nth space acquisition point corresponding to the first acquisition operation, and acquiring a first acquisition coordinate and a first space attribute corresponding to the Nth space acquisition point;
the first content display module is used for displaying an Nth display acquisition point corresponding to the Nth spatial acquisition point and a current floor plan in the acquisition interface according to the first acquisition coordinate and the first spatial attribute;
the real-time coordinate acquisition module is used for responding to the movement of a preset terminal and acquiring the real-time position coordinate of the preset terminal;
the acquisition track display module is used for displaying an acquisition track connected with the Nth display acquisition point in the acquisition interface according to the real-time position coordinate;
the second information acquisition module is used for responding to a second acquisition operation, determining an (N + 1) th space acquisition point corresponding to the second acquisition operation, and acquiring a second acquisition coordinate and a second space attribute corresponding to the (N + 1) th space acquisition point;
and the second content display module is used for displaying the (N+1)th display acquisition point corresponding to the (N+1)th spatial acquisition point on the acquisition track in the acquisition interface according to the second acquisition coordinate and the second spatial attribute, and updating the current floor plan into a target floor plan corresponding to the Nth display acquisition point and the (N+1)th display acquisition point.
Optionally, the first information obtaining module includes:
the image acquisition submodule is used for acquiring an image of a first target space where the preset terminal is located based on the Nth space acquisition point to obtain a first target image corresponding to the Nth space acquisition point;
and the image identification submodule is used for identifying the first target image and obtaining a first space attribute of the first target space.
Optionally, the first content display module includes:
the reference coordinate acquisition submodule is used for acquiring a reference position point positioned on the acquisition interface and a reference coordinate of the reference position point;
the zooming coefficient acquisition submodule is used for acquiring a track zooming coefficient and a dragging zooming coefficient aiming at the first acquisition coordinate;
the first display coordinate calculation submodule is used for calculating a first display coordinate of the Nth space acquisition point on the graphical user interface by adopting the reference coordinate, the first acquisition coordinate, the track zoom coefficient and the dragging zoom coefficient;
the first display acquisition point determining submodule is used for displaying the Nth display acquisition point corresponding to the Nth spatial acquisition point in the acquisition interface based on the first display coordinate;
and the first floor plan display sub-module is used for displaying the current floor plan corresponding to the Nth spatial acquisition point in the acquisition interface according to the first display coordinate and the first spatial attribute.
Optionally, the first spatial attribute at least comprises a first contour parameter of a first target space corresponding to the first target image, and the first floor plan display sub-module is specifically configured to:
determine a plurality of first vertexes corresponding to the Nth display acquisition point in the acquisition interface by adopting the first contour parameter and the first display coordinate;
acquire vertex display coordinates of the first vertexes;
and generate the current floor plan corresponding to the Nth display acquisition point in the acquisition interface by adopting the vertex display coordinates of the first vertexes and the first display coordinate.
Optionally, the first spatial attribute further comprises wall objects of the first target space and the position of each wall object in the first target space, wherein the wall objects at least comprise a first wall object and a second wall object, and the second wall object comprises a door body object; the first floor plan display sub-module is specifically configured to:
generate a first floor plan corresponding to the Nth display acquisition point by adopting the vertex display coordinates of the first vertexes and the first display coordinate;
match the first wall objects and the second wall object with the first floor plan based on the first position of each first wall object in the first target space and the second position of the at least one second wall object in the first target space, and adjust the display of the first floor plan to obtain a second floor plan;
and display the second floor plan in the acquisition interface.
Optionally, the acquisition interface includes a position indication identifier, where the position indication identifier is used to indicate an acquisition direction of the preset terminal; the acquisition track display module comprises:
the real-time display coordinate determination submodule is used for calculating the real-time display coordinate of the position indication identifier on the acquisition interface by adopting the reference coordinate, the real-time position coordinate, the track zoom coefficient and the dragging zoom coefficient;
and the acquisition track display submodule is used for displaying the position indication identifier in the acquisition interface according to the real-time display coordinate and displaying an acquisition track connecting the position indication identifier and the Nth display acquisition point.
Optionally, the method further comprises:
the coordinate distance acquisition module is used for responding to the movement of the preset terminal and acquiring the coordinate distance between the real-time position coordinate and the reference coordinate;
the distance ratio determining module is used for calculating the distance ratio between the coordinate distance and a preset distance threshold if the coordinate distance is greater than the preset distance threshold;
and the first zooming module is used for zooming the current floor plan or the target floor plan in real time by adopting the distance ratio.
Optionally, the second information obtaining module includes:
the second display coordinate determination submodule is used for calculating a second display coordinate of the (N + 1) th space acquisition point on the graphical user interface by adopting the reference coordinate, the second acquisition coordinate, the track zoom coefficient and the dragging zoom coefficient;
and the second display acquisition point determining submodule is used for displaying the (N + 1) th display acquisition point corresponding to the (N + 1) th spatial acquisition point on the acquisition track in the acquisition interface based on the second display coordinate.
Optionally, the second spatial attribute at least includes a second contour parameter of a second target space corresponding to a second target image, and the second content display module includes:
the vertex determining submodule is used for determining a plurality of second vertexes corresponding to the (N + 1) th display acquisition point in the acquisition interface by adopting the second display coordinates and the second contour parameters;
the vertex display coordinate acquisition submodule is used for acquiring the vertex display coordinate of the second vertex;
and the second floor plan determining submodule is used for updating the current floor plan into a target floor plan corresponding to the Nth display acquisition point and the (N+1)th display acquisition point by adopting the vertex display coordinates of the first vertexes, the vertex display coordinates of the second vertexes, the first display coordinate and the second display coordinate.
Optionally, the method further comprises:
the touch information determining module is used for responding to dragging operation aiming at the acquisition interface and determining touch information corresponding to the dragging operation, wherein the touch information comprises a starting coordinate and an ending coordinate;
the offset parameter determining module is used for calculating an offset parameter corresponding to the dragging operation by adopting the starting coordinate and the ending coordinate;
and the interface translation module is used for adjusting the display positions of the Nth display acquisition point and the current floor plan on the acquisition interface, or the display positions of the Nth display acquisition point, the target acquisition point, the acquisition track and the target floor plan on the acquisition interface, by adopting the reference coordinate and the offset parameter.
Optionally, the method further comprises:
the zoom ratio determining module is used for responding to a zooming operation aiming at the acquisition interface and determining a zoom ratio corresponding to the zooming operation;
and the interface zooming module is used for zooming the current floor plan, or zooming the acquisition track and the target floor plan, by adopting the zoom ratio.
The embodiment of the invention also discloses an electronic device, which comprises a processor, a communication interface, a memory and a communication bus, wherein the processor, the communication interface and the memory communicate with one another through the communication bus;
the memory is used for storing a computer program;
the processor is configured to implement the method according to the embodiment of the present invention when executing the program stored in the memory.
Also disclosed is a computer-readable storage medium having instructions stored thereon, which, when executed by one or more processors, cause the processors to perform a method according to an embodiment of the invention.
The embodiment of the invention has the following advantages:
In the embodiment of the invention, when the user triggers image acquisition during the acquisition process, the terminal determines an Nth spatial acquisition point and obtains the corresponding acquisition coordinate and the spatial attribute of the space in which the terminal is located, and then displays the corresponding Nth display acquisition point and the current floor plan in the acquisition interface according to the acquisition coordinate and the spatial attribute. As the user moves the terminal, the terminal locates the current real-time spatial position point according to pose data acquired in real time, obtains the corresponding real-time position coordinate, and displays, in the acquisition interface, an acquisition track connected with the Nth display acquisition point according to the real-time position coordinate. When the user triggers image acquisition again, the terminal determines an (N+1)th spatial acquisition point, obtains the corresponding acquisition coordinate and spatial attribute, displays the corresponding (N+1)th display acquisition point on the acquisition track according to that acquisition coordinate and spatial attribute, and updates the current floor plan into a target floor plan corresponding to the display acquisition points, and so on until image acquisition is completed. In this way, during image acquisition the spatial acquisition point at which the terminal is located is obtained and mapped to corresponding coordinates, and the corresponding display acquisition points, acquisition track and floor plan are obtained through calculation, which simplifies the data processing through coordinate operations, reduces the system operation load and keeps the interface display smooth.
Drawings
Fig. 1 is a flowchart illustrating steps of a method for displaying information according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of an acquisition interface provided in an embodiment of the present invention;
FIG. 3 is a schematic diagram of coordinate points provided in an embodiment of the invention;
FIG. 4 is a schematic diagram of an acquisition interface provided in an embodiment of the present invention;
FIG. 5 is a schematic diagram of an acquisition interface provided in an embodiment of the present invention;
FIG. 6 is a schematic diagram of an acquisition interface provided in an embodiment of the present invention;
FIG. 7 is a schematic diagram of an acquisition interface provided in an embodiment of the present invention;
fig. 8 is a block diagram of an information display apparatus provided in an embodiment of the present invention;
fig. 9 is a block diagram of an electronic device provided in an embodiment of the invention;
fig. 10 is a schematic diagram of a computer-readable storage medium provided in an embodiment of the present invention.
Detailed Description
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in further detail below.
As an example, compared with traditional online house viewing, panoramic house viewing can provide the user with a 360-degree viewing angle, so that the user can effectively perceive the physical space of the house source and is given a relatively realistic browsing scene. For panoramic house viewing, the user needs to hold the acquisition device, capture images in the corresponding space, generate a panoramic image and then process it, finally generating the corresponding panoramic data. During acquisition, in order to improve the user's acquisition experience, the corresponding acquisition track and floor plan outline can be displayed in the acquisition interface. However, the acquisition track, the floor plan outline and other related data involve a large amount of data, and in related processing approaches this large data volume easily increases the computational cost of the system, so that the interface display stalls, the user is made to wait, the user's acquisition flow is interrupted, and the image acquisition efficiency is greatly reduced.
In view of this, one of the core points of the embodiment of the present invention is as follows. When a user performs image acquisition, the terminal may display an acquisition interface. When it is detected that the user triggers image acquisition, an Nth spatial acquisition point corresponding to the acquisition operation may be determined, and a first acquisition coordinate and a first spatial attribute corresponding to the Nth spatial acquisition point may be acquired; an Nth display acquisition point and a current floor plan corresponding to the Nth spatial acquisition point are then displayed in the acquisition interface according to the first acquisition coordinate and the first spatial attribute. That is, when the user determines an acquisition point in the space and completes image acquisition there, the terminal can determine the display position of the acquisition point in the acquisition interface and the current floor plan corresponding to the acquired image according to the acquisition coordinate and the spatial attribute, so that the user can intuitively see the acquisition information and the floor plan constructed in real time. Then, as the user continues to acquire images and moves the terminal, the terminal can obtain the real-time spatial position point at which it is located and the corresponding real-time position coordinate, and display, in the acquisition interface, the acquisition track connected with the Nth display acquisition point according to the real-time position coordinate; the corresponding acquisition track is drawn in real time through coordinate operations, so that the user can intuitively see the route travelled during acquisition. When the user triggers image acquisition again, the terminal can determine the (N+1)th spatial acquisition point corresponding to the acquisition operation, obtain the acquisition coordinate and spatial attribute corresponding to the (N+1)th spatial acquisition point, display the (N+1)th display acquisition point corresponding to the (N+1)th spatial acquisition point on the acquisition track in the acquisition interface according to that acquisition coordinate and spatial attribute, and update the floor plan into a target floor plan corresponding to the Nth display acquisition point and at least one (N+1)th display acquisition point. This acquisition process is repeated, and the acquisition track and the floor plan are continuously updated, until image acquisition is completed. In this way, the data processing process is effectively simplified through coordinate operations, the system operation load is reduced, and the fluency of the interface display is ensured; by displaying the acquisition track, the floor plan and the like, the user can conveniently and intuitively see the regional position and the floor plan outline of the acquired object during acquisition.
For the convenience of those skilled in the art to better understand the technical solutions of the embodiments of the present invention, the following explains related terms related to the embodiments of the present invention:
The acquisition operation may be an operation, input by the user at the terminal, that is associated with an acquisition instruction; according to the acquisition operation, the terminal can call the camera to acquire a panoramic image of the space. The first acquisition operation may be the first acquisition operation of the acquisition process or an intermediate acquisition operation; the second acquisition operation is the next acquisition operation immediately following the first acquisition operation. For example, if the first acquisition operation is the first acquisition operation input by the user, the second acquisition operation may be the second acquisition operation; if the first acquisition operation is the Nth acquisition operation, the second acquisition operation may be the (N+1)th acquisition operation, and so on.
The acquisition coordinate is the coordinate, in the acquisition interface, to which a spatial acquisition point determined by the user in the physical space is mapped during image acquisition; by establishing a first coordinate system corresponding to the acquisition interface, the position of the terminal in the physical space is mapped into that first coordinate system. The real-time position coordinate is likewise the mapping, onto the acquisition interface, of the position point of the terminal in the physical space; the difference is that the real-time position coordinate may be acquired at a preset time interval, whereas the acquisition coordinate is determined when the user triggers image acquisition.
The display coordinates are the positions at which content is displayed on the graphical user interface, defined in a second coordinate system established for the graphical user interface (for example, a rectangular coordinate system based on the terminal screen, in which each pixel may correspond to one unit coordinate).
The spatial acquisition point may be the actual position at which the terminal performs image acquisition in the space; the real-time spatial position point may be the actual position of the terminal as it moves in the space, which varies as the terminal's position in the space changes.
Correspondingly, for a spatial acquisition point, its display position on the acquisition interface can be determined through coordinate mapping and coordinate calculation; for a real-time spatial position point, the corresponding acquisition track can be generated in the same way.
It should be noted that, in the following embodiments, two adjacent acquisition points (a spatial acquisition point and a display acquisition point) are taken as an example for illustration, and it is understood that the process may be an image acquisition scene of a first acquisition point and a second acquisition point, and may also include an image acquisition scene of an intermediate acquisition point and an image acquisition scene of a next acquisition point of the intermediate acquisition point, and the like, which is not limited by the present invention.
Specifically, referring to fig. 1, which shows a flowchart of the steps of an information display method provided in the embodiment of the present invention, a graphical user interface is provided through a preset terminal, and the content displayed on the graphical user interface at least includes an acquisition interface. Specifically, the method may include the following steps:
Step 101, responding to a first acquisition operation, determining an Nth spatial acquisition point corresponding to the first acquisition operation, and acquiring a first acquisition coordinate and a first spatial attribute corresponding to the Nth spatial acquisition point;
In the embodiment of the present invention, the terminal may run corresponding application programs, such as lifestyle applications, audio applications, game applications and the like. Lifestyle applications can be further divided by type, for example house rental and sale applications, home service applications, leisure and entertainment applications and the like. The embodiment of the present invention takes a house rental and sale application running on a mobile terminal as an example: a user may upload corresponding house source information through the house rental and sale application, for example by shooting a panoramic image of a house source and then uploading it.
Optionally, panoramic image acquisition combines a plurality of images acquired for a target scene into one panoramic image; the basic principle is to find the edge portions of two images and the overlapping areas whose imaging effect matches most closely, and to complete the image combination automatically. For example, with a certain position point in the target scene as the center, acquisition is performed over 360° horizontally and/or 180° vertically, and the plurality of acquired images of the target scene are combined into one panoramic image, thereby realizing panoramic acquisition of the target scene.
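As a sketch of this general stitching principle (not of the acquisition pipeline described in this application), the following example combines several overlapping shots of one scene into a panorama using OpenCV's high-level stitcher; the file names are placeholders.

# Illustrative only: stitch several overlapping shots of one scene into a panorama
# using OpenCV's high-level stitcher (file names are placeholders).
import cv2

def stitch_panorama(image_paths):
    images = [cv2.imread(p) for p in image_paths]
    if any(img is None for img in images):
        raise FileNotFoundError("one or more input images could not be read")
    stitcher = cv2.Stitcher_create(cv2.Stitcher_PANORAMA)
    status, panorama = stitcher.stitch(images)
    if status != cv2.Stitcher_OK:
        raise RuntimeError(f"stitching failed with status {status}")
    return panorama

if __name__ == "__main__":
    pano = stitch_panorama(["shot_0.jpg", "shot_1.jpg", "shot_2.jpg"])
    cv2.imwrite("panorama.jpg", pano)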
As an example, the electronic terminal may be a camera or a mobile terminal having a photographing function, and the mobile terminal may include a house rental and sale application through which panoramic image capture, management and the like may be performed. In addition, the mobile terminal and the camera can be connected to realize collaborative shooting. For ease of understanding and description, the embodiment of the present invention is described by taking a mobile terminal running the corresponding application program as an example, and it should be understood that the present invention is not limited thereto.
In a specific implementation, while running the application program the terminal may display a corresponding acquisition task creation interface, which may include a plurality of content input controls. Through these content input controls the user can specify what the current acquisition task needs to acquire, and the terminal then determines the corresponding acquisition task according to the user's operations. The content input controls may include a task identification control, an object selection control, an acquisition height control, an acquisition device control and the like: the user can input the task identifier of the acquisition task through the task identification control, select the objects to be acquired through the object selection control, set the height of the device during acquisition through the acquisition height control, and set the type of device used for acquisition through the acquisition device control. For example, the user can input the task identifier of the current acquisition task as "XX item" through the task identification control, select a bedroom, a living room, a kitchen and the like as the objects to be acquired through the object selection control, and set the acquisition height of the device to 1.48 meters through the acquisition height control; the present invention is not limited in this respect.
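For illustration, the acquisition task assembled from these input controls might be represented by a simple record such as the following; the field names are assumptions, and the example values are those mentioned above.

# Hypothetical representation of an acquisition task built from the creation interface.
from dataclasses import dataclass
from typing import List

@dataclass
class AcquisitionTask:
    task_id: str                   # from the task identification control
    objects_to_acquire: List[str]  # from the object selection control
    acquisition_height_m: float    # from the acquisition height control
    device_type: str               # from the acquisition device control

task = AcquisitionTask(
    task_id="XX item",
    objects_to_acquire=["bedroom", "living room", "kitchen"],
    acquisition_height_m=1.48,
    device_type="mobile terminal",
)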
After the user completes creation of the acquisition task, the terminal can display an acquisition interface in the graphical user interface. The acquisition interface may include a plurality of different interaction controls, through which different functions can be executed. For example, the interaction controls may include an image acquisition control, a time-delay acquisition control, a preview control, an acquisition view angle control, a space object control and the like: the image acquisition control may trigger the terminal to acquire a panoramic image, the time-delay acquisition control may trigger the terminal to perform image acquisition after a delay, the preview control may be used to view content that has already been acquired, the acquisition view angle control may be used to select the view angle at which image acquisition is to be performed, and the space object control may be used to select the space object for which image acquisition is to be performed.
In addition, a position indication identifier can be displayed in the acquisition interface. The position indication identifier can be used to indicate the acquisition direction of the preset terminal. The user holds the terminal and moves through the space in which image acquisition is to be performed in order to find a suitable acquisition position; as the terminal moves, the position indication identifier can move, rotate and so on within the acquisition interface to indicate the orientation of the terminal camera and the relative position of the terminal. After the user finds a suitable acquisition position in the space, the user can touch the image acquisition control provided in the acquisition interface, triggering the terminal to call the camera and perform panoramic image acquisition with that position as the origin, obtaining the corresponding panoramic image. Specifically, the terminal may determine the Nth spatial acquisition point corresponding to the user's acquisition operation, perform image acquisition on the first target space in which the preset terminal is located based on the Nth spatial acquisition point to obtain a first target image corresponding to the Nth spatial acquisition point, and then recognize the first target image to obtain the first spatial attribute of the first target space.
Optionally, the spatial attributes of the target space are obtained through image recognition, which is known in the art and is not described in detail here. When the terminal performs panoramic image acquisition on the first target space with the Nth spatial acquisition point as the origin, the obtained target image can be subjected to image recognition to obtain the spatial attributes of the first target space, such as the length dimension, the width dimension, the height dimension, the contour, the wall objects, the door body objects, the window body objects, the door-wall boundary lines between the wall objects and the door body objects, the window-wall boundary lines between the wall objects and the window body objects, the wall-wall boundary lines between adjacent wall objects, and the like. The overall contour, size and the like of the target space can be determined from the length dimension, the width dimension, the contour and so on, and the determined overall contour can then be refined using the wall objects, door body objects, window body objects and the various boundary lines (including the relationship between the wall objects represented by two adjacent edges in the floor plan, and whether a wall object represented by a single edge includes a door body object and/or a window body object, and the like), so as to obtain the floor plan corresponding to the target space; the present invention is not limited in this respect.
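For illustration, the spatial attributes obtained through image recognition could be carried in a structure along the following lines; the field names are assumptions rather than part of this application.

# Hypothetical container for the spatial attributes recognized from a target image.
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class WallObject:
    kind: str                           # e.g. "wall", "door", "window"
    position: Tuple[float, float]       # position within the target space

@dataclass
class SpatialAttribute:
    length_m: float                     # length dimension of the target space
    width_m: float                      # width dimension
    height_m: float                     # height dimension
    contour: List[Tuple[float, float]]  # overall contour of the space
    walls: List[WallObject] = field(default_factory=list)  # wall/door/window objects and boundaries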
In an example, for the first acquisition coordinate, a distance mapping relationship may be established in which every time the terminal moves a certain distance, this corresponds to a coordinate value of a certain number of units. A reference position point may therefore be set in the acquisition interface; when the terminal first enters the acquisition interface, the reference coordinate of the reference position point is (0,0), and as the terminal moves, the actual movement distance of the terminal can be converted into coordinates on the acquisition interface through the distance mapping relationship. For example, assuming that every 20 centimeters corresponds to one unit coordinate, the terminal can acquire pose data during movement and convert it into the corresponding coordinates according to the mapping relationship between the pose data and distance, so that after the user selects the Nth spatial acquisition point, coordinate mapping can be performed to obtain the acquisition coordinate of the Nth spatial acquisition point on the acquisition interface.
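Under the distance mapping relationship described above (one coordinate unit per 20 centimeters of movement), converting the terminal's displacement into an acquisition coordinate is a simple scaling. The sketch below illustrates this mapping only; it is not the exact implementation.

# Sketch: map the terminal's physical displacement (in metres, measured from where
# it first entered the acquisition interface) to acquisition coordinates, assuming
# one coordinate unit per 0.2 m as in the example above.
UNITS_PER_METER = 1 / 0.2   # one coordinate unit for every 20 cm of movement

def to_acquisition_coordinate(dx_m: float, dy_m: float) -> tuple:
    return (dx_m * UNITS_PER_METER, dy_m * UNITS_PER_METER)

# The reference position point starts at (0, 0); moving 1.0 m in x and 0.6 m in y
# maps to acquisition coordinate (5.0, 3.0).
print(to_acquisition_coordinate(1.0, 0.6))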
Step 102, displaying an Nth display acquisition point corresponding to the Nth spatial acquisition point and a current floor plan in the acquisition interface according to the first acquisition coordinate and the first spatial attribute;
Optionally, in the image acquisition process, a reference position point may be set in the acquisition interface. For example, referring to fig. 2, which shows a schematic diagram of the acquisition interface provided in the embodiment of the present invention, the acquisition interface 20 is displayed through the graphical user interface of the terminal and may be divided into three regions: the region 210 may be a control region containing a plurality of interaction controls for executing different functions; the region 220 may be a display region in which the acquisition track drawn in real time, the display acquisition points, the floor plan and the like can be displayed; and the region 230 may be an image stream display region in which the real-time image stream captured by the camera can be displayed in real time. The reference position point may be the center point of the region 220, with coordinates (0,0), and the acquisition track, the display acquisition points, the floor plan and the like are drawn based on this reference position point. In addition, the reference position point is invisible to the user; that is, the user cannot see the corresponding point in the acquisition interface.
In a specific implementation, after the terminal obtains the first acquisition coordinate of the Nth spatial acquisition point on the acquisition interface and, through image recognition, the first spatial attribute corresponding to the Nth spatial acquisition point, it may further obtain a trajectory scaling coefficient and a dragging scaling coefficient for the first acquisition coordinate; then, the reference coordinate, the first acquisition coordinate, the trajectory scaling coefficient and the dragging scaling coefficient are used to calculate a first display coordinate of the Nth spatial acquisition point on the graphical user interface, the Nth display acquisition point corresponding to the Nth spatial acquisition point is displayed in the acquisition interface based on the first display coordinate, and the current floor plan corresponding to the Nth spatial acquisition point is displayed in the acquisition interface according to the first display coordinate and the first spatial attribute.
The trajectory scaling coefficient may be a parameter calculated from the relationship between the data boundary coordinate rectangle and the display area rectangle, and is used to ensure that the acquisition track is displayed in full. The data boundary coordinate rectangle may be the coordinate rectangle bounding the data drawn on the graphical user interface, and the display area rectangle may be the region 220 shown in fig. 2; to ensure that the acquisition track can be displayed completely in the acquisition interface, a suitable scaling coefficient between the two rectangles is obtained through calculation. For example, assuming the screen width of the graphical user interface is 1080 pixels and the width of the data to be displayed is 100 units, the trajectory scaling coefficient may be 10.8. The dragging scaling coefficient may be a scaling parameter applied to the coordinates of the drawing as a whole (the acquisition track, the floor plan and so on) and may be used to control the size of the drawing; it changes as the display size of the drawing changes and may be 1 in the default display. The present invention is not limited in this respect.
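The 1080-pixel / 100-unit example can be reproduced as follows; taking the smaller of the width and height ratios is an assumption made here so that the acquisition track fits in both dimensions.

# Sketch: compute a trajectory scaling coefficient so that the data bounding
# rectangle fits inside the display area rectangle. Taking the smaller of the two
# axis ratios is an assumption made here to keep the whole trajectory visible.
def trajectory_scale(data_w: float, data_h: float,
                     display_w: float, display_h: float) -> float:
    return min(display_w / data_w, display_h / data_h)

# With a 1080-pixel-wide display area and data spanning 100 coordinate units in
# width (and, say, 150 units in height on a 1920-pixel-tall area):
print(trajectory_scale(100, 150, 1080, 1920))   # 10.8, matching the example above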
Specifically, for the first display coordinate, the first display coordinate can be calculated by the following formula:
display coordinate = reference coordinate + acquisition coordinate × trajectory scaling coefficient × dragging scaling coefficient
The reference coordinate is used to ensure that the acquisition track, the floor plan and the like can be displayed completely in the acquisition interface. The display coordinate of the Nth spatial acquisition point on the acquisition interface can be obtained through the above formula, and the terminal can then display the Nth display acquisition point corresponding to the Nth spatial acquisition point at the position of the first display coordinate in the acquisition interface, so as to mark the completion of one image acquisition.
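As a concrete illustration of the above formula (using the example values already given, with the dragging scaling coefficient left at its default of 1), the mapping might be computed as follows; the function name and default values are illustrative assumptions.

# Sketch: map an acquisition coordinate to a display coordinate on the acquisition
# interface using the reference coordinate and the two scaling coefficients.
def to_display_coordinate(acq, reference=(0.0, 0.0),
                          trajectory_scale=10.8, drag_scale=1.0):
    ax, ay = acq
    rx, ry = reference
    return (rx + ax * trajectory_scale * drag_scale,
            ry + ay * trajectory_scale * drag_scale)

# An Nth spatial acquisition point mapped to acquisition coordinate (5.0, 3.0)
# would be drawn at approximately (54.0, 32.4) relative to the reference point.
print(to_display_coordinate((5.0, 3.0)))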
In addition, the first spatial attribute may include a first contour parameter of the first target space corresponding to the first target image. While displaying the Nth display acquisition point, the terminal may determine, in the acquisition interface, a plurality of first vertexes corresponding to the Nth display acquisition point by using the first contour parameter and the first display coordinate, obtain the vertex display coordinates of the first vertexes, and then generate, in the acquisition interface, the current floor plan corresponding to the Nth display acquisition point by using the vertex display coordinates of the first vertexes and the first display coordinate.
Optionally, a space may include closed walls and walls with doors. The first spatial attribute therefore further includes the wall objects of the first target space and the position of each wall object in the first target space, where the wall objects include at least a first wall object and a second wall object, and the second wall object includes a door body object. In order to make the floor plan match the physical space more closely, the terminal may use the vertex display coordinates of the first vertexes and the first display coordinate to generate a first floor plan corresponding to the Nth display acquisition point, match the first wall objects and the second wall object with the first floor plan based on the first position of each first wall object in the first target space and the second position of the at least one second wall object in the first target space, and adjust the display of the first floor plan to obtain a second floor plan; the second floor plan is then displayed in the acquisition interface.
The first contour parameter may include the length information, width information, height information, contour information and the like of the first target space. The shape to be drawn on the acquisition interface can be determined through the contour information, and the display size of the shape can be determined through the length information, the width information and so on. Specifically, with reference to the distance mapping relationship, a plurality of first vertexes can be determined in the acquisition interface based on the first display coordinate. For example, referring to fig. 3, which shows a schematic diagram of coordinate points provided in the embodiment of the present invention: after the terminal completes panoramic image acquisition and recognition succeeds, it obtains the length information, width information, height information and the like of the corresponding first target space and, combined with the Nth spatial acquisition point, the distances between the terminal and each wall of the first target space; coordinate mapping can then be performed on these distances and on the length and width information to obtain the vertex display coordinates of the vertexes corresponding to the first display coordinate, as well as the length and width mapped onto the acquisition interface. Specifically, in the first part of fig. 3, the current acquisition display point a is displayed. In the second part, 4 vertexes (vertex b, vertex c, vertex d and vertex e) corresponding to the current acquisition display point are obtained according to the above process, where the sum of the distance between point a and vertex b and the distance between point a and vertex d is equal to the length of the first target space mapped onto the acquisition interface, and the sum of the distance between point a and vertex c and the distance between point a and vertex e is equal to the width of the first target space mapped onto the acquisition interface. In the third part of fig. 3, after each vertex is determined, rays are extended from each vertex towards both sides until they intersect the rays extended from the other vertexes, at which point the extension stops, thereby constructing the first floor plan. The first floor plan in the third part is in a closed state, and it has not yet been determined which lines represent closed walls and which line represents a wall with a door; the first floor plan can therefore be matched and adjusted based on the position of each wall object in the first target space (such as the positions of the door-wall boundary lines between the wall objects and the door body objects, the window-wall boundary lines between the wall objects and the window body objects, and so on). If a line represents a closed wall, the complete solid line is retained; if a line represents a wall with a door, a segment can be removed so as to match the physical space, as in the fourth part of fig. 3. In this way, the spatial acquisition point at which the terminal is located is obtained and mapped to corresponding coordinates, the corresponding display acquisition points, acquisition track and floor plan are obtained through calculation, the data processing is effectively simplified through coordinate operations, the system operation load is reduced, and the smoothness of the interface display is ensured; moreover, the floor plan is adjusted through the spatial attributes so that it expresses the spatial outline of the physical space, allowing the user to conveniently and intuitively see the floor plan outline of the acquired object during acquisition.
It should be noted that in the above example the floor plan is illustrated as a regular rectangle. It can be understood that when the floor plan is an irregular shape, the terminal may determine the corresponding vertexes according to the positional relationship between the Nth spatial acquisition point and each wall in the physical space; for example, if there are 5 walls in the physical space, 5 vertexes may be determined. The related process can refer to the foregoing description, and the present invention is not limited in this respect.
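Under the process just described for fig. 3, a regular rectangular floor plan can be derived from the acquisition point's display coordinate and its wall distances mapped to display units. The sketch below is an illustration only, under assumed axis conventions and parameter names; it is not the exact construction used by the terminal.

# Sketch: construct a rectangular floor plan around the Nth display acquisition
# point from its distances (already in display units) to each of the four walls.
def rectangle_floor_plan(display_point, d_left, d_right, d_up, d_down):
    ax, ay = display_point
    # Wall points b, c, d, e lie directly left/up/right/down of the acquisition
    # point; extending them along the walls yields the four rectangle corners.
    left, right = ax - d_left, ax + d_right
    top, bottom = ay - d_up, ay + d_down
    corners = [(left, top), (right, top), (right, bottom), (left, bottom)]
    length = d_left + d_right   # length of the space mapped onto the interface
    width = d_up + d_down       # width of the space mapped onto the interface
    return corners, length, width

corners, length, width = rectangle_floor_plan((50.0, 30.0), 20.0, 34.0, 12.0, 18.0)
print(corners)          # [(30.0, 18.0), (84.0, 18.0), (84.0, 48.0), (30.0, 48.0)]
print(length, width)    # 54.0 30.0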
Step 103, responding to the movement of a preset terminal, and acquiring a real-time position coordinate of the preset terminal;
During image acquisition, after completing image acquisition at one spatial acquisition point, the user can move with the handheld terminal in order to acquire images in another physical space or at another position in the same physical space. In this process, as the terminal moves, it can respond to its own movement by acquiring, in real time, the real-time spatial position point in the physical space and the corresponding real-time position coordinate. For the process of acquiring the real-time position coordinate, reference may be made to the foregoing description, which is not repeated here.
Step 104, displaying an acquisition track connected with the Nth display acquisition point in the acquisition interface according to the real-time position coordinate;
in the embodiment of the invention, the terminal can adopt the reference coordinate, the real-time position coordinate, the track zoom coefficient and the dragging zoom coefficient to calculate the real-time display coordinate of the position indication mark on the acquisition interface, then display the position indication mark in the acquisition interface according to the real-time display coordinate, and display the acquisition track connecting the position indication mark and the Nth display acquisition point.
In a specific implementation, different from the acquisition coordinates, the real-time display coordinates change in real time along with the movement of the terminal, and every two adjacent real-time display coordinates can be connected to form the acquisition track of the terminal during this change. Therefore, the terminal can perform coordinate mapping on the real-time position coordinates through the aforementioned formula to obtain, for each acquisition moment, the real-time display coordinate corresponding to the real-time position coordinate on the acquisition interface, namely the display position of the position indication mark on the acquisition interface, and connect the successive real-time display coordinates to form the acquisition track displayed in the acquisition interface during the movement of the terminal.
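The coordinate mapping formula itself is defined earlier in the description and is not reproduced in this section; the sketch below therefore assumes a simple affine form (the offset from the reference position point scaled by the track zoom coefficient and the dragging zoom coefficient, anchored at the reference display coordinate). The function and parameter names are illustrative assumptions, not the patent's API.

```python
def to_display(pos, ref_pos, ref_display, track_scale, drag_scale):
    """Map a real-time position coordinate to a real-time display coordinate.

    Assumed affine mapping: the offset of the position from the reference position
    point is scaled by the track zoom coefficient and the dragging zoom coefficient,
    then anchored at the reference coordinate on the acquisition interface.
    """
    dx = (pos[0] - ref_pos[0]) * track_scale * drag_scale
    dy = (pos[1] - ref_pos[1]) * track_scale * drag_scale
    return (ref_display[0] + dx, ref_display[1] + dy)

# Adjacent real-time display coordinates are joined to draw the acquisition track.
sampled_positions = [(0.0, 0.0), (0.4, 0.1), (0.9, 0.3)]  # hypothetical positions (meters)
track = [to_display(p, (0.0, 0.0), (160.0, 240.0), track_scale=20.0, drag_scale=1.0)
         for p in sampled_positions]
```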
In an optional embodiment, as the terminal moves, the terminal may obtain the coordinate distance between the real-time position coordinate and the reference coordinate of the reference position point; if the coordinate distance is greater than or equal to a preset distance threshold, the terminal calculates the distance ratio between the coordinate distance and the preset distance threshold, and then scales the current house type graph, or the target house type graph, in real time by using the distance ratio. Specifically, as the user holds the terminal and moves in the corresponding entity space, the acquisition track changes continuously; when the position indication mark moves beyond the boundary of the acquisition interface, the terminal can automatically zoom the currently displayed house type graph, so as to ensure that the acquisition track and the house type graph can be completely displayed in the acquisition interface.
Optionally, the preset distance threshold may be the distance value between the reference coordinate point and the boundary of the acquisition interface. When the coordinate distance between the real-time position coordinate of the position indication mark and the reference coordinate of the reference position point is greater than the distance threshold, the terminal may use the ratio between this coordinate distance and the preset distance threshold as a scaling coefficient to scale the house type graph, the acquisition track, and the like currently displayed on the acquisition interface, so as to ensure that the acquisition track and the house type graph can be completely displayed in the acquisition interface.
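As a rough sketch of this optional automatic zoom, the following Python function computes the distance ratio described above and applies it as a shrink factor; how the ratio is actually applied to the display is an assumption made here for illustration.

```python
def zoom_for_track(real_time_xy, ref_xy, threshold):
    """Turn the distance ratio described above into a display scale factor.

    The ratio (coordinate distance / preset distance threshold) is used as the
    scaling coefficient; applying its reciprocal to the displayed coordinates is one
    way to shrink the house type graph and acquisition track so that they remain
    fully visible inside the acquisition interface.
    """
    dx = real_time_xy[0] - ref_xy[0]
    dy = real_time_xy[1] - ref_xy[1]
    distance = (dx * dx + dy * dy) ** 0.5
    if distance < threshold:
        return 1.0                    # indicator still inside the boundary
    ratio = distance / threshold      # > 1 once the indicator passes the boundary
    return 1.0 / ratio                # shrink factor for the displayed content
```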
Step 105, responding to a second acquisition operation, determining an N +1 th spatial acquisition point corresponding to the second acquisition operation, and acquiring a second acquisition coordinate and a second spatial attribute corresponding to the N +1 th spatial acquisition point;
in the acquisition process, after the user finishes image acquisition at a certain space acquisition point, the user can hold the terminal and move to another entity space, or to another position in the same entity space, for image acquisition. When the user determines a new space acquisition point, a second acquisition operation can be input in the acquisition interface; the terminal determines the (N + 1) th space acquisition point corresponding to the second acquisition operation and performs panoramic image acquisition with the (N + 1) th space acquisition point as the origin to obtain a second target image. The second target image can then be identified to obtain a second space attribute of the second target space where the terminal is located, and a second acquisition coordinate corresponding to the (N + 1) th space acquisition point on the acquisition interface is obtained at the same time. The (N + 1) th space acquisition point is the next space acquisition point adjacent to the Nth space acquisition point.
Step 106, displaying an (N + 1) th display acquisition point corresponding to the (N + 1) th spatial acquisition point on the acquisition track in the acquisition interface according to the second acquisition coordinate and the second spatial attribute, and updating the current house type graph into a target house type graph corresponding to the Nth display acquisition point and the (N + 1) th display acquisition point.
In specific implementation, the terminal can adopt a reference coordinate, a second acquisition coordinate, a track scaling coefficient and a dragging scaling coefficient, calculate a second display coordinate of the (N + 1) th spatial acquisition point on the graphical user interface, and then display the (N + 1) th display acquisition point corresponding to the (N + 1) th spatial acquisition point on the acquisition track in the acquisition interface based on the second display coordinate, so that the display acquisition points for marking different acquisition positions are connected in series through the acquisition track, and a user can visually know the position where image acquisition is completed.
In addition, the second spatial attribute at least includes a second contour parameter of the second target space corresponding to the second target image, and then the terminal may determine a plurality of second vertices corresponding to the (N + 1) th display acquisition point in the acquisition interface by using the second display coordinates and the second contour parameter, then obtain vertex display coordinates of the second vertices, and then update the current house type graph to a target house type graph corresponding to the nth display acquisition point and the (N + 1) th display acquisition point by using the vertex display coordinates of each first vertex, the vertex display coordinates of each second vertex, the first display coordinates, and the second display coordinates.
In an example, the vertex display coordinates of each first vertex and the first display coordinates may be used to determine an initial house type graph corresponding to the Nth display acquisition point. On the basis of the initial house type graph, by using the coordinate processing manner of fig. 3 described above, a plurality of second vertices corresponding to the second display coordinates are determined in the acquisition interface by using the second contour parameters (length information, width information, and the like) and the second display coordinates, and a first house type graph corresponding to the second display coordinates is obtained. The display of the first house type graph is then adjusted according to the positions of the wall objects in the second target space to obtain a second house type graph corresponding to the second display coordinates, and the second house type graph is merged with the initial house type graph to obtain the updated target house type graph corresponding to the Nth display acquisition point and the (N + 1) th display acquisition point. In this way, each time the user performs an image acquisition, the house type graph is updated in real time, so that the user can intuitively and clearly know the outline of the acquired space during acquisition; the data processing process is effectively simplified in a coordinate operation manner, the system operation load is reduced, and the smoothness of interface display is ensured; and by displaying the acquisition track, the house type graph, and the like, the user can intuitively obtain the region position and house type outline of the acquisition object during the acquisition process.
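How the second house type graph is merged with the initial one is not spelled out in this section, so the following sketch simply keeps one outline per display acquisition point and redraws the stored outlines together after each acquisition; the data structure and names are assumptions for illustration.

```python
def update_house_type_graph(outlines_by_point, point_index, vertex_display_coords):
    """Record the outline for a display acquisition point and return the combined graph.

    `outlines_by_point` maps a display acquisition point index to its list of vertex
    display coordinates; redrawing every stored outline after each acquisition is one
    simple way to present the target house type graph covering the Nth and (N + 1)th
    display acquisition points.
    """
    outlines_by_point[point_index] = vertex_display_coords
    return list(outlines_by_point.values())

# Example: initial graph for point N, then the update after acquiring point N + 1.
graphs = {}
update_house_type_graph(graphs, 0, [(140, 210), (220, 210), (220, 290), (140, 290)])
target_graph = update_house_type_graph(graphs, 1, [(220, 210), (300, 210), (300, 270), (220, 270)])
```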
In addition, during image acquisition, the user may drag or zoom the acquisition interface to view more details of the displayed acquisition trajectory and user-type diagram.
For dragging of the acquisition interface, the terminal can respond to a dragging operation on the acquisition interface and determine touch information corresponding to the dragging operation, where the touch information includes a starting coordinate and an ending coordinate; the terminal then calculates an offset parameter corresponding to the dragging operation by using the starting coordinate and the ending coordinate, and adjusts, by using the reference coordinate and the offset parameter, the display positions of the Nth display acquisition point and the current house type graph on the acquisition interface, or the display positions of the Nth display acquisition point, the (N + 1) th display acquisition point, the acquisition track, and the target house type graph on the acquisition interface. Specifically, the user can long-press the acquisition interface, and the terminal acquires the starting coordinate of the touch point where the finger touches the screen; when the user inputs a sliding operation without lifting the finger, the terminal acquires the ending coordinate of the touch point on the screen when the sliding operation ends. The difference between the two coordinates is used as the offset parameter, a difference calculation is performed with the reference coordinate and the offset parameter to obtain the offset of the entire display content of the acquisition interface, and the display acquisition points, acquisition track, house type graph, and the like displayed in the acquisition interface are translated based on the offset. In this way, the data operation when the user drags the acquisition interface is simplified through coordinate operations, effectively ensuring the smoothness of dragging the acquisition interface.
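A minimal sketch of the drag handling described above, assuming the offset is simply the ending coordinate minus the starting coordinate and is added to every displayed coordinate; the function name and the example values are illustrative only.

```python
def apply_drag(display_points, start, end):
    """Translate displayed content (acquisition points, track, house type graph)
    by the offset of a drag gesture: offset = ending coordinate - starting coordinate."""
    dx = end[0] - start[0]
    dy = end[1] - start[1]
    return [(x + dx, y + dy) for (x, y) in display_points]

# Dragging from (100, 100) to (130, 80) pans everything 30 px right and 20 px up.
panned = apply_drag([(160.0, 240.0), (180.0, 240.0)], (100, 100), (130, 80))
```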
For zooming of the acquisition interface, the terminal can respond to a zooming operation on the acquisition interface, determine the scaling ratio corresponding to the zooming operation, and then scale the current house type graph, or the acquisition track and the target house type graph, by using the scaling ratio. For example, the zooming operation of the user can be based on the movement after a two-finger touch: when the user presses the screen with two fingers at the same time, the terminal obtains the initial coordinate difference between the two initial touch points; as the fingers move, the terminal calculates the target coordinate difference between the two fingers in real time, and the ratio of the two coordinate differences is used as the scaling ratio to determine whether the acquisition interface needs to be zoomed in or zoomed out. In this way, the data operation when the user zooms the acquisition interface is simplified through coordinate operations, effectively ensuring the smoothness of zooming the acquisition interface.
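A sketch of the two-finger zoom described above, assuming the scaling ratio is the current finger spacing divided by the initial spacing and is applied about a fixed center point; the names and the centering choice are assumptions for illustration.

```python
def pinch_scale(initial_a, initial_b, current_a, current_b):
    """Compute the zoom ratio of a two-finger gesture as the ratio of the current
    finger spacing to the initial spacing (> 1 zooms in, < 1 zooms out)."""
    def spacing(p, q):
        return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5

    initial = spacing(initial_a, initial_b)
    current = spacing(current_a, current_b)
    return current / initial if initial else 1.0

def apply_zoom(display_points, center, ratio):
    """Scale displayed coordinates about a center point by the pinch ratio."""
    return [(center[0] + (x - center[0]) * ratio,
             center[1] + (y - center[1]) * ratio) for (x, y) in display_points]
```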
It should be noted that the embodiments of the present invention include, but are not limited to, the above examples; it can be understood that those skilled in the art can also configure the implementation according to actual needs, and the present invention is not limited thereto.
In the embodiment of the invention, in the process of image acquisition by a user, when image acquisition is triggered, the terminal can determine the Nth space acquisition point and obtain the corresponding acquisition coordinate and the space attribute of the space where the terminal is located, and then display the corresponding Nth display acquisition point and the current house type graph in the acquisition interface according to the acquisition coordinate and the space attribute. As the user moves the terminal, the terminal can locate the current real-time space position point according to pose data acquired in real time and obtain the corresponding real-time position coordinate, and then display the acquisition track connected with the Nth display acquisition point in the acquisition interface according to the real-time position coordinate. When the user triggers image acquisition again, the terminal can determine the (N + 1) th space acquisition point and obtain the corresponding acquisition coordinate and space attribute, then display the corresponding (N + 1) th display acquisition point on the acquisition track according to the acquisition coordinate and the space attribute, and update the current house type graph into the target house type graph corresponding to each display acquisition point, until the image acquisition is completed. Therefore, in the image acquisition process, the space acquisition point where the terminal is located is obtained and mapped to corresponding coordinates, and the corresponding display acquisition points, acquisition track, and house type graph are obtained through calculation.
In order to make those skilled in the art better understand the technical solutions of the embodiments of the present invention, the following description is made with reference to an example.
Referring to fig. 4, which shows a schematic diagram of the acquisition interface provided in the embodiment of the present invention, a corresponding application program is run on the terminal, and the acquisition interface 40 is displayed in the graphical user interface. The acquisition interface 40 may include a position indication identifier 410, a real-time image stream 420, a function control area 430, and the like. As the user holds the terminal and moves in the physical space, the position indication identifier 410 may move, turn, and the like in the display area of the acquisition interface along with the movement/turning of the terminal, so as to represent the current position and acquisition orientation of the terminal; the real-time image stream 420 may be the image stream acquired by the terminal camera in real time; and the function control area 430 may include multiple interaction controls, with different functions executed through different interaction controls. Before the user performs image acquisition, only the position indication identifier is displayed in the acquisition interface. When the user triggers image acquisition for the first time, referring to fig. 5, the terminal can determine the corresponding space acquisition point, acquire the acquisition coordinate of the space acquisition point on the acquisition interface, and identify the acquired panoramic image to obtain the space attribute of the corresponding target space; then, according to the acquisition coordinate, the space attribute, the reference coordinate of the reference coordinate point, the track scaling coefficient, the dragging scaling coefficient, and the like, the terminal calculates the display acquisition point and the house type graph corresponding to the first display acquisition point, and displays the first display acquisition point 520 and the corresponding house type graph 510 in the acquisition interface 50. In addition, the acquisition interface 50 may further include an acquisition control window 530, and the acquisition control window 530 may include a functional space control 5301 (e.g., "dining room (1/2)", "master bedroom"), an acquisition point location control 5302 (e.g., optimal point location, etc.), and a point location addition control 5303 (e.g., other point location 1, etc.). Thus, in the acquisition process, the user may add a new acquisition point location through the point location addition control 5303, or may select an existing acquisition point location control 5302 to perform image acquisition at that point location again to replace the previously acquired image.
Then, after completing the first image acquisition, the user can hold the terminal and move to the next acquisition point. During the movement of the terminal, the corresponding real-time coordinates can be obtained in real time, and an acquisition track 620 is displayed in the acquisition interface according to the real-time coordinates; the starting point of the acquisition track can be the first acquisition point. During the movement of the terminal, the position indication identifier 610 can move, turn, and the like in the acquisition interface along with the movement of the terminal. In this way, by displaying the acquisition track, the house type graph, and the like, the user can intuitively and quickly learn the current image acquisition situation, including the rough outline of the acquisition object, the movement route, and the like.
After multiple acquisition operations, the terminal can draw the acquisition track corresponding to the acquisition task and the house type graph corresponding to all the acquired panoramic images. Referring to fig. 7, after the user finishes multiple image acquisitions, the terminal can display the corresponding display acquisition points (such as a living room, a toilet, a storage room, a kitchen, a living room, and the like) in the acquisition interface 70 in the order of the user's acquisition, together with the acquisition track connecting each display acquisition point and the house type graph corresponding to all the display acquisition points. In this process, the space acquisition point where the terminal is located is obtained and mapped to corresponding coordinates, and then the corresponding display acquisition points, acquisition track, and house type graph are obtained through calculation; the data processing process is effectively simplified in a coordinate operation manner, the system operation load is reduced, and the smoothness of interface display is ensured. By displaying the acquisition track, the house type graph, and the like, the user can intuitively obtain the region position and house type outline of the acquisition object during the acquisition process.
In addition, the user may also drag or zoom the content displayed in the acquisition interface to view more details, and the related implementation process may refer to the foregoing description, which is not described herein again.
It should be noted that, for simplicity of description, the method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present invention is not limited by the illustrated order of acts, as some steps may occur in other orders or concurrently in accordance with the embodiments of the present invention. Further, those skilled in the art will appreciate that the embodiments described in the specification are presently preferred and that no particular act is required to implement the invention.
Referring to fig. 8, a block diagram of a structure of an information display apparatus provided in the embodiment of the present invention is shown, a graphical user interface is provided through a preset terminal, content displayed on the graphical user interface at least includes an acquisition interface, and the information display apparatus may specifically include the following modules:
a first information obtaining module 801, configured to determine, in response to a first acquisition operation, an nth spatial acquisition point corresponding to the first acquisition operation, and obtain a first acquisition coordinate and a first spatial attribute corresponding to the nth spatial acquisition point;
a first content display module 802, configured to display, according to the first acquisition coordinate and the first spatial attribute, an nth display acquisition point corresponding to the nth spatial acquisition point and a current floor plan in the acquisition interface;
a real-time coordinate obtaining module 803, configured to obtain a real-time position coordinate of a preset terminal in response to a motion of the preset terminal;
the acquisition track display module 804 is configured to display an acquisition track connected to the nth display acquisition point in the acquisition interface according to the real-time position coordinate;
a second information obtaining module 805, configured to determine, in response to a second acquisition operation, an N +1 th spatial acquisition point corresponding to the second acquisition operation, and obtain a second acquisition coordinate and a second spatial attribute corresponding to the N +1 th spatial acquisition point;
a second content display module 806, configured to display, according to the second acquisition coordinate and the second spatial attribute, an N +1 th display acquisition point corresponding to the N +1 th spatial acquisition point on the acquisition track in the acquisition interface, and update the current house type graph to a target house type graph corresponding to the Nth display acquisition point and the N +1 th display acquisition point.
In an optional embodiment, the first information obtaining module 801 includes:
the image acquisition submodule is used for acquiring an image of a first target space where the preset terminal is located based on the Nth space acquisition point to obtain a first target image corresponding to the Nth space acquisition point;
and the image identification submodule is used for identifying the first target image and obtaining a first space attribute of the first target space.
In an alternative embodiment, the first content display module 802 comprises:
the reference coordinate acquisition submodule is used for acquiring a reference position point positioned on the acquisition interface and a reference coordinate of the reference position point;
the zooming coefficient acquisition submodule is used for acquiring a track zooming coefficient and a dragging zooming coefficient aiming at the first acquisition coordinate;
the first display coordinate calculation submodule is used for calculating a first display coordinate of the Nth space acquisition point on the graphical user interface by adopting the reference coordinate, the first acquisition coordinate, the track zoom coefficient and the dragging zoom coefficient;
the first display acquisition point determining submodule is used for displaying the Nth display acquisition point corresponding to the Nth spatial acquisition point in the acquisition interface based on the first display coordinate;
and the first house type graph display sub-module is used for displaying the current house type graph corresponding to the Nth spatial acquisition point in the acquisition interface according to the first display coordinate and the first spatial attribute.
In an optional embodiment, the first spatial attribute at least includes a first contour parameter of a first target space corresponding to the first target image, and the first house type diagram display sub-module is specifically configured to:
determining a plurality of first vertexes corresponding to the Nth display acquisition point in the acquisition interface by adopting the first contour parameter and the first display coordinate;
acquiring a vertex display coordinate of the first vertex;
and generating the current user-type graph corresponding to the Nth display acquisition point in the acquisition interface by adopting the vertex display coordinates of the first vertexes and the first display coordinates.
In an optional embodiment, the first spatial attribute further includes wall objects of the first target space and positions of the wall objects in the first target space, where the wall objects include at least a first wall object and a second wall object, and the second wall object includes a door body object; the first house type diagram display sub-module is specifically configured to:
generating a first house type graph corresponding to the Nth display acquisition point by adopting the vertex display coordinates of each first vertex and the first display coordinates;
matching the first wall object and the second wall object with the first house type diagram based on the first position of each first wall object in the first target space and the second position of at least one second wall object in the first target space, and adjusting the display of the first house type diagram to obtain a second house type diagram;
and displaying the second floor plan in the acquisition interface.
In an optional embodiment, the acquisition interface includes a position indicator, and the position indicator is used for indicating an acquisition direction of the preset terminal; the acquisition trajectory display module 804 includes:
the real-time display coordinate determination submodule is used for calculating the real-time display coordinate of the position indication mark on the acquisition interface by adopting the reference coordinate, the real-time position coordinate, the track zoom coefficient and the dragging zoom coefficient;
and the acquisition track display submodule is used for displaying the position indication mark in the acquisition interface according to the real-time display coordinate and displaying an acquisition track connecting the position indication mark and the Nth display acquisition point.
In an alternative embodiment, further comprising:
the coordinate distance acquisition module is used for responding to the movement of the preset terminal and acquiring the coordinate distance between the real-time position coordinate and the reference coordinate;
the distance ratio determining module is used for calculating the distance ratio between the coordinate distance and a preset distance threshold if the coordinate distance is greater than the preset distance threshold;
and the first zooming module is used for zooming the current house type graph in real time by adopting the distance ratio or zooming the target house type graph in real time.
In an optional embodiment, the second information obtaining module 805 includes:
the second display coordinate determination submodule is used for calculating a second display coordinate of the (N + 1) th space acquisition point on the graphical user interface by adopting the reference coordinate, the second acquisition coordinate, the track zoom coefficient and the dragging zoom coefficient;
and the second display acquisition point determining submodule is used for displaying the (N + 1) th display acquisition point corresponding to the (N + 1) th spatial acquisition point on the acquisition track in the acquisition interface based on the second display coordinate.
In an alternative embodiment, the second spatial attribute includes at least a second contour parameter of a second target space corresponding to a second target image, and the second content display module 806 includes:
the vertex determining submodule is used for determining a plurality of second vertexes corresponding to the (N + 1) th display acquisition point in the acquisition interface by adopting the second display coordinates and the second contour parameters;
the vertex display coordinate acquisition submodule is used for acquiring the vertex display coordinate of the second vertex;
and the second house type graph determining submodule is used for updating the current house type graph into a target house type graph corresponding to the Nth display acquisition point and the (N + 1) th display acquisition point by adopting the vertex display coordinates of each first vertex, the vertex display coordinates of each second vertex, the first display coordinates and the second display coordinates.
In an alternative embodiment, further comprising:
the touch information determining module is used for responding to dragging operation aiming at the acquisition interface and determining touch information corresponding to the dragging operation, wherein the touch information comprises a starting coordinate and an ending coordinate;
the offset parameter determining module is used for calculating an offset parameter corresponding to the dragging operation by adopting the starting coordinate and the ending coordinate;
and the interface translation module is used for adjusting the display positions of the Nth display acquisition point and the current user-type diagram on the acquisition interface or adjusting the display positions of the Nth display acquisition point, the target acquisition point, the acquisition track and the target user-type diagram on the acquisition interface by adopting the reference coordinates and the offset parameters.
In an alternative embodiment, further comprising:
the zooming ratio determining module is used for responding to zooming operation aiming at the acquisition interface and determining the zooming ratio corresponding to the zooming operation;
and the interface zooming module is used for zooming the current user-type graph or zooming the acquisition track and the target user-type graph by adopting the zooming coefficient.
For the device embodiment, since it is basically similar to the method embodiment, the description is simple, and for the relevant points, refer to the partial description of the method embodiment.
In addition, an embodiment of the present invention further provides an electronic device, as shown in fig. 9, which includes a processor 901, a communication interface 902, a memory 903, and a communication bus 904, where the processor 901, the communication interface 902, and the memory 903 complete mutual communication through the communication bus 904,
a memory 903 for storing computer programs;
the processor 901 is configured to implement the following steps when executing the program stored in the memory 903:
responding to a first acquisition operation, determining an Nth space acquisition point corresponding to the first acquisition operation, and acquiring a first acquisition coordinate and a first space attribute corresponding to the Nth space acquisition point;
displaying an Nth display acquisition point corresponding to the Nth spatial acquisition point and a current floor plan in the acquisition interface according to the first acquisition coordinate and the first spatial attribute;
responding to the movement of a preset terminal, and acquiring a real-time position coordinate of the preset terminal;
displaying an acquisition track connected with the Nth display acquisition point in the acquisition interface according to the real-time position coordinate;
responding to a second acquisition operation, determining an (N + 1) th space acquisition point corresponding to the second acquisition operation, and acquiring a second acquisition coordinate and a second space attribute corresponding to the (N + 1) th space acquisition point;
displaying an (N + 1) th display acquisition point corresponding to the (N + 1) th spatial acquisition point on the acquisition track in the acquisition interface according to the second acquisition coordinate and the second spatial attribute, and updating the current user type graph into a target user type graph corresponding to the Nth display acquisition point and the (N + 1) th display acquisition point.
In an optional embodiment, the obtaining the first spatial attribute corresponding to the nth spatial acquisition point includes:
based on the Nth space acquisition point, acquiring an image of a first target space where the preset terminal is located, and acquiring a first target image corresponding to the Nth space acquisition point;
and identifying the first target image to obtain a first space attribute of the first target space.
In an optional embodiment, the displaying, in the acquisition interface, an nth displayed acquisition point corresponding to the nth spatial acquisition point and a current user-type diagram according to the first acquisition coordinate and the first spatial attribute includes:
acquiring a reference position point positioned on the acquisition interface and a reference coordinate of the reference position point;
acquiring a track scaling coefficient and a dragging scaling coefficient aiming at the first acquisition coordinate;
calculating a first display coordinate of the Nth space acquisition point on the graphical user interface by adopting the reference coordinate, the first acquisition coordinate, the track zoom coefficient and the dragging zoom coefficient;
displaying an Nth display and acquisition point corresponding to the Nth spatial acquisition point in the acquisition interface based on the first display coordinate;
and displaying the current user type graph corresponding to the Nth spatial acquisition point in the acquisition interface according to the first display coordinate and the first spatial attribute.
In an optional embodiment, the first spatial attribute at least includes a first contour parameter of a first target space corresponding to a first target image, and the displaying, in the acquisition interface, a current user-type diagram corresponding to the nth spatial acquisition point according to the first display coordinate and the first spatial attribute includes:
determining a plurality of first vertexes corresponding to the Nth display acquisition point in the acquisition interface by adopting the first contour parameter and the first display coordinate;
acquiring a vertex display coordinate of the first vertex;
and generating the current user-type graph corresponding to the Nth display acquisition point in the acquisition interface by adopting the vertex display coordinates of the first vertexes and the first display coordinates.
In an optional embodiment, the first spatial attribute further includes wall objects of the first target space and positions of the wall objects in the first target space, where the wall objects include at least a first wall object and a second wall object, and the second wall object includes a door body object; generating the current user-type graph corresponding to the Nth display acquisition point in the acquisition interface by adopting the vertex display coordinates and the first display coordinates of each first vertex, wherein the method comprises the following steps:
generating a first house type graph corresponding to the Nth display acquisition point by adopting the vertex display coordinates of each first vertex and the first display coordinates;
matching the first wall object and the second wall object with the first house type diagram based on the first position of each first wall object in the first target space and the second position of at least one second wall object in the first target space, and adjusting the display of the first house type diagram to obtain a second house type diagram;
and displaying the second floor plan in the acquisition interface.
In an optional embodiment, the acquisition interface includes a position indicator, and the position indicator is used for indicating an acquisition direction of the preset terminal; the displaying, according to the real-time position coordinates, a collection trajectory connecting the nth display collection point in the collection interface includes:
calculating real-time display coordinates of the position indication mark on the acquisition interface by adopting the reference coordinates, the real-time position coordinates, the track scaling coefficient and the dragging scaling coefficient;
and displaying the position indication mark in the acquisition interface according to the real-time display coordinate, and displaying an acquisition track connecting the position indication mark and the Nth display acquisition point.
In an alternative embodiment, further comprising:
responding to the movement of the preset terminal, and acquiring a coordinate distance between the real-time position coordinate and the reference coordinate;
if the coordinate distance is larger than a preset distance threshold, calculating a distance ratio between the coordinate distance and the preset distance threshold;
and adopting the distance ratio to zoom the current house type graph in real time or zoom the target house type graph in real time.
In an optional embodiment, the displaying, on the acquisition trajectory in the acquisition interface, an N +1 th display acquisition point corresponding to the N +1 th spatial acquisition point according to the second acquisition coordinate and the second spatial attribute includes:
calculating a second display coordinate of the (N + 1) th space acquisition point on the graphical user interface by adopting the reference coordinate, the second acquisition coordinate, the track zoom coefficient and the dragging zoom coefficient;
displaying the (N + 1) th display and acquisition point corresponding to the (N + 1) th spatial acquisition point on the acquisition track in the acquisition interface based on the second display coordinate.
In an optional embodiment, the second spatial attribute includes at least a second contour parameter of a second target space corresponding to a second target image, and the updating the current user-type diagram to a target user-type diagram corresponding to the nth display acquisition point and the (N + 1) th display acquisition point includes:
determining a plurality of second vertexes corresponding to the (N + 1) th display acquisition point in the acquisition interface by adopting the second display coordinates and the second contour parameters;
acquiring a vertex display coordinate of the second vertex;
and updating the current house type graph into a target house type graph corresponding to the Nth display acquisition point and the (N + 1) th display acquisition point by adopting the vertex display coordinates of the first vertexes, the vertex display coordinates of the second vertexes, the first display coordinates and the second display coordinates.
In an alternative embodiment, further comprising:
responding to dragging operation aiming at the acquisition interface, and determining touch information corresponding to the dragging operation, wherein the touch information comprises a starting coordinate and an ending coordinate;
calculating an offset parameter corresponding to the dragging operation by adopting the starting coordinate and the ending coordinate;
and adjusting the display positions of the Nth display acquisition point and the current house type graph on the acquisition interface or adjusting the display positions of the Nth display acquisition point, the target acquisition point, the acquisition track and the target house type graph on the acquisition interface by adopting the reference coordinates and the offset parameters.
In an alternative embodiment, further comprising:
responding to the zooming operation aiming at the acquisition interface, and determining the zooming proportion corresponding to the zooming operation;
and zooming the current user-type graph or zooming the acquisition track and the target user-type graph by adopting the zooming coefficient.
The communication bus mentioned in the above terminal may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The communication bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one thick line is shown, but this does not mean that there is only one bus or one type of bus.
The communication interface is used for communication between the terminal and other equipment.
The Memory may include a Random Access Memory (RAM) or a non-volatile Memory (non-volatile Memory), such as at least one disk Memory. Optionally, the memory may also be at least one memory device located remotely from the processor.
The Processor may be a general-purpose Processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; it may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
As shown in fig. 10, in a further embodiment provided by the present invention, there is also provided a computer-readable storage medium 1001 having instructions stored therein, which, when run on a computer, cause the computer to perform the information display method described in the above embodiments.
In yet another embodiment provided by the present invention, there is also provided a computer program product containing instructions which, when run on a computer, cause the computer to perform the method of displaying information described in the above embodiments.
In the above embodiments, the implementation may be wholly or partially realized by software, hardware, firmware, or any combination thereof. When implemented in software, may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When loaded and executed on a computer, cause the processes or functions described in accordance with the embodiments of the invention to occur, in whole or in part. The computer may be a general purpose computer, a special purpose computer, a network of computers, or other programmable device. The computer instructions may be stored in a computer readable storage medium or transmitted from one computer readable storage medium to another, for example, from one website site, computer, server, or data center to another website site, computer, server, or data center via wired (e.g., coaxial cable, fiber optic, Digital Subscriber Line (DSL)) or wireless (e.g., infrared, wireless, microwave, etc.). The computer-readable storage medium can be any available medium that can be accessed by a computer or a data storage device, such as a server, a data center, etc., that incorporates one or more of the available media. The usable medium may be a magnetic medium (e.g., floppy Disk, hard Disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., Solid State Disk (SSD)), among others.
It is noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
All the embodiments in the present specification are described in a related manner, and the same and similar parts among the embodiments may be referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, for the system embodiment, since it is substantially similar to the method embodiment, the description is simple, and for the relevant points, reference may be made to the partial description of the method embodiment.
The above description is only for the preferred embodiment of the present invention, and is not intended to limit the scope of the present invention. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention shall fall within the protection scope of the present invention.

Claims (14)

1. A method for displaying information is characterized in that a preset terminal provides a graphical user interface, the content displayed on the graphical user interface at least comprises a collection interface, and the method comprises the following steps:
responding to a first acquisition operation, determining an Nth space acquisition point corresponding to the first acquisition operation, and acquiring a first acquisition coordinate and a first space attribute corresponding to the Nth space acquisition point;
displaying an Nth display acquisition point corresponding to the Nth spatial acquisition point and a current floor plan in the acquisition interface according to the first acquisition coordinate and the first spatial attribute;
responding to the movement of a preset terminal, and acquiring a real-time position coordinate of the preset terminal;
displaying an acquisition track connected with the Nth display acquisition point in the acquisition interface according to the real-time position coordinate;
responding to a second acquisition operation, determining an (N + 1) th space acquisition point corresponding to the second acquisition operation, and acquiring a second acquisition coordinate and a second space attribute corresponding to the (N + 1) th space acquisition point;
displaying an (N + 1) th display acquisition point corresponding to the (N + 1) th spatial acquisition point on the acquisition track in the acquisition interface according to the second acquisition coordinate and the second spatial attribute, and updating the current user type graph into a target user type graph corresponding to the Nth display acquisition point and the (N + 1) th display acquisition point.
2. The method of claim 1, wherein said obtaining the first spatial attribute corresponding to the nth spatial acquisition point comprises:
based on the Nth space acquisition point, acquiring an image of a first target space where the preset terminal is located, and acquiring a first target image corresponding to the Nth space acquisition point;
and identifying the first target image to obtain a first space attribute of the first target space.
3. The method of claim 1, wherein displaying in the acquisition interface an nth displayed acquisition point corresponding to the nth spatial acquisition point and a current user-type graph according to the first acquisition coordinate and the first spatial attribute comprises:
acquiring a reference position point positioned on the acquisition interface and a reference coordinate of the reference position point;
acquiring a track scaling coefficient and a dragging scaling coefficient aiming at the first acquisition coordinate;
calculating a first display coordinate of the Nth space acquisition point on the graphical user interface by adopting the reference coordinate, the first acquisition coordinate, the track zoom coefficient and the dragging zoom coefficient;
displaying an Nth display and acquisition point corresponding to the Nth spatial acquisition point in the acquisition interface based on the first display coordinate;
and displaying the current user type graph corresponding to the Nth spatial acquisition point in the acquisition interface according to the first display coordinate and the first spatial attribute.
4. The method of claim 3, wherein the first spatial attribute comprises at least a first contour parameter of a first target space corresponding to a first target image, and wherein displaying the current user-type diagram corresponding to the Nth spatial acquisition point in the acquisition interface according to the first display coordinate and the first spatial attribute comprises:
determining a plurality of first vertexes corresponding to the Nth display acquisition point in the acquisition interface by adopting the first contour parameter and the first display coordinate;
acquiring a vertex display coordinate of the first vertex;
and generating the current user-type graph corresponding to the Nth display acquisition point in the acquisition interface by adopting the vertex display coordinates of the first vertexes and the first display coordinates.
5. The method of claim 4, wherein the first spatial attribute further comprises wall objects of the first target space and a location of each of the wall objects in the first target space, the wall objects comprising at least a first wall object and a second wall object, the second wall object comprising a door object; generating the current user-type graph corresponding to the Nth display acquisition point in the acquisition interface by adopting the vertex display coordinates and the first display coordinates of each first vertex, wherein the method comprises the following steps:
generating a first house type graph corresponding to the Nth display acquisition point by adopting the vertex display coordinates of each first vertex and the first display coordinates;
matching the first wall object and the second wall object with the first house type diagram based on the first position of each first wall object in the first target space and the second position of at least one second wall object in the first target space, and adjusting the display of the first house type diagram to obtain a second house type diagram;
and displaying the second floor plan in the acquisition interface.
6. The method according to claim 3, wherein the acquisition interface comprises a position indicator, and the position indicator is used for indicating the acquisition direction of the preset terminal; the displaying, according to the real-time position coordinates, a collection trajectory connecting the nth display collection point in the collection interface includes:
calculating real-time display coordinates of the position indication mark on the acquisition interface by adopting the reference coordinates, the real-time position coordinates, the track scaling coefficient and the dragging scaling coefficient;
and displaying the position indication mark in the acquisition interface according to the real-time display coordinate, and displaying an acquisition track connecting the position indication mark and the Nth display acquisition point.
7. The method of claim 6, further comprising:
responding to the movement of the preset terminal, and acquiring a coordinate distance between the real-time position coordinate and the reference coordinate;
if the coordinate distance is larger than a preset distance threshold, calculating a distance ratio between the coordinate distance and the preset distance threshold;
and adopting the distance ratio to zoom the current house type graph in real time or zoom the target house type graph in real time.
8. The method of claim 3, wherein said displaying an N +1 th display acquisition point corresponding to said N +1 th spatial acquisition point on said acquisition trajectory in said acquisition interface according to said second acquisition coordinate and said second spatial attribute comprises:
calculating a second display coordinate of the (N + 1) th space acquisition point on the graphical user interface by adopting the reference coordinate, the second acquisition coordinate, the track zoom coefficient and the dragging zoom coefficient;
displaying the (N + 1) th display and acquisition point corresponding to the (N + 1) th spatial acquisition point on the acquisition track in the acquisition interface based on the second display coordinate.
9. The method of claim 3, wherein the second spatial attribute comprises at least a second contour parameter of a second target space corresponding to a second target image, and wherein the updating the current house type graph to a target house type graph corresponding to the Nth display acquisition point and the (N + 1) th display acquisition point comprises:
determining a plurality of second vertexes corresponding to the (N + 1) th display acquisition point in the acquisition interface by adopting the second display coordinates and the second contour parameters;
acquiring a vertex display coordinate of the second vertex;
and updating the current house type graph into a target house type graph corresponding to the Nth display acquisition point and the (N + 1) th display acquisition point by adopting the vertex display coordinates of the first vertexes, the vertex display coordinates of the second vertexes, the first display coordinates and the second display coordinates.
10. The method of claim 3, further comprising:
responding to dragging operation aiming at the acquisition interface, and determining touch information corresponding to the dragging operation, wherein the touch information comprises a starting coordinate and an ending coordinate;
calculating an offset parameter corresponding to the dragging operation by adopting the starting coordinate and the ending coordinate;
and adjusting the display positions of the Nth display acquisition point and the current house type graph on the acquisition interface or adjusting the display positions of the Nth display acquisition point, the target acquisition point, the acquisition track and the target house type graph on the acquisition interface by adopting the reference coordinates and the offset parameters.
11. The method of claim 3, further comprising:
responding to the zooming operation aiming at the acquisition interface, and determining the zooming proportion corresponding to the zooming operation;
and zooming the current user-type graph or zooming the acquisition track and the target user-type graph by adopting the zooming coefficient.
12. An information display device is characterized in that a preset terminal provides a graphical user interface, the content displayed on the graphical user interface at least comprises a collection interface, and the device comprises:
the first information acquisition module is used for responding to a first acquisition operation, determining an Nth space acquisition point corresponding to the first acquisition operation, and acquiring a first acquisition coordinate and a first space attribute corresponding to the Nth space acquisition point;
the first content display module is used for displaying an Nth display acquisition point corresponding to the Nth spatial acquisition point and a current user-type graph in the acquisition interface according to the first acquisition coordinate and the first spatial attribute;
the real-time coordinate acquisition module is used for responding to the movement of a preset terminal and acquiring the real-time position coordinate of the preset terminal;
the acquisition track display module is used for displaying an acquisition track connected with the Nth display acquisition point in the acquisition interface according to the real-time position coordinate;
the second information acquisition module is used for responding to a second acquisition operation, determining an (N + 1) th space acquisition point corresponding to the second acquisition operation, and acquiring a second acquisition coordinate and a second space attribute corresponding to the (N + 1) th space acquisition point;
and the second content display module is used for displaying the (N + 1) th display acquisition point corresponding to the (N + 1) th spatial acquisition point on the acquisition track in the acquisition interface according to the second acquisition coordinate and the second spatial attribute, and updating the current house type graph into a target house type graph corresponding to the Nth display acquisition point and the (N + 1) th display acquisition point.
13. An electronic device, comprising a processor, a communication interface, a memory and a communication bus, wherein the processor, the communication interface and the memory communicate with each other via the communication bus;
the memory is used for storing a computer program;
the processor, when executing a program stored on the memory, implementing the method of any of claims 1-11.
14. A computer-readable storage medium having stored thereon instructions, which when executed by one or more processors, cause the processors to perform the method of any one of claims 1-11.
CN202111670093.5A 2021-12-30 2021-12-30 Information display method and device, electronic equipment and storage medium Active CN114416244B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111670093.5A CN114416244B (en) 2021-12-30 2021-12-30 Information display method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111670093.5A CN114416244B (en) 2021-12-30 2021-12-30 Information display method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN114416244A (en) 2022-04-29
CN114416244B (en) 2023-04-07

Family

ID=81271174

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111670093.5A Active CN114416244B (en) 2021-12-30 2021-12-30 Information display method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114416244B (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110769240A (en) * 2019-08-23 2020-02-07 上海亦我信息技术有限公司 Photographing-based 3D modeling system and method, and automatic 3D modeling device and method
CN110686648A (en) * 2019-09-06 2020-01-14 平安城市建设科技(深圳)有限公司 Method, device and equipment for generating house type graph based on image detection and storage medium
CN111096714A (en) * 2019-12-25 2020-05-05 江苏美的清洁电器股份有限公司 Control system and method of sweeping robot and sweeping robot
CN113706338A (en) * 2020-06-22 2021-11-26 天翼智慧家庭科技有限公司 Method for automatically detecting house type structure by using Bluetooth supporting equipment
CN112907599A (en) * 2021-02-22 2021-06-04 佳木斯大学 Measuring device for indoor design

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115830162A (en) * 2022-11-21 2023-03-21 北京城市网邻信息技术有限公司 Home map display method and device, electronic equipment and storage medium
CN115861039A (en) * 2022-11-21 2023-03-28 北京城市网邻信息技术有限公司 Information display method, device, equipment and medium
CN115830162B (en) * 2022-11-21 2023-11-14 北京城市网邻信息技术有限公司 House type diagram display method and device, electronic equipment and storage medium
CN116721237A (en) * 2023-06-09 2023-09-08 北京优贝卡科技有限公司 House type wall editing method, device, equipment and storage medium

Also Published As

Publication number Publication date
CN114416244B (en) 2023-04-07

Similar Documents

Publication Publication Date Title
CN114416244B (en) Information display method and device, electronic equipment and storage medium
US11252329B1 (en) Automated determination of image acquisition locations in building interiors using multiple data capture devices
US10803659B2 (en) Automatic three-dimensional solid modeling method and program based on two-dimensional drawing
US10530997B2 (en) Connecting and using building interior data acquired from mobile devices
US11632602B2 (en) Automated determination of image acquisition locations in building interiors using multiple data capture devices
US11645781B2 (en) Automated determination of acquisition locations of acquired building images based on determined surrounding room data
US20140248950A1 (en) System and method of interaction for mobile devices
US8488040B2 (en) Mobile and server-side computational photography
CN111623755B (en) Enabling automatic measurements
JP7085812B2 (en) Image processing device and its control method
US20170374268A1 (en) Focusing point determining method and apparatus
JP6593922B2 (en) Image surveillance system
CN111986229A (en) Video target detection method, device and computer system
KR20180029690A (en) Server and method for providing and producing virtual reality image about inside of offering
JP6410427B2 (en) Information processing apparatus, information processing method, and program
CN110636204B (en) Face snapshot system
CN115049757B (en) Building information determining method and device and electronic equipment
CN108055456B (en) Texture acquisition method and device
CN107102794B (en) Operation processing method and device
CN113658276B (en) Gun and ball calibration data acquisition method, gun and ball calibration method and device and electronic equipment
CN113890990A (en) Prompting method and device in information acquisition process, electronic equipment and readable medium
CN113592918A (en) Image processing method and device, electronic equipment and computer readable storage medium
CN113253622A (en) HomeMap-based network environment visualization control method and system
CN114500831A (en) Prompting method and device in image acquisition process, electronic equipment and storage medium
CN113253623A (en) HomeMap-based air environment visualization control method and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant