CN109388457B - Multi-scene remote rapid interface interaction method and device - Google Patents

Multi-scene remote rapid interface interaction method and device

Info

Publication number
CN109388457B (granted from application CN201811118387.5A)
Authority
CN
China
Prior art keywords
interface
image
primitive
remote host
Prior art date
Legal status (assumed; not a legal conclusion)
Active
Application number
CN201811118387.5A
Other languages
Chinese (zh)
Other versions
CN109388457A (en)
Inventor
杨立群
Current Assignee (the listed assignee may be inaccurate)
Individual
Original Assignee
Individual
Priority date (assumed; not a legal conclusion)
Filing date
Publication date
Application filed by Individual
Priority to CN201811118387.5A
Publication of CN109388457A
Application granted
Publication of CN109388457B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44 Arrangements for executing specific programs
    • G06F 9/451 Execution arrangements for user interfaces
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0484 Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 Network arrangements or protocols for supporting network services or applications
    • H04L 67/01 Protocols
    • H04L 67/02 Protocols based on web technology, e.g. hypertext transfer protocol [HTTP]


Abstract

The invention relates to a multi-scene remote rapid interface interaction scheme, comprising the following steps: S100, establishing an association between the local device and the remote host, and providing a plurality of interactive scenes; S200, selecting a corresponding interactive scene according to the current operation, adjusting the partition mode of block objects, identifying block objects in the dynamic area at the remote host through a comparison algorithm, and extracting and recording the position information of each block object and its primitives; S300, following an input instruction from the local device, calculating the transformation parameters of the block objects or primitives affected by the input instruction in the interface image; S400, using the image library and buffer of the local device, performing primitive transformation locally based on the current interface image and the transformation parameters of primitives affected by the input instruction; and S500, fine-tuning the interface images of the fixed area and the dynamic area to synchronize the interface images of the local device and the remote host. This scheme optimizes remote interface interaction in a targeted manner across multiple scenes and improves operational smoothness.

Description

Multi-scene remote rapid interface interaction method and device
Technical Field
The invention relates to the field of computer communication, in particular to a multi-scene remote rapid interface interaction method and device.
Background
Remote control is a commonly used technique in the field of computer technology. For example, during software development and testing, a developer typically needs to install multiple virtual machines on a remote host such as a test server or a mobile terminal. Each virtual machine can run an operating-system instance with different environment parameters, so as to verify how the software behaves under various configurations. Testers and developers can then log in to the remote host remotely to execute specific test tasks. With this scheme, personnel and physical equipment need not be in the same workplace, and remote login makes it convenient to verify, reproduce and regression-test various problems during software development, bringing convenience to development and testing.
However, all such remote control solutions depend on the network connection status between the local device and the remote host. After the client accesses the graphical console of a virtual machine over the Internet, the refreshed page of the virtual machine is returned to the client. Because the refreshed page must be sent back to the client in its entirety, significant network delay occurs on the local device when the network connection is poor or multiple local devices connect to the remote host concurrently. Operating the remote host from the local device then becomes sluggish, and the user cannot see the result of instructions sent to the remote host in real time, which degrades the practical effect of remote control.
Disclosure of Invention
The invention aims to overcome the defects of the prior art by providing a multi-scene rapid interface interaction method and device for remote operation, improving the fluency with which a local device remotely operates a remote host.
The first aspect of the technical scheme of the invention is a multi-scene remote rapid interface interaction method, comprising the following steps:
S100, establishing interfaces and channels related to interface images between the local device and the remote host, and providing interactive scenes for a plurality of remote interfaces;
S200, selecting a corresponding interactive scene according to the current operation, adjusting the partition mode of block objects, identifying block objects in the dynamic area at the remote host through a comparison algorithm, extracting and recording the position information of each block object and its primitives, and then acquiring the IDs of the elements corresponding to the block object and its primitives in the image library/buffer;
S300, following an input instruction of the local device, calculating the position and scaling transformation parameters of the block objects or primitives affected by the input instruction in the interface image;
S400, using the image library and buffer of the local device, locally transforming the primitives of the dynamic area based on the current interface image and the transformation parameters of primitives affected by the input instruction, and refreshing the interface image;
S500, the local device receives the image, primitive position or image-library ID data sent by the remote host, and finely adjusts the interface images of the fixed area and the dynamic area to synchronize the interface images of the local device and the remote host.
Further, step S200 includes: dividing block objects in a preset partition mode according to the selected interface interaction scene and/or the type of the currently running application, wherein the preset partition modes include horizontal-cut dynamic areas, column-cut dynamic areas, text separation or vector-diagram separation, and playing-image separation.
Further, step S200 includes: when the scene type received by the remote host is the editing type, the remote host directly recognizes the characters or text and sends them to the local device.
Further, step S200 includes: when the scene type received by the remote host is the browsing type, the remote host cuts the browsing-area image within the interface image horizontally or vertically, based on the line height of the text in the browsing area, into a plurality of objects that can be identified and stored in the image library.
Further, step S200 further includes: when the scene type received by the remote host is the graphic-drawing type, vectorizing the drawn graphic, storing it in the buffer, and transmitting it together with the other objects in the interface image to the local device.
Further, in step S200: the ID of the element corresponding to the whole block object is first looked up in the image library or buffer; if no corresponding element ID is found, the ID of the element corresponding to each primitive within the block object is looked up in the image library or buffer. The comparison algorithm comprises a hash comparison algorithm or a neural-network comparison and recognition algorithm.
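The two-stage lookup just described (match the whole block first, then fall back to per-primitive matching) can be sketched as follows. A content hash stands in for the patent's unspecified comparison algorithm, and all names and structures are illustrative assumptions:

```python
import hashlib

def element_id(pixels: bytes) -> str:
    """Content hash used as the lookup key for an element (illustrative)."""
    return hashlib.sha256(pixels).hexdigest()[:16]

def match_block(block: bytes, primitives: list,
                image_library: dict, buffer: dict):
    """Try to match the whole block object first; fall back to per-primitive
    lookup. Unknown primitives are staged in the buffer as new elements.
    Returns (kind, ids) where kind is 'block' or 'primitives'."""
    bid = element_id(block)
    if bid in image_library or bid in buffer:
        return 'block', [bid]
    ids = []
    for p in primitives:
        pid = element_id(p)
        if pid not in image_library and pid not in buffer:
            buffer[pid] = len(buffer)  # stage unknown primitive as a new element
        ids.append(pid)
    return 'primitives', ids
```

A hit on the whole block lets a single ID describe the entire region; the per-primitive fallback only transmits IDs for sub-images that are already shared.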
Further, step S300 includes: acquiring the event and action parameters of an input instruction, and determining the block object or primitive associated with the event; converting the action parameters into corresponding position and scaling transformation parameters of the block object or primitive, with the execution time of the action configured to be consistent with the transformation time of the block object or primitive.
Further, in step S300, the events of an input instruction include key clicks, zooming, dragging and scrolling of the input device, or focus switching of an application.
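The conversion of action parameters into position/scaling transformation parameters can be sketched as a small mapping function; the event names, parameter fields and transform shape below are illustrative assumptions, not a format defined by the patent:

```python
def action_to_transform(event: str, params: dict) -> dict:
    """Map an input-device event to a position/scale transform for the
    affected block object or primitive (illustrative sketch of step S300)."""
    if event == 'drag':
        return {'dx': params['dx'], 'dy': params['dy'], 'scale': 1.0}
    if event == 'scroll':
        # Scrolling n lines shifts the dynamic area vertically by n * line_height.
        return {'dx': 0, 'dy': -params['lines'] * params['line_height'], 'scale': 1.0}
    if event == 'zoom':
        return {'dx': 0, 'dy': 0, 'scale': params['factor']}
    # Click / focus switch: no geometric change to the primitive itself.
    return {'dx': 0, 'dy': 0, 'scale': 1.0}
```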
Further, step S400 includes: within the time period of executing the instruction event, acquiring additional primitives associated with the dynamic area of the current interface image from the local buffer according to the transformation parameters of the block object or primitive, and adding those primitives to perform the interface transformation of the dynamic area.
Further, step S400 includes: if no block object or corresponding primitive can be obtained to execute the interface transformation, keeping the dynamic area of the current interface unchanged while the buffer waits to receive the remote interface image data.
Further, step S500 includes: transmitting the instruction to the remote host, executing the instruction event remotely, refreshing the interface, and determining the actual position of the corresponding primitive; judging whether the difference between the position and scaling values of the block object or primitive on the local device and on the remote host exceeds a preset threshold; and if the threshold is exceeded, correcting the primitive position locally and buffering the primitive in the buffer of the local device to replace the primitive missing from the interface image of the local device.
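The threshold check and local correction in step S500 can be sketched as a reconciliation step between the locally predicted state and the authoritative remote state. Threshold values and field names are illustrative assumptions:

```python
def needs_correction(local: dict, remote: dict,
                     pos_threshold: int = 2, scale_threshold: float = 0.01) -> bool:
    """Compare the locally predicted primitive state with the authoritative
    state reported by the remote host (sketch of the S500 threshold test)."""
    dx = abs(local['x'] - remote['x'])
    dy = abs(local['y'] - remote['y'])
    ds = abs(local['scale'] - remote['scale'])
    return dx > pos_threshold or dy > pos_threshold or ds > scale_threshold

def reconcile(local: dict, remote: dict) -> dict:
    """If divergence exceeds the threshold, snap the local primitive to the
    remote position; otherwise keep the local prediction untouched."""
    return dict(remote) if needs_correction(local, remote) else local
```

Keeping small divergences uncorrected avoids visible jitter; only differences above the threshold trigger a local fix-up.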
A second aspect of the present invention is a multi-scene remote rapid interface interaction apparatus, comprising: a first module for establishing interfaces and channels related to interface images between the local device and the remote host, and providing interactive scenes for a plurality of remote interfaces;
a second module for selecting a corresponding interactive scene according to the current operation, adjusting the partition mode of block objects, identifying block objects in the dynamic area at the remote host through a comparison algorithm, extracting and recording the position information of each block object and its primitives, and then acquiring the IDs of the elements corresponding to the block object and its primitives in the image library/buffer;
a third module for following an input instruction of the local device and calculating the position and scaling transformation parameters of the block objects or primitives affected by the input instruction in the interface image;
a fourth module for locally transforming the primitives of the dynamic area based on the current interface image and the transformation parameters of primitives affected by the input instruction, using the image library and buffer of the local device, and refreshing the interface image;
and a fifth module by which the local device receives the image, primitive position or image-library ID data sent by the remote host, finely adjusts the interface images of the fixed area and the dynamic area, and synchronizes the interface images of the local device and the remote host.
A third aspect of the present invention is a computer-readable storage medium having stored thereon computer instructions which, when executed by a processor, implement the steps of the method as described above.
The invention has the beneficial effects that: when remotely operating a computer, the interface image of the remote host can be partitioned into blocks according to the specific application scene, improving image acquisition efficiency; and the interface image can be generated locally first, directly from the instructions of the local input device, reducing the amount of remotely transmitted data and thereby improving the fluency of interface interaction and the real-time performance of transmission.
Drawings
FIG. 1 shows a general flow diagram of a method according to the invention;
FIG. 2 is a detailed flow chart of a method according to the present invention;
FIG. 3 is a flow chart illustrating the cooperation between a local device and a remote host according to the method of the present invention;
FIG. 4 is a block diagram illustrating the structure between a local device and a remote host according to the method of the present invention;
FIG. 5 is a schematic diagram illustrating interactive scene selection and optimization in accordance with an embodiment of the present invention;
fig. 6-12 are schematic diagrams illustrating various embodiments of operations according to the present invention.
Detailed Description
The conception, the specific structure and the technical effects of the present invention will be clearly and completely described in conjunction with the embodiments and the accompanying drawings to fully understand the objects, the schemes and the effects of the present invention. It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict.
Referring to fig. 1, the multi-scene remote rapid interface interaction method according to the present invention includes the following steps: S100, establishing interfaces and channels related to interface images between the local device and the remote host, and providing interactive scenes for a plurality of remote interfaces; S200, selecting a corresponding interactive scene according to the current operation, adjusting the partition mode of block objects, identifying block objects in the dynamic area at the remote host through a comparison algorithm, extracting and recording the position information of each block object and its primitives, and then acquiring the IDs of the elements corresponding to the block object and its primitives in the image library/buffer; S300, following an input instruction of the local device, calculating the position and scaling transformation parameters of the block objects or primitives affected by the input instruction in the interface image; S400, using the image library and buffer of the local device, locally transforming the primitives of the dynamic area based on the current interface image and the transformation parameters of primitives affected by the input instruction, and refreshing the interface image; S500, the local device receives the image, primitive position or image-library ID data sent by the remote host, and finely adjusts the interface images of the fixed area and the dynamic area to synchronize the interface images of the local device and the remote host.
The remote host is a remote controlled host in the internet or a local area network, such as a personal computer, a distributed server, a cloud computer, a network controlled terminal, and the like. The remote host may also be a virtualized host that is simultaneously controlled by multiple users logging in. The local device is a master local machine operated by a user, such as a personal computer, a mobile communication terminal, a web master, and the like.
The fixed area refers to an image portion of the remote operation interface that is substantially unchanged at a given time, such as a toolbar image, a taskbar image, and the like showing an application in the operation interface. For example, referring to fig. 6 to 9, the interface portion formed by the images and icons of the upper taskbar and the left taskbar in the application interface is a fixed area.
The dynamic area refers to an image portion that changes frequently in the remote operation interface or changes by user operation, and is, for example, a user browsing area, an editing area, a web page dynamic map, or the like. For example, referring to fig. 6 to 9, in the application program interface, a region in which a user often operates translation, click browsing, and drawing operations is generally defined as a dynamic region.
In some embodiments, the fixed area may also become the dynamic area, for example, when the user drags the taskbar, the taskbar originally belonging to the fixed area may change into the dynamic area; the dynamic area may also be converted to a fixed area, such as in the dynamic area of text browsing of fig. 5, if the user scrolls to read text for a long time, and the background picture of the text does not change or need to be refreshed within a given time, the background of the original dynamic area may be converted to a fixed area.
The blocking object refers to a partial image obtained by segmenting the interface image according to a given dividing mode, and the partial image may include one or more primitives. The given division mode can be transverse cutting, column cutting, character separation, vector diagram separation and the like for the interface image according to the actual scene selected by the user. The primitive refers to a basic image or an icon constituting an interface image, such as an application icon, a file icon, an insertion graph in a webpage, and the like. Further, picture elements (or simply elements) refer to primitives stored in image libraries and buffers. The elements in both the image library and the buffer are configured with IDs.
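The terms defined above (block object, primitive, element with an ID) can be sketched as minimal data structures; the field names and layout are illustrative assumptions, not taken from the patent:

```python
from dataclasses import dataclass, field

@dataclass
class Element:
    """An element: a primitive stored in the image library or buffer,
    keyed by its configured ID."""
    eid: str
    pixels: bytes

@dataclass
class BlockObject:
    """A block object: a partial interface image obtained by segmenting the
    interface in a given partition mode; it holds one or more primitives,
    each recorded as (element_id, offset_x, offset_y) within the block."""
    x: int
    y: int
    w: int
    h: int
    primitives: list = field(default_factory=list)
```

With this shape, synchronizing a block between hosts only requires its geometry plus the element IDs, not the pixel data of primitives already known to both sides.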
Referring to fig. 2, the method according to the present invention further comprises the steps of:
s101, acquiring an interface image of a remote host;
s102, defining a dividing mode (such as transverse cutting, column cutting, character separation, vector diagram separation, icon division, image separation and the like) of a block object according to a remote interaction scene (such as an office scene, a character reading scene, a webpage browsing scene, a drawing scene, video or animation playing and the like) selected by a user;
s103, comparing and associating the block object with the existing elements (through a Hash algorithm) in the image library and the buffer area, and if the element is not found in the image library and the buffer area, storing the element as a new element in the image library or temporarily storing the element in the buffer area;
s104, acquiring an event of an input instruction and an action parameter thereof, and judging a primitive associated with the instruction;
s105, converting the action parameters into position/scaling transformation parameters of corresponding primitives, such as pixel position coordinates and scaling offset matrixes of the primitives in subsequent action frames;
s106, in the time period of the instruction event, locally executing the displacement of the graphic element and/or increasing the graphic element from the buffer area by using the conversion parameter;
s107, transmitting the command to a remote host, remotely executing the command event, refreshing an interface, and determining the actual position of the corresponding primitive;
s108, judging whether the local and remote primitives are obviously different (for example, whether the position offset exceeds a threshold value, the primitives are inconsistent, the primitives are lost and the like), if so, executing the next step, otherwise, executing the step S110;
s109, buffering the missing graphic elements, and locally correcting the positions of the graphic elements or adding the graphic elements;
s110, judging whether the current remote control network transmission is idle, if so, executing the next step, otherwise, ending the process;
and S111, synchronizing the image library and/or the buffer area of the remote host and the local equipment, and then ending the process.
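Steps S104 to S107 above amount to a local-first update loop: the predicted transform is applied immediately on the local device, and only the compact instruction is forwarded for the authoritative remote refresh. A minimal sketch, with illustrative structures:

```python
def local_first_step(transform: dict, primitive: dict, send) -> dict:
    """One S104-S107 iteration sketch: apply the predicted position/scale
    transform to the primitive locally, in the same frame as the input
    event, then forward the instruction so the remote host can refresh
    and later report the authoritative position (names are illustrative)."""
    primitive['x'] += transform['dx']
    primitive['y'] += transform['dy']
    primitive['scale'] *= transform['scale']
    send(transform)  # byte-level instruction data, not a screenshot
    return primitive
```

The remote host's answer then feeds the S108/S109 comparison and correction, so local prediction errors are bounded rather than accumulating.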
Specifically, in step S101, the interface image may be acquired by screenshot. Preferably, the interface image may be obtained through internal data calls of applications and the browser. For example, a primitive may be obtained by retrieving the UI (graphical interface) data of the application currently displayed on the interface, or by retrieving image and text data in the browser's web-page cache. One or more regions may be demarcated from the screenshot of the interface image by a grid or by primitive outlines, and the change frequency of each region over a given time is then determined: regions that remain essentially unchanged are identified as fixed areas, while regions operated on by the user, or with frequent image changes, are identified as dynamic areas. Preferably, areas such as toolbars and background images can be identified directly as fixed areas from the UI data of the currently displayed application, and the interface portion operated by the user can be treated as a dynamic area.
Specifically, in step S102, when the user selects different interactive scenes, the interface (especially the dynamic area) may be divided differently. For example, when an office scene is selected, the dynamic area of the interface is defined as an office editing window, and a horizontal-cut or text/background-separation partition mode is selected accordingly; when a reading scene is selected, a pure-text partition mode can be adopted; when browsing a web page, a horizontal-cut partition mode can be selected if the page scrolls up and down, and a column-cut partition mode if it scrolls left and right; when a drawing scene is selected, a partition mode separating the vector diagram from the background can be selected; and when a video or animation playing scene is selected, the playing window can be treated as a single whole block object, distinguished from other objects or primitives. Preferably, in horizontal or column cutting, the row and column spacing is matched to the size of the graphical elements (e.g. the line height of text, or the height of a web page's grid segments). Partitioning block objects by scene in this way is more efficient and faster than the conventional scheme of dividing the full page into small grids. In addition, the scene can be switched automatically according to the type of application the user is currently running: for example, when office software is running, the method switches automatically to the office scene, and when a browser is running, to the web-browsing scene.
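The scene-to-partition-mode selection described above can be sketched as a lookup; the scene names, mode labels and application mapping are illustrative assumptions that follow the correspondences in the text:

```python
def partition_mode(scene: str, scroll_direction: str = 'vertical') -> str:
    """Pick the block-partition mode for the selected interactive scene
    (sketch of step S102; all labels are illustrative)."""
    if scene == 'web':
        # Cut along the scroll axis: horizontal cuts for vertical scrolling,
        # column cuts for horizontal scrolling.
        return 'horizontal_cut' if scroll_direction == 'vertical' else 'column_cut'
    modes = {
        'office': 'horizontal_cut',          # or text/background separation
        'reading': 'text_only',
        'drawing': 'vector_background_separation',
        'video': 'whole_window_block',
    }
    return modes.get(scene, 'grid')          # fallback: conventional full-page grid

# Automatic scene switching by running application type (also illustrative).
APP_SCENE = {'office_suite': 'office', 'browser': 'web', 'player': 'video'}
```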
Specifically, in step S103, the process may further include: first searching a common-image sub-library, built within the image library according to image use frequency; if the image is not found in the common sub-library, the other images in the image library are searched. In this embodiment, the image library preferably purges, periodically, images whose use frequency falls below a preset threshold. In a preferred embodiment, whether an element corresponding to a primitive of the interface exists in the image library and buffer can be checked by hash-function verification.
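The two-tier library with frequency-based promotion and eviction can be sketched as follows; the promotion and eviction thresholds are illustrative assumptions:

```python
class ImageLibrary:
    """Sketch of the S103 refinement: a 'common' sub-library of frequently
    used elements is searched first, hot elements are promoted into it,
    and rarely used elements are periodically evicted."""
    def __init__(self, promote_after: int = 3):
        self.common, self.rest, self.uses = {}, {}, {}
        self.promote_after = promote_after

    def put(self, eid: str, pixels: bytes) -> None:
        self.rest[eid] = pixels

    def get(self, eid: str):
        hit = self.common.get(eid)           # common sub-library first
        if hit is None:
            hit = self.rest.get(eid)
        if hit is not None:
            self.uses[eid] = self.uses.get(eid, 0) + 1
            if self.uses[eid] >= self.promote_after and eid in self.rest:
                self.common[eid] = self.rest.pop(eid)  # promote hot element
        return hit

    def evict(self, min_uses: int = 1) -> None:
        """Periodically drop elements whose use frequency is below threshold."""
        self.rest = {k: v for k, v in self.rest.items()
                     if self.uses.get(k, 0) >= min_uses}
```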
Specifically, in step S104, the events of the input device (e.g., mouse, keyboard, touch screen) include key clicks, zooming, dragging, page scrolling and the like, as well as events generated indirectly by input-device operations, such as application focus switching and window opening/closing. The action parameters corresponding to these events include, for example, the coordinates of a click position, a zoom-ratio parameter, the coordinate offset of a drag point, the number of scrolled lines, the focused application number, and application state parameters. In addition, whether a primitive is associated with an instruction may be determined by: 1. the instruction acts directly on the primitive; 2. the primitive is located in a dynamic area and the instruction causes a change in that dynamic area; 3. the instruction directly or indirectly affects the activation or deactivation of the primitive.
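The three association rules can be written as a single predicate; the field names (`target`, `changes_dynamic`, `toggles`) are illustrative assumptions introduced only for this sketch:

```python
def is_associated(instruction: dict, primitive: dict) -> bool:
    """Sketch of the S104 association test using the three rules above."""
    return (
        instruction.get('target') == primitive['id']         # 1. direct action
        or (primitive['region'] == 'dynamic'
            and instruction.get('changes_dynamic', False))   # 2. dynamic-area change
        or primitive['id'] in instruction.get('toggles', ()) # 3. (de)activation effect
    )
```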
Specifically, in step S106, indexing primitives and/or adding primitives from the buffer can be performed locally first, without waiting for the remote host. The displacement of the primitive is executed in synchrony with the timing of the instruction event, maintaining high responsiveness.
Fig. 3 shows a flow chart of the cooperation between the local device and the remote host according to the method of the invention.
In one embodiment, the interface interaction flow based on user input running at the local device is as follows:
s301, establishing control of local equipment on a remote host;
s302, receiving the action of an input device of the local device, converting the action into instruction data, and then matching the currently selected interactive scene (such as an office scene, a character reading scene, a webpage browsing scene, a drawing scene and the like);
s303, sending the instruction data to a remote host in real time, and synthesizing a local interface by using the transformation parameters of the corresponding graphic elements associated with the instruction and a local image library and a buffer area;
s304, receiving actual data of the remote host, updating and correcting the position of the local block object or the primitive thereof, or supplementing the primitive on an interface, and updating application program data, system operation data and the like of local equipment;
s3041, synchronizing the cache data of the block objects or the primitives thereof with the remote host;
s305, if the remote control is not finished, returning to the step S302, otherwise, executing the following step;
s306, cleaning the cache data of the local device, and synchronizing the image libraries of the local device and the remote host.
Accordingly, the interface interaction flow based on user input of the remote host is as follows:
s211, activating system operation control of the remote host;
s212, monitoring a remote operation port of the terminal input equipment, and recognizing the selected interactive scene;
s213, running and updating corresponding application programs and functions according to the received user instructions and terminal parameters, and generating corresponding interfaces;
s214, synchronously updating application program data of the local equipment and the remote host, and an interface image comprising a fixed area and a dynamic area, and finely adjusting or correcting an interface of the local equipment;
s2141, carrying out cache data synchronization of the block objects or the primitives thereof with the local host;
s215, if the remote control is not finished, returning to the step S212, otherwise, executing the following step;
s216, cleaning the cache data of the local device, and synchronizing the image libraries of the local device and the remote host.
In this embodiment, interface updating and interaction are performed by transferring the instructions of an input device (e.g., mouse, keyboard, touch screen) between the local device and the remote host. For example, when the user moves the mouse on the local device and clicks an icon, the HID data produced by the mouse hardware is first converted by the driver into instruction data recognized by the operating system, and this data is sent to the operating-system layers of both the local device and the remote host. The local device's interface then immediately reflects the mouse-cursor movement and the click-feedback animation; likewise, the operating-system layer of the remote host updates its own interface and mouse track according to the instruction data, and determines the icon at the clicked coordinates and the application function corresponding to that icon. Because only a small amount of byte-level data needs to be transmitted between the local device and the remote host, and each side updates its interface using its own local resources, the interface interaction method of this embodiment reduces image transmission during remote control while providing fast, real-time user interaction.
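The byte-level claim can be made concrete with a hedged sketch of an instruction packet; the 9-byte layout and event codes below are assumptions for illustration, not a wire format defined by the patent:

```python
import struct

# Event codes (illustrative): the point is that one instruction fits in a
# handful of bytes, versus transmitting a full interface screenshot.
EV_CLICK, EV_DRAG, EV_SCROLL, EV_ZOOM = 1, 2, 3, 4

def encode_instruction(event: int, x: int, y: int, param: int) -> bytes:
    """Pack one input instruction as 9 bytes: a 1-byte event code, two
    signed 16-bit coordinates, and one signed 32-bit action parameter."""
    return struct.pack('>Bhhi', event, x, y, param)

def decode_instruction(data: bytes) -> tuple:
    """Unpack the 9-byte packet back into (event, x, y, param)."""
    return struct.unpack('>Bhhi', data)
```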
The method according to the present invention may be implemented by a program or a process. The program or process can run in the operating system of the local device or the operating system of the remote host, and can also run in the third-party management device, and is used for monitoring and controlling the work of each interface interaction unit of the local device and the remote host.
In the embodiment shown in FIG. 4, the interface composition unit may be configured to perform primitive composition in the dynamic area; the instruction processing unit is configured to perform instruction transmission and processing in the flow shown in fig. 3; the interface decomposition unit may be configured to perform primitive recognition in the flow shown in fig. 2. The scene optimization unit may be configured to set the partition mode of block objects according to the selected interactive scene and to optimize network transmission between the local device and the remote host: for example, transmission bandwidth may be reduced appropriately in a text-reading scene, while streaming real-time performance may be raised in a video-playing scene. The primitive position comparison unit may be configured to determine whether the block object or its primitives have associated elements in the image library or buffer, and if not, to add them to the image library or buffer as new elements. The position determination unit is configured to calculate the position and size data of the block object or its primitives. The position tracking unit is configured to track the translated/scaled position and size of the corresponding primitive according to the action parameters of the input instruction, providing the position and scaling transformation parameters to the interface composition unit. Continuing with FIG. 4, those skilled in the art should understand that the remote host may produce a virtual interface internally without showing it on a physical display; using virtualization technology, for example, a remote host may present virtual interfaces for multiple operating systems.
FIG. 5 is a schematic diagram illustrating interactive scene selection and optimization in one embodiment according to the invention. As shown, an interactive scene selection bar may be provided in the interface of the local device, allowing the user to switch between scenes by clicking; alternatively, the optimal scene may be selected automatically according to the type of the application program currently operated by the user. Different selected scenes may trigger different partitioning modes, cache optimization modes, transmission optimization modes, and so on. In the embodiment shown in FIG. 5, the user selects an office interactive scene. In this case, before the user opens any of the desktop folder icons A-E, the remote host may pre-cache the icons, layout, and positions of each folder directory within folders A-E into the buffer. Because the icons along these folder paths are all common elements in the image library, the cached data exchanged between the remote host and the local device need only contain the element ID of each icon (or primitive) and the position of each folder icon relative to its folder window. When the user then opens, for example, folder icon E, the local device can quickly generate the cached sub-interface of the internal directory of folder E locally, without the remote host transmitting an interface screenshot. The user operation can thus be rendered quickly, in real time, and accurately.
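The pre-caching scheme above can be sketched in a few lines: the remote host describes a folder's sub-interface as (element ID, position) pairs, and the local device composes the sub-interface from its own image library. The function names and data shapes are illustrative assumptions, not part of the patent.

```python
# Remote side: describe a folder's sub-interface as compact
# (element_id, position) pairs instead of a screenshot.
def describe_folder(folder_contents):
    """folder_contents: list of (icon_element_id, (x, y)) tuples."""
    return [{"id": eid, "pos": pos} for eid, pos in folder_contents]

# Local side: compose the sub-interface from the local image library,
# so no pixel data crosses the network for known icons.
def compose_subinterface(description, image_library):
    surface = []
    for item in description:
        primitive = image_library[item["id"]]  # icon fetched locally
        surface.append((primitive, item["pos"]))
    return surface
```

The payload per icon is an ID plus two coordinates, which is why opening a pre-cached folder needs no screenshot transfer.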
As shown in FIG. 10, in some embodiments a technician debugs an application on a remote host via a local device and modifies the program code during the debugging process. While editing code, the technician selects "edit" in the scene selection box. The remote host then directly reads the characters or text codes input by the technician and sends them to the local device, which displays the input on the local interface as soon as it is received. Because only textual or character-code information needs to be transmitted, the transmission payload is minimal, so each character or word the technician types can be displayed in real time.
As shown in FIG. 11, in some embodiments a user of the local device browses web pages or Word documents on the remote host and selects "browse" in the scene selection box. During the interface interaction, the remote host runs an image capture program, shown in dashed lines in FIG. 7, which segments the image of the viewed area 81 into a plurality of identifiable objects according to the line height of the text in the viewed area, and stores the objects in the image library. Further, during operation, the system preferentially identifies the transversely formed objects in the interface image through a neural network and calculates their coordinates. When the user drags the right scroll-bar control 82, the remote host calculates the coordinates and offsets of the objects formed by transverse cutting in the browsing area and sends all object information, coordinate data, and offset data in the current interface image to the local device. Upon receiving these data, the local device extracts the elements corresponding to the current objects from the local image library and synthesizes the interface image of the browsing area after the user's drag.
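The line-height-based segmentation and the scroll-offset update described above can be sketched as follows. This is a simplified model under stated assumptions: a uniform line height and bands expressed as (y_start, y_end) pairs; real text layout would need per-line measurement.

```python
def split_by_line_height(image_height, line_height, top=0):
    """Split a browsing-area image into horizontal strips, one per
    text line, returned as (y_start, y_end) bands."""
    bands = []
    y = top
    while y < image_height:
        bands.append((y, min(y + line_height, image_height)))
        y += line_height
    return bands

def scroll_objects(bands, dy):
    """Apply a vertical scroll offset dy to every band; bands that
    move off-screen get negative coordinates and can be culled."""
    return [(y0 - dy, y1 - dy) for (y0, y1) in bands]
```

After a drag, only the offset dy and the band identities need to be sent; the local device repositions already-cached strips instead of receiving a fresh screenshot.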
As shown in FIG. 12, in some embodiments a user of the local device controls the remote host to run graphics drawing software, selecting "draw" in the scene selection box. As shown, when the user draws a geometric shape in the drawing software, the host system vectorizes the shape, stores the vectorized geometry in the buffer and as an object in the image library, and records its coordinates. After the local device synchronizes the buffer and image library data, it reads the vector drawing from the buffer to restore the shape shown in the host interface, and finally extracts the corresponding image-library elements to synthesize the interface image of the remote host.
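A minimal sketch of the vectorize-then-restore round trip for one shape follows. The JSON encoding and the field names (`type`, `x`, `y`, `w`, `h`, `stroke`) are illustrative assumptions; the patent does not prescribe a vector format.

```python
import json

def vectorize_rectangle(x, y, w, h, stroke="black"):
    """Encode a drawn rectangle as a compact vector object rather
    than raster pixels, suitable for the buffer/image library."""
    return json.dumps({"type": "rect", "x": x, "y": y,
                       "w": w, "h": h, "stroke": stroke})

def restore(vector_json):
    """Local side: decode the vector object so the shape can be
    re-rendered at full fidelity at any scale."""
    return json.loads(vector_json)
```

A vector record like this is a few dozen bytes regardless of the shape's on-screen size, whereas the equivalent raster region grows with its pixel area.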
FIGS. 6-9 are schematic diagrams illustrating application scenarios according to various embodiments of the present invention. In each figure, the left interface is the local interface and the right interface is the virtual interface of the remote host.
Referring to FIG. 6, when the user selects the browsing interactive scene, the primitives containing text in the dynamic region may be converted in the remote host into text data for direct transmission to the local device, while a horizontal object-splitting manner is adopted for non-text web page pictures. For example, when the user operates the local device to browse a web page and performs a mouse page-turning operation, the local device directly reads the text data or the horizontally split images cached in the buffer and immediately updates the text primitives in the dynamic area of its interface. Meanwhile, the remote host receives the mouse page-turning instruction and processes the page turn in its own browsing application, so that the interfaces of the local device and the remote host remain synchronized without transmitting a picture of the entire browsing interface. In addition, as shown in FIG. 7, a column-wise object-splitting mode can be adopted, which makes interface transitions smoother and reduces stuttering when adapted to transverse browsing operations.
Referring to FIG. 8, the user drags a folder with the mouse while the local device operates the browsing resource manager. In this embodiment, a coordinate system may be established for the interface screenshot, with the lower-left corner of the image or the center of the image as the origin, in which the coordinates (x, y) of the identified mouse object are calculated. The coordinates of the mouse operated by the user may thus be tracked in each motion frame: for example, if the coordinates of the mouse object in the previous frame's interface image are (x2, y2) and its coordinates in the currently captured interface image are (x1, y1), the offset of the corresponding coordinates is D(dx, dy) = (x1 - x2, y1 - y2). The primitive of the dragged folder is then translated according to the offset D, so that the dynamic effect of dragging the folder can be generated locally first. In addition, suppose the user creates a new folder while browsing the explorer. If the new folder has a custom icon not yet stored in the image library or buffer, the icon is transmitted separately, synchronized between the local device and the remote host, and then stored in the buffer; the icon may also be converted into an element maintained in the image library. The next time the user browses the path of that folder, the corresponding primitives are simply exported from the image library or buffer of the local device and superimposed on the existing interface by mapping.
Referring to FIG. 9, a user operating the mouse can switch among the windows of a plurality of applications to bring the desired window into operation. For example, it may be determined from the click position of the mouse pointer whether its hotspot falls within a specific window, in which case that window is raised to the uppermost layer of the interface for display. Because the windows of multiple applications are cached, windows of applications already opened or hidden in the background can be switched quickly and locally. Preferably, the interactive scene may also be switched automatically according to the application switched to; for example, when switching from a drawing application window to a text application window, the drawing scene may be switched to the text scene (as shown in FIG. 9).
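The hotspot hit-test and raise-to-front behavior can be sketched as below. The window representation (a dict with a `rect` of (x, y, w, h) and a z-order value) is an assumption made for illustration.

```python
def topmost_window_at(point, windows):
    """Return the highest-z window whose rect contains the pointer
    hotspot, or None if the click landed outside every window."""
    px, py = point
    hits = [w for w in windows
            if w["rect"][0] <= px < w["rect"][0] + w["rect"][2]
            and w["rect"][1] <= py < w["rect"][1] + w["rect"][3]]
    return max(hits, key=lambda w: w["z"], default=None)

def bring_to_front(window, windows):
    """Raise the clicked window above all others by giving it the
    largest z value; cached windows make this a local operation."""
    window["z"] = max(w["z"] for w in windows) + 1
    return window
```

Because each window's image is already cached, the switch is purely a z-order change on the local device; no interface pixels need to be fetched from the remote host.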
It should be recognized that the above-described embodiments may be implemented or realized in computer hardware, a combination of hardware and software, or by computer instructions stored in a non-transitory computer readable memory. The methods may be implemented in a computer program using standard programming techniques, including a non-transitory computer-readable storage medium configured with the computer program, where the storage medium so configured causes a computer to operate in a specific and predefined manner, according to the methods and figures described in the detailed description. Each program may be implemented in a high level procedural or object oriented programming language to communicate with a computer system. However, the program(s) can be implemented in assembly or machine language, if desired. In any case, the language may be a compiled or interpreted language. Furthermore, the program can be run on a programmed application specific integrated circuit for this purpose.
Further, the above-described methods may be implemented in any type of computing platform operatively connected to a suitable connection, including but not limited to a personal computer, mini computer, mainframe, workstation, networked or distributed computing environment, separate or integrated computer platform, or in communication with a charged particle tool or other imaging device, and the like. Aspects of the invention may be embodied in machine-readable code stored on a non-transitory storage medium or device, whether removable or integrated into a computing platform, such as a hard disk, optically read and/or write storage medium, RAM, ROM, or the like, such that it may be read by a programmable computer, which when read by the storage medium or device, is operative to configure and operate the computer to perform the procedures described herein. Further, the machine-readable code, or portions thereof, may be transmitted over a wired or wireless network. The invention described herein includes these and other different types of non-transitory computer-readable storage media when such media include instructions or programs that implement the steps described above in conjunction with a microprocessor or other data processor. The invention also includes the computer itself when programmed according to the methods and techniques described herein.
A computer program can be applied to input data to perform the functions described herein to transform the input data to generate output data that is stored to non-volatile memory. The output information may also be applied to one or more output devices, such as a display. In a preferred embodiment of the invention, the transformed data represents physical and tangible objects, including particular visual depictions of physical and tangible objects produced on a display.
The above-described embodiments are merely illustrative of implementations set forth for a clear understanding of the principles of the invention. Many changes, combinations, modifications, or equivalents may be substituted for elements thereof without departing from the scope of the invention. Therefore, it is intended that the invention not be limited to the particular embodiment disclosed as the best mode contemplated for carrying out this invention, but that the invention will include all embodiments falling within the scope of the appended claims.

Claims (13)

1. A multi-scene remote quick interface interaction method is characterized by comprising the following steps:
s100, establishing interfaces and channels related to interface images between local equipment and a remote host, and providing interactive scenes of a plurality of remote interfaces;
s200, selecting a corresponding interactive scene according to the current operation, adjusting the partition mode of the block object, identifying the block object in the dynamic area at the remote host through a comparison algorithm, extracting and recording the position information of the block object and the self graphic element thereof, and then acquiring the ID of the corresponding element of the block object and the self graphic element thereof in an image library/buffer area;
s300, following an input instruction of local equipment, calculating the position of a block object or a graphic primitive influenced by the input instruction in an interface image and a scaling transformation parameter;
s400, according to an image library and a buffer area of local equipment, based on a current interface image and a primitive transformation parameter influenced by an input instruction, locally transforming primitives of a dynamic area, and refreshing the interface image;
s500, the local equipment receives the image, the primitive position or the ID data of the image library sent by the remote host, and finely adjusts the interface images of the fixed area and the dynamic area to synchronize the interface images of the local equipment and the remote host;
wherein the step S200 includes: selecting an interactive scene by a user, automatically switching scenes according to the type of an application program, and matching the currently selected interactive scene;
wherein the step S500 includes:
transmitting the command to a remote host, executing a command event remotely, refreshing an interface, and determining the actual position of a corresponding primitive;
judging whether the difference values of the positions and the zooming values of the block objects or the primitives of the local equipment and the remote host exceed a preset threshold value or not;
if the threshold is exceeded, the primitive is locally modified, including: and correcting the primitive offset and buffering the primitives to a buffer zone of the local device to replace missing primitives and primitives inconsistent with the remote host in the interface image of the local device.
2. The method of claim 1, wherein the blocking object is a partial image obtained by dividing the interface image according to a given division manner, and the partial image may contain one or more primitives.
3. The method according to claim 1, wherein the step S200 comprises:
and dividing the block objects in a dividing mode corresponding to the interactive scene according to the types of the application programs clicked by the user or currently running, wherein the dividing mode comprises a horizontal dynamic area, a column dynamic area, a character separation mode, a vector diagram separation mode or a playing image separation mode.
4. The method according to claim 1 or 3, wherein the step S200 comprises: when the scene type received by the remote host is selected as the editing type, the remote host directly identifies the text or characters and sends them to the local device.
5. The method according to claim 1 or 3, wherein the step S200 comprises: when the scene type received by the remote host is selected as a browsing type, the remote host transversely or vertically cuts the browsing area image in the interface image based on the line height of the text in the browsing area to form a plurality of objects which can be identified and stored in the image library.
6. The method according to claim 1 or 3, wherein the step S200 further comprises: and when the scene type received by the remote host is selected as the graphic drawing type, vectorizing the drawn graphic, storing the vectorized graphic in a buffer area, and transmitting the vectorized graphic and other objects in the interface image to the local equipment.
7. The method according to claim 1, wherein the step S300 comprises:
acquiring an event and an action parameter of an input instruction, and judging a block object or a primitive associated with the event of the input instruction;
the motion parameters are converted into corresponding transformation parameters of the positions and scaling of the block objects or primitives, and the execution time of the motion is configured to be consistent with the transformation time of the block objects or primitives.
8. The method of claim 7, wherein associating a blocking object or primitive with an instruction comprises: the instruction directly acts on the block object or the graphic element; the blocking object or primitive is located in a dynamic region and the instruction causes a change in the dynamic region; the instructions directly or indirectly affect the activation or deactivation of the blocking objects or primitives.
9. The method according to claim 1, wherein the step S400 comprises:
and in the time period of executing the instruction event, acquiring an additional primitive associated with the dynamic area of the current image interface from a local buffer area according to the transformation parameters of the blocking object or the primitive, and adding the additional primitive to perform interface transformation of the dynamic area.
10. The method according to claim 1, wherein the step S400 further comprises: performing the relocation of primitives locally, independently of the remote host, and/or adding primitives from the buffer.
11. A multi-scenario remote quick interface interaction apparatus for implementing the multi-scenario remote quick interface interaction method according to any one of claims 1-10, comprising:
the first module is used for establishing interfaces and channels related to interface images between the local equipment and the remote host and providing interactive scenes of a plurality of remote interfaces;
the second module is used for selecting a corresponding interactive scene according to the current operation, adjusting the partition mode of the block object, identifying the block object in the dynamic area at the remote host through a comparison algorithm, extracting and recording the position information of the block object and the self graphic element thereof, and then acquiring the ID of the corresponding element of the block object and the self graphic element thereof in the image library/buffer area;
the third module is used for following the input instruction of the local equipment and calculating the position of a block object or a graphic primitive influenced by the input instruction in the interface image and the scaling transformation parameter;
the fourth module is used for locally transforming the primitives of the dynamic area based on the current interface image and the transformation parameters of the primitives influenced by the input instruction according to an image library and a buffer area of the local device, and refreshing the interface image;
and the fifth module is used for receiving the image, the primitive position or the ID data of the image library sent by the remote host by the local equipment, finely adjusting the interface images of the fixed area and the dynamic area and synchronizing the interface images of the local equipment and the remote host.
12. The apparatus of claim 11, further comprising:
a primitive positioning comparison unit: the unit is configured to determine whether the block object or the graphic primitive thereof has an associated element in the image library or the buffer area, and if the block object or the graphic primitive thereof has no associated element, the block object or the graphic primitive thereof is newly added into the image library or the buffer area as a new element;
a position determination unit: the unit is configured to calculate position and size data of a blocking object or a primitive thereof;
a position tracking unit: the unit is configured to track the position, scaling, and size of the corresponding primitive according to the action parameters of the input instruction, for providing the position/scaling transformation parameters to the interface composition unit.
13. A computer-readable storage medium having stored thereon computer instructions, characterized in that the instructions, when executed by a processor, carry out the steps of the method according to any one of claims 1 to 10.
CN201811118387.5A 2018-09-21 2018-09-21 Multi-scene remote rapid interface interaction method and device Active CN109388457B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811118387.5A CN109388457B (en) 2018-09-21 2018-09-21 Multi-scene remote rapid interface interaction method and device


Publications (2)

Publication Number Publication Date
CN109388457A CN109388457A (en) 2019-02-26
CN109388457B true CN109388457B (en) 2022-02-25

Family

ID=65418134

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811118387.5A Active CN109388457B (en) 2018-09-21 2018-09-21 Multi-scene remote rapid interface interaction method and device

Country Status (1)

Country Link
CN (1) CN109388457B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113360150B (en) * 2021-05-25 2024-04-26 广东海启星海洋科技有限公司 Multi-module data linkage display method and device

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101447998A (en) * 2008-12-25 2009-06-03 广东威创视讯科技股份有限公司 Desktop sharing method and system
CN103069453A (en) * 2010-07-05 2013-04-24 苹果公司 Operating a device to capture high dynamic range images
CN108304239A (en) * 2018-01-26 2018-07-20 杨立群 For remote-operated quick interface exchange method and device

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2144421A1 (en) * 2008-07-08 2010-01-13 Gemplus Method for managing an access from a remote device to data accessible from a local device and corresponding system
CN103677539B (en) * 2012-09-18 2017-11-17 阿里巴巴集团控股有限公司 Interface method of adjustment and device
FR3007860A1 (en) * 2013-06-27 2015-01-02 France Telecom METHOD FOR INTERACTING BETWEEN A DIGITAL OBJECT, REPRESENTATIVE OF AT LEAST ONE REAL OR VIRTUAL OBJECT LOCATED IN A REMOTE GEOGRAPHICAL PERIMETER, AND A LOCAL SCANNING DEVICE
CN107153498B (en) * 2016-03-30 2021-01-08 斑马智行网络(香港)有限公司 Page processing method and device and intelligent terminal

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant