CN118295897A - Method and device for testing application interface, computer equipment and storage medium - Google Patents

Method and device for testing application interface, computer equipment and storage medium

Info

Publication number
CN118295897A
Authority
CN
China
Prior art keywords
interface
screen
operated
container
application
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310012600.9A
Other languages
Chinese (zh)
Inventor
曹丰斌
梁哲
陈曦
杨传华
肖旭章
刘猛
王振宇
董鹏
李鑫
李晴
刘占勇
胡中
周雪瑞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN202310012600.9A priority Critical patent/CN118295897A/en
Publication of CN118295897A publication Critical patent/CN118295897A/en
Pending legal-status Critical Current

Landscapes

  • User Interface Of Digital Computer (AREA)

Abstract

The application relates to a method, an apparatus, a computer device, a storage medium and a computer program product for testing an application interface. The method comprises the following steps: switching the original context of an application interface of an application to be tested to a network context; determining the position in the screen of a view container in the application interface; in response to a trigger operation simulated by a test case in the view container, searching for the position of an element to be operated in the view container based on the network context; determining the position of the element to be operated in the screen based on the position of the view container in the screen, the canvas size of the view container, and the position of the element to be operated in the view container; and passing the position of the element to be operated in the screen to an element operation interface, so that the element operation interface performs the interface operation at the passed-in position. The method can improve interface test efficiency while ensuring the accuracy of interface element operations.

Description

Method and device for testing application interface, computer equipment and storage medium
Technical Field
The present application relates to the field of computer technologies, and in particular, to a method and apparatus for testing an application interface, a computer device, a storage medium, and a computer program product.
Background
With the development of computer and internet technology, the testing of interface operations has been applied in many different business scenarios, and various kinds of automated testing technology have attracted wide attention. A common UI (User Interface) automation test engine can support UI automation tests for multiple platforms at the same time.
However, in current application-interface testing, element lookup is generally performed in the web (web page) context; when an element operation is required, the engine switches to the native (original) context, invokes a mobile-side gesture operation method, and then switches back to the web context for subsequent operations.
Disclosure of Invention
In view of the foregoing, it is desirable to provide a method, an apparatus, a computer device, a computer readable storage medium, and a computer program product for testing an application interface, which can improve the efficiency of interface testing and ensure the operation accuracy of interface elements.
In a first aspect, the present application provides a method for testing an application interface. The method comprises the following steps: switching the original context of an application interface of an application to be tested to a network context; determining the position in the screen of a view container in the application interface; in response to a trigger operation simulated by a test case in the view container, searching for the position of an element to be operated in the view container based on the network context; determining the position of the element to be operated in the screen based on the position of the view container in the screen, the canvas size of the view container, and the position of the element to be operated in the view container; and passing the position of the element to be operated in the screen to an element operation interface, so that the element operation interface performs the interface operation at the passed-in position.
In a second aspect, the application further provides a testing apparatus for an application interface. The apparatus comprises: a switching module, configured to switch the original context of an application interface of an application to be tested to a network context; a determining module, configured to determine the position in the screen of a view container in the application interface; a searching module, configured to search, in response to a trigger operation simulated by a test case in the view container, for the position of an element to be operated in the view container based on the network context; the determining module being further configured to determine the position of the element to be operated in the screen based on the position of the view container in the screen, the canvas size of the view container, and the position of the element to be operated in the view container; and a passing module, configured to pass the position of the element to be operated in the screen to an element operation interface, so that the element operation interface performs the interface operation at the passed-in position.
In a third aspect, the present application also provides a computer device comprising a memory storing a computer program and a processor which, when executing the computer program, performs the steps of the method described in the first aspect.
In a fourth aspect, the present application also provides a computer-readable storage medium on which a computer program is stored which, when executed by a processor, performs the steps of the method described in the first aspect.
In a fifth aspect, the present application also provides a computer program product comprising a computer program which, when executed by a processor, implements the steps of the method described in the first aspect.
According to the method, the apparatus, the computer device, the storage medium and the computer program product for testing an application interface, the original context of the application interface of the application to be tested is switched to the network context, and the position in the screen of a view container in the application interface is determined; in response to a trigger operation simulated by the test case in the view container, the position of the element to be operated in the view container is searched based on the network context, and the position of the element to be operated in the screen is determined based on the position of the view container in the screen, the canvas size of the view container, and the position of the element to be operated in the view container; the position of the element to be operated in the screen is then passed to the element operation interface, so that the element operation interface performs the interface operation at the passed-in position.
Because element lookup relies on the precise searching capability of the network context, and the accurate screen position of the element is computed from the container's position in the screen, the container's canvas size and the element's position within the container, the element operation interface can operate application interface elements accurately without frequent context switching. This effectively improves interface test efficiency while ensuring the accuracy of interface element operations.
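The steps recited above can be sketched end-to-end in Python. Everything below is hypothetical — the FakeDriver class and its method names are invented stand-ins, with the coordinate values from the embodiments hard-coded — but the order of operations follows the five steps of the method:

```python
class FakeDriver:
    """Hypothetical stand-in for a UI automation driver, used only
    to illustrate the order of the five steps."""

    def switch_to_web_context(self):
        self.context = "web"

    def container_rect(self):
        # (left, top, width, height) of the view container in screen pixels
        return (0, 275, 1080, 1610)

    def canvas_size(self):
        # assumed canvas (CSS-pixel) size of the view container
        return (540, 805)

    def find_element_center(self, name):
        # element center in the container's canvas coordinates
        return (329, 116)

    def tap(self, x, y):
        self.last_tap = (x, y)


def run_case(driver, element_name):
    driver.switch_to_web_context()                     # step 1: switch context
    left, top, w, h = driver.container_rect()          # step 2: container position
    ex, ey = driver.find_element_center(element_name)  # step 3: element lookup
    cw, ch = driver.canvas_size()
    sx = left + ex * w / cw                            # step 4: map to screen
    sy = top + ey * h / ch
    driver.tap(sx, sy)                                 # step 5: operate at position
    return sx, sy


d = FakeDriver()
print(run_case(d, "startup board"))  # (658.0, 507.0)
```

With a real automation engine, the FakeDriver methods would be replaced by actual driver calls; only the arithmetic in run_case is specific to the method described here.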
Drawings
FIG. 1 is an application environment diagram of a test method of an application interface in one embodiment;
FIG. 2 is a flow chart of a method for testing an application interface in one embodiment;
FIG. 3 is a schematic diagram of execution logic of a UI test case in one embodiment;
FIG. 4 is a schematic diagram of an application interface of an application under test in one embodiment;
FIG. 5 is a schematic diagram of an application interface for determining location information of an element to be manipulated in a view container in one embodiment;
FIG. 6 is a diagram of a native context view structure in one embodiment;
FIG. 7 is a schematic diagram of a web context view structure in one embodiment;
FIG. 8 is a flow diagram of a step of determining a location of an element to be manipulated in a screen based on the location of the view container in the screen, the canvas size of the view container, and the location of the element to be manipulated in the view container in one embodiment;
FIG. 9 is a diagram of a view structure derived using a web context in one embodiment;
FIG. 10 is a schematic overall flow diagram of a method for implementing accurate operation of a web page in a UI automation scenario based on mobile-side native platform gestures in one embodiment;
FIG. 11 is a block diagram of a testing device of an application interface in one embodiment;
FIG. 12 is an internal structure diagram of a computer device in one embodiment.
Detailed Description
The present application will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present application more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the application.
Cloud technology (Cloud technology) refers to a hosting technology that unifies hardware, software, network and other resources in a wide area network or a local area network to realize the computation, storage, processing and sharing of data.
Cloud technology is a general term for the network, information, integration, management-platform and application technologies applied under the cloud computing business model; these resources can form a resource pool and be used flexibly and on demand. Cloud computing technology will become an important support: background services of technical network systems, such as video websites, picture websites and portal websites, require large amounts of computing and storage resources. With the development of the internet industry, each item may in the future carry its own identification mark, which needs to be transmitted to a background system for logical processing; data of different levels will be processed separately, and all kinds of industry data require strong back-end system support, which can only be realized through cloud computing.
With the research and progress of artificial intelligence technology, artificial intelligence has been researched and applied in many fields, such as smart homes, smart wearable devices, virtual assistants, smart speakers, smart marketing, unmanned driving, autonomous driving, unmanned aerial vehicles, robots, smart medical care, smart customer service, the internet of vehicles and smart transportation. It is believed that with the development of technology, artificial intelligence will be applied in more fields and become increasingly important.
It should be noted that in the following description, the terms "first", "second" and "third" are used merely to distinguish similar objects and do not imply a specific order. It should be understood that, where permitted, "first", "second" and "third" may be interchanged in a specific order or sequence, so that the embodiments of the application described herein can be practiced in an order other than that illustrated or described herein.
The method for testing an application interface provided by the embodiments of the application can be applied to the application environment shown in fig. 1. The terminal 102 communicates with the server 104 via a network. The data storage system may store data that the server 104 needs to process, and may be integrated on the server 104 or located on a cloud or other network server. The server 104 may switch the original context of the application interface of the application to be tested in the terminal 102 to a network context, and determine the position in the screen of a view container in the application interface; in response to a trigger operation simulated by the test case in the view container, the server 104 searches for the position of the element to be operated in the view container based on the network context; further, the server 104 determines the position of the element to be operated in the screen based on the position of the view container in the screen, the canvas size of the view container, and the position of the element to be operated in the view container, and passes this position to the element operation interface so that the element operation interface performs the interface operation at the passed-in position.
The terminal 102 may be a smart phone, tablet computer, notebook computer, desktop computer, smart speaker, smart watch, internet-of-things device or portable wearable device; the internet-of-things device may be a smart speaker, smart television, smart air conditioner or smart in-vehicle device, and the portable wearable device may be a smart watch, smart bracelet, headset, or the like.
The server 104 may be an independent physical server, or a service node in a blockchain system in which a peer-to-peer (P2P) network is formed between service nodes; the P2P protocol is an application-layer protocol that runs on top of the Transmission Control Protocol (TCP).
The server 104 may also be a server cluster formed by a plurality of physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, content delivery networks (Content Delivery Network, CDN), big data and artificial intelligence platforms.
The terminal 102 and the server 104 may be connected by Bluetooth, USB (Universal Serial Bus), a network or another communication connection manner, which is not limited in this application.
In one embodiment, as shown in fig. 2, a method for testing an application interface is provided. The method may be executed by a server or a terminal alone, or by the server and the terminal together; here it is described, by way of example, as applied to the server in fig. 1, and includes the following steps:
step 202, switching the original context of the application interface of the application to be tested into the network context.
The application to be tested refers to an application program to be tested, and may be any of several different types of application programs; for example, it may be an audio-visual application, a transaction application, a game application, or the like. The application to be tested in the present application may be a Web (web page) application developed with HTML5 (Hypertext Markup Language 5, a language for describing and constructing web content) and running on a mobile device; for example, the application to be tested may be an application interface embedded in a WebView (mobile browser control) within a mobile native application program.
The application interface refers to a display interface corresponding to the application to be tested, for example, when the application to be tested is a transaction application, the application interface of the application to be tested may be a display interface for providing transaction information.
The original context refers to page structure information of the application interface of the application to be tested, and may also be called the Native context. In the present application, the original context may be the UI automation page context obtained when an automated test engine tests a mobile native App (application software) through the UI (User Interface); the view structure of the original context is a document in XML format.
The network context refers to another kind of page structure information of the application interface of the application to be tested, and may also be called the Web context. In the present application, the network context may be the UI automation page context obtained when a UI automation test engine tests a page embedded in a WebView mobile browser control within a mobile native App; the view structure of the Web context consists of native HTML (Hyper Text Markup Language) tags.
Specifically, in a UI test case scenario, when executing a certain test case, the server may run the test case through the UI automation test engine and obtain an application interface of an application to be tested in the test case, further, the server may obtain an original context of the application interface of the application to be tested through the UI automation test engine and switch the original context of the application interface of the application to be tested to a network context, so as to obtain the network context of the application interface of the application to be tested.
For example, fig. 3 is a schematic diagram of the execution logic of a UI test case. Taking one UI test case as an example, when the test case shown in fig. 3 is executed, the server may run the test case through a UI automation test engine, for example Appium Server, and obtain the application interface of the application to be tested in the test case shown in fig. 3, namely the OEM security home page. That is, the server may determine the address and window information of the application to be tested through Appium Server, and obtain the OEM security home page of the application to be tested according to the determined address and window information. Further, the server may obtain the Native context of the OEM security home page through the UI automation test engine Appium Server, and switch the Native context of the OEM security home page to the web context, so as to obtain the web context corresponding to the OEM security home page.
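As a concrete illustration of the context switch, an Appium-style driver exposes the available contexts as a list of names such as "NATIVE_APP" and "WEBVIEW_&lt;package&gt;". The sketch below is a minimal, hypothetical helper that merely selects which context name to switch to; the package name is invented for the example.

```python
def pick_webview_context(contexts):
    """Return the first WebView context name from an Appium-style
    context list, or None if the page exposes no web context."""
    for name in contexts:
        if name.upper().startswith("WEBVIEW"):
            return name
    return None

# Typical context list reported for a native app embedding a WebView
# (the package name is hypothetical).
contexts = ["NATIVE_APP", "WEBVIEW_com.example.app"]
target = pick_webview_context(contexts)
print(target)  # WEBVIEW_com.example.app
# With a real Appium driver, one would then call:
#   driver.switch_to.context(target)
```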
Step 204, determining a position of a view container in an application interface in a screen.
The view container refers to a control in the application interface, for example, as shown in fig. 4, is an application interface schematic diagram of the application to be tested. In the application interface shown in fig. 4, the view container may refer to the webView container portion shown in dashed line.
The screen refers to a display screen of the device under test, for example, if the application under test is displayed on the mobile terminal device, the screen refers to a screen of the mobile terminal device, for example, in the application interface shown in fig. 4, the screen refers to a display screen of the mobile terminal device, that is, a portion shown by a bold black line box in fig. 4 is a display screen area of the mobile terminal device.
The position refers to the actual position of the view container in the screen and may also be referred to as absolute position. For example, the view container in the present application may be a rectangular area within a dashed box in the display screen shown in fig. 4, and the position of the view container in the screen may be represented by a coordinate position at a diagonal vertex of the rectangular area.
Specifically, after the server obtains the original context of the application interface of the application to be tested through the UI automation test engine and switches it to the network context, the server may determine the position in the screen of the view container in the application interface, for example through the automated test tool.
For example, fig. 4 shows an application interface of the application to be tested. Taking one UI test case as an example, when the test case shown in fig. 3 is executed, the server may obtain the Native context of the OEM security home page through the UI automation test engine Appium Server and switch it to the web context, so as to obtain the web context corresponding to the OEM security home page. The server may then determine, through the UI automation test engine Appium Server, the position in the screen of the view container in the application interface, that is, the coordinate position in the screen of the webView container in the OEM security home page, obtaining the upper-left vertex A (0,275) and the lower-right vertex B (1080,1885). That is, in the present application, the two vertex coordinates at the diagonal of the rectangular area corresponding to the webView container may be used as the coordinate position of the webView container in the screen.
It will be appreciated that the method of determining the position in the screen of the view container in the application interface in the present application is not limited to coordinates at the diagonal vertices; other customized manners are possible. For example, the coordinates of the center point of the rectangular area corresponding to the webView container, together with the length and width of the rectangular area, may also be used as the coordinate position of the webView container in the screen.
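The two representations just described — diagonal vertices versus center point plus length and width — carry the same information. A small sketch, using the coordinate values from the example above, shows how either form reduces to a common (left, top, width, height) rectangle:

```python
def rect_from_diagonal(top_left, bottom_right):
    """Convert a container's diagonal-vertex representation into
    (left, top, width, height)."""
    (x1, y1), (x2, y2) = top_left, bottom_right
    return (x1, y1, x2 - x1, y2 - y1)

def rect_from_center(center, width, height):
    """Convert the alternative center-point + size representation
    into the same (left, top, width, height) form."""
    cx, cy = center
    return (cx - width / 2, cy - height / 2, width, height)

# The webView container from the example: A (0, 275), B (1080, 1885).
print(rect_from_diagonal((0, 275), (1080, 1885)))  # (0, 275, 1080, 1610)
```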
Step 206, in response to a trigger operation simulated by the test case in the view container, searching for the position of the element to be operated in the view container based on the network context.
The triggering operation refers to an operation triggered by a user in the application interface; a user's triggering operation can trigger a triggering event in the terminal, namely a screen input event (Input event). The triggering event may include a click event, a touch event, a tap event, a swipe event and the like; that is, the user may perform different triggering operations on the terminal device. The triggering operations in the present application include, but are not limited to, a click operation, a swipe operation, a long-press operation and the like. Meanwhile, a triggering operation in the application may also be directed at an individual element contained in the application interface, for example, clicking or sliding a certain element in the application interface.
The simulated triggering operation refers to automatically simulating the behavior operation of the gesture of the user to trigger the corresponding operation in the process of executing the test case, for example, the server can run the test case through an automatic test tool to simulate various triggering operations of the user.
The element to be operated refers to an element corresponding to the simulated trigger operation, for example, the simulated trigger operation in the application is an element operation triggered by an element A, and the element A is the element to be operated.
The position of the element to be operated in the view container refers to position information of the element to be operated in the view container, and the position of the element to be operated in the view container may also be referred to as a relative position of the element to be operated in the view container. The position of the element to be operated in the view container in the present application may refer to the element center coordinate position of the element to be operated in the view container.
Specifically, after the server determines the position in the screen of the view container in the application interface, the test case may, during execution, simulate a trigger operation of the user in the view container. In response to this trigger operation, the server searches, based on the determined network context, for the position in the view container of the element to be operated corresponding to the trigger operation. That is, the server determines the element to be operated in response to the trigger operation simulated by the test case in the view container, searches the determined network context for information about the element to be operated, and then determines the position information of the element to be operated in the view container based on the found information.
For example, fig. 5 is a schematic diagram of an application interface for determining the position information of an element to be operated in a view container. Taking one UI test case as an example, when the test case shown in fig. 3 is executed, the server determines, through the UI automation test engine, the coordinate position in the screen of the webView container in the OEM security home page, obtaining the upper-left vertex A (0,275) and the lower-right vertex B (1080,1885). While the server executes the test case shown in fig. 3, the test case may simulate a trigger operation of the user in the view container, for example an element sliding operation triggered by the user, that is, it simulates the logic "when the user slides, operate the 'startup board' element". In response to this simulated operation, the server determines that the element to be operated is the "startup board" element, searches the determined web context for information about it, and determines its position in the webView container: the element center coordinate C (329,116). That is, the center coordinate in the webView container of the "startup board" element shown in fig. 5 is C (329,116).
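The relative position lookup described above would typically read the element's bounding rectangle in the web context (for example, via the DOM's getBoundingClientRect) and take its center. The sketch below uses a hypothetical bounding rectangle, chosen so that the computed center reproduces the example coordinate C (329,116):

```python
def element_center(rect):
    """Center of an element's bounding rect (left, top, width, height),
    expressed in the view container's canvas coordinate system."""
    left, top, width, height = rect
    return (left + width / 2, top + height / 2)

# Hypothetical bounding rect that yields the example center C (329, 116).
print(element_center((289, 96, 80, 40)))  # (329.0, 116.0)
```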
Step 208, determining the position of the element to be operated in the screen based on the position of the view container in the screen, the canvas size of the view container, and the position of the element to be operated in the view container.
The canvas size of the view container refers to the logical size of the view container as used in programming, for the program to perform layout and positioning of page elements in the web page. In the present application, the canvas size of the view container may include the canvas width and the canvas height of the view container; that is, the layout and positioning of page elements in the web page are performed according to the canvas width and canvas height.
The position of the element to be operated in the screen refers to position information of the element to be operated in the screen, and the position of the element to be operated in the screen may also be referred to as an absolute position of the element to be operated in the screen. The position of the element to be operated in the screen in the present application may refer to the element center coordinate position of the element to be operated in the screen.
Specifically, after the server searches for the position of the element to be operated in the view container corresponding to the trigger operation based on the determined network context in response to the trigger operation of the test case in the view container simulation, the server may acquire the canvas size of the view container, determine the position of the element to be operated in the screen based on the position of the view container in the screen, the canvas size of the view container and the position of the element to be operated in the view container, that is, the server may determine the physical size of the view container according to the position of the view container in the screen, and determine the absolute position of the element to be operated in the screen based on the physical size of the view container, the canvas size of the view container and the position of the element to be operated in the view container.
For example, as shown in fig. 5, an application interface diagram is provided for determining the location information of an element to be operated in a view container. Taking a UI test case as an example, when the test case shown in fig. 3 is executed, the server determines, through the UI automation test engine, the coordinate position of the webView container in the OEM security home page in the screen, obtaining the top left corner vertex A (0,275) and the bottom right corner vertex B (1080,1885). In the process of executing the test case shown in fig. 3, the test case may simulate the element sliding operation triggered by the user in the view container, that is, execute the logic "when sliding on the 'ChiNext index' element"; in response to this simulated element operation in the view container, the server determines that the position of this element in the webView container is the element center coordinate C (329,116), that is, as shown in fig. 5, the center coordinate of the "ChiNext index" element in the webView container is C (329,116). Further, suppose the canvas size of the webView container obtained by the server includes the canvas width value K1=400 and the canvas height value H1=749. The server may determine the physical width value K0=1080 and the physical height value H0=1885 of the webView container based on the position coordinates A (0,275) and B (1080,1885) of the webView container in the screen. Then, based on the element center coordinate C (329,116), the canvas width value K1=400, the canvas height value H1=749, the physical width value K0=1080 and the physical height value H0=1885 of the webView container, the server may determine that the position of the "ChiNext index" element in the screen is P (X, Y); that is, the server determines that the absolute position of this element in the display screen outlined by the bold black line in fig. 5 is P (X, Y).
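The quantities used in this example can be sketched as follows. This is a minimal illustration: all numeric values are taken from the running example in the text, and reading the bottom-right vertex's coordinates directly as the physical width and height values follows the description (with the container's left edge at x = 0, the width coincides with B's abscissa).

```python
# Assumed values from the example above (not measured from a real device).
A = (0, 275)       # top-left vertex of the webView container on the screen
B = (1080, 1885)   # bottom-right vertex of the webView container
C = (329, 116)     # element center coordinate inside the webView canvas

# The description reads B's coordinates as the container's physical size values.
K0, H0 = B         # physical width value K0, physical height value H0
K1, H1 = 400, 749  # canvas width value K1, canvas height value H1

print(K0, H0)      # 1080 1885
```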
Step 210, the position of the element to be operated in the screen is input into the element operation interface, so that the element operation interface executes the interface operation at the input position.
The element operation interface refers to an interface for calling an element operation method; for example, it may be an interface for calling an element operation method of the native platform. The element operation interface in the present application may be a predefined interface, which may be used to call a gesture interface of the mobile terminal to trigger the related UI operations.
The interface operation refers to a UI (User Interface) operation. The interface operation in the present application may refer to a related UI operation triggered in the mobile terminal device, for example, triggering an interface sliding operation on the "ChiNext index" element.
Specifically, after the server determines the position of the element to be operated in the screen based on the position of the view container in the screen, the canvas size of the view container and the position of the element to be operated in the view container, the server may pass the position of the element to be operated in the screen into the element operation interface, so that the element operation interface performs an interface operation at the passed-in position; that is, once the position is passed in, the element operation interface may call the gesture-related interface method of the mobile terminal to trigger the related UI operation at that position.
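As an illustration of this step, the element operation interface can be thought of as a thin wrapper that receives an on-screen position and dispatches a native gesture at it. The sketch below is hypothetical — the class and method names are not from the original — and it records the dispatched coordinates instead of driving a real device:

```python
class ElementOperationInterface:
    """Hypothetical stand-in for the predefined element operation interface."""

    def __init__(self):
        # Gestures that would be forwarded to the native gesture API.
        self.dispatched = []

    def slide_at(self, x, y):
        # In a real implementation this would call the mobile terminal's
        # gesture (swipe) interface method at the passed-in screen position.
        self.dispatched.append(("slide", x, y))
        return self.dispatched[-1]

op = ElementOperationInterface()
op.slide_at(888, 524)   # pass in an illustrative screen position P(X, Y)
print(op.dispatched)    # [('slide', 888, 524)]
```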
For example, as shown in fig. 5, an application interface diagram is provided for determining the location information of an element to be operated in a view container. Taking the UI test case as an example, assume that the server determines the position P (X, Y) of the "ChiNext index" element in the screen based on its element center coordinate C (329,116), the canvas width value K1=400 and canvas height value H1=749 of the webView container, the physical width value K0=1080 and physical height value H0=1885 of the webView container, and the distance value D=275 of the webView container from the top of the screen, all as shown in fig. 5. After the server passes the position P (X, Y) of this element in the screen into the element operation interface, the element operation interface may call the gesture sliding interface method of the mobile terminal to trigger an interface sliding operation at the passed-in position P (X, Y).
In this embodiment, the original context of the application interface of the application to be tested is switched to the network context, and the position of the view container in the application interface in the screen is determined; in response to the triggering operation simulated by the test case in the view container, the position of the element to be operated in the view container is searched for based on the network context, and the position of the element to be operated in the screen is determined based on the position of the view container in the screen, the canvas size of the view container and the position of the element to be operated in the view container; the position of the element to be operated in the screen is then passed into the element operation interface, so that the element operation interface performs the interface operation at the passed-in position. Because the original context of the application interface can be switched to the network context, and the position of the element to be operated in the view container can be searched for based on the network context, the accurate position of the element to be operated in the screen can be determined from the position of the view container in the screen, the canvas size of the view container and the position of the element in the view container. Passing this accurate position into the element operation interface lets the interface execute the operation at the passed-in position. In this way, the accurate searching capability of the network context is utilized, frequent context-switching operations are effectively avoided, accurate operation of application interface elements is achieved without frequently switching contexts, compatibility is better, and the accuracy of interface element operations is ensured while the interface test efficiency is effectively improved.
In one embodiment, the method further comprises:
determining the address and window information of an application to be tested through a test case;
The switching the original context of the application interface of the application to be tested to the network context comprises the following steps:
Acquiring an application interface of an application to be tested according to the address and the window information;
and switching the original context of the application interface into the network context.
The address refers to the address of the target interface or target page. For example, the address in the present application may be a URL (Uniform Resource Locator): every information resource on the WWW has a uniform address online, referred to as its URL, which may be understood as a network address.
The window information refers to a window providing information when an application program is operated, and different application programs can correspond to different window information, for example, app1 and App2 are simultaneously operated on a mobile terminal device at a certain moment, the window information corresponding to App1 is window 1, and the window information corresponding to App2 is window 2.
The application interface refers to a display interface corresponding to different application programs, for example, the application interface shown in fig. 4 is a display interface of a transaction application, and may also be referred to as a display page.
Specifically, in the UI test case scenario, when executing a certain test case, the server may run the test case through the UI automation test engine, and determine an address and window information of an application to be tested through the test case, further, the server may obtain an application interface of the application to be tested in the test case according to the determined address and window information, and switch the obtained original context of the application interface into a network context.
For example, as shown in fig. 3, a schematic diagram of the execution logic of the UI test case is shown. Taking a UI test case as an example, when the test case shown in fig. 3 is executed, the server may run the test case through the UI automation test engine, for example through Appium Server. In the initial execution stage of the test case, the server initializes the whole module through Appium Server and uses the native context of the mobile terminal by default, as shown in fig. 6, which is a schematic diagram of the native context view structure. Fig. 6 (a) is a schematic diagram of an application interface of an application to be tested, fig. 6 (b) is the view structure tree of the native context, and fig. 6 (c) is the information display interface of a certain element selected from the view structure tree of the native context. Fig. 7 is a schematic diagram of the view structure of the web context; fig. 7 (a) is a schematic diagram of an application interface of an application to be tested, and fig. 7 (b) is the view structure tree of the web context. When the test case triggers "switch to webView context", predefined processing logic is triggered within the automation test framework, i.e. the underlying framework, of the server to effect a switch from the native context shown in fig. 6 (b) to the web context shown in fig. 7 (b). The predefined processing logic is the specific behavior the framework triggers for the natural-language step "switch to webView context": the server switches the native context of the target display interface to the web context by matching the URL and window information of the target display interface. Therefore, the accurate searching capability of the network context can be utilized, frequent context-switching operations are effectively avoided, accurate operation of application interface elements is achieved without frequently switching contexts, and compatibility is better.
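The context selection described above can be illustrated with a small sketch. Given the list of context names an automation engine typically reports for a hybrid application (a native context plus one or more webview contexts), the framework picks the webview context belonging to the application under test; the function, package name and matching rule here are illustrative assumptions, with the URL/window matching of the original simplified away:

```python
def pick_web_context(contexts, app_package):
    """Return the first webview context belonging to the app, else None."""
    for name in contexts:
        # Webview contexts are conventionally reported as "WEBVIEW_<package>".
        if name.startswith("WEBVIEW_") and app_package in name:
            return name
    return None

# Context list of the kind an automation engine reports for a hybrid app.
contexts = ["NATIVE_APP", "WEBVIEW_com.example.securities"]
print(pick_web_context(contexts, "com.example.securities"))
# WEBVIEW_com.example.securities
```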
In one embodiment, in response to a triggering operation of the test case in the view container simulation, the step of searching for the position of the element to be operated in the view container based on the network context includes:
Responding to the trigger operation of the test case in the view container simulation, and determining an element to be operated;
Searching attribute information of an element to be operated in a network context;
Based on the attribute information of the element to be operated, the position of the element to be operated in the view container is determined.
The attribute information refers to attribute information of the element to be operated in the network context, for example, the attribute information of the element to be operated in the network context in the application can be class attribute or id attribute, that is, class attribute or id attribute of all elements in an application interface of the application to be tested is reserved in the network context, so that any element in the application interface can be accurately positioned based on the class attribute or the id attribute of each element.
Specifically, after the server determines the position of the view container in the application interface in the screen, in the process that the server executes the test case, the test case can simulate the triggering operation of the user in the view container, and then the server responds to the triggering operation of the test case in the view container, determines the element to be operated corresponding to the triggering operation, and searches the attribute information of the element to be operated in the network context; further, the server may determine the position of the element to be operated in the view container based on the attribute information of the element to be operated, that is, the server may determine the coordinate position of the element to be operated in the view container according to each attribute information corresponding to the element to be operated.
For example, as shown in fig. 5, an application interface diagram is provided for determining the location information of an element to be operated in a view container. Taking a UI test case as an example, when the test case shown in fig. 3 is executed, the server determines, through the UI automation test engine, the coordinate position of the webView container in the OEM security home page in the screen, obtaining the top left corner vertex A (0,275) and the bottom right corner vertex B (1080,1885). In the process of executing the test case shown in fig. 3, the test case may simulate a triggering operation of a user in the view container; for example, it may simulate a sliding operation triggered by the user on an element, that is, execute the logic "when sliding on the 'ChiNext index' element". In response to this element operation simulated by the test case in the view container, the server determines that the element to be operated is the "ChiNext index" element; the server may then find this element in the determined web context shown in fig. 7 (b), and, based on its found relevant attribute information, determine its position in the webView container: the element center coordinate C (329,116); that is, as shown in fig. 5, the center coordinate of the "ChiNext index" element in the webView container is C (329,116). Therefore, the accurate searching capability of the network context can be utilized, frequent context-switching operations are effectively avoided, accurate operation of application interface elements is achieved without frequently switching contexts, and compatibility is better.
In one embodiment, the step of locating the element to be manipulated in the view container based on the network context comprises:
Searching element center coordinates of elements to be operated in a view container based on attribute information of elements in a network context;
The element center coordinates are taken as the position of the element to be operated in the view container.
The element center coordinate refers to the center coordinate corresponding to an element; for example, as shown in fig. 5, the element center coordinate of the "ChiNext index" element in the webView container is C (329,116).
Specifically, after the server determines the position of the view container in the application interface in the screen, in the process that the server executes the test case, the test case can simulate the triggering operation of the user in the view container, and then the server responds to the triggering operation of the test case in the view container simulation to determine the element to be operated corresponding to the triggering operation; further, the server may search for an element center coordinate of the element to be operated in the view container based on the determined attribute information of each element in the network context, and use the element center coordinate as a position of the element to be operated in the view container.
For example, as shown in fig. 5, an application interface diagram is provided for determining the location information of an element to be operated in a view container. Taking a UI test case as an example, in the process of executing the test case shown in fig. 3, the test case may simulate a triggering operation of a user in the view container; for example, it may simulate a sliding operation triggered by the user on an element, that is, execute the logic "when sliding on the 'ChiNext index' element". In response to this element operation simulated by the test case in the view container, the server determines that the element to be operated is the "ChiNext index" element; the server may then find this element in the determined web context shown in fig. 7 (b), and, based on the found relevant attribute information of this text element, determine its position in the webView container: the element center coordinate C (329,116); that is, as shown in fig. 5, the center coordinate of the "ChiNext index" element in the webView container is C (329,116). Therefore, the accurate searching capability of the network context can be utilized, frequent context-switching operations are effectively avoided, accurate operation of web elements is achieved without frequently switching contexts, and compatibility is better.
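The attribute lookup and center-coordinate computation described above can be sketched as follows. The element table, the id attribute value and the bounding-rect layout are illustrative assumptions rather than the actual view tree; only the resulting center coordinate C (329,116) comes from the example:

```python
# Each entry maps an element's id attribute to its bounding rect inside the
# webView canvas: (left, top, width, height) in canvas coordinates.
web_context_elements = {
    "chinext-index": (279, 91, 100, 50),  # assumed rect whose center is C
}

def find_position_in_container(element_id):
    """Return the element center coordinate inside the view container."""
    left, top, width, height = web_context_elements[element_id]
    return (left + width // 2, top + height // 2)

print(find_position_in_container("chinext-index"))  # (329, 116)
```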
In one embodiment, the step of determining the location in the screen of the view container in the application interface comprises:
Determining a region corresponding to a view container of an application interface;
determining a first coordinate and a second coordinate corresponding to the region based on the size of the screen;
the first coordinate and the second coordinate are used as the position of the view container in the application interface in the screen.
The region refers to a display region corresponding to a view container of the application interface, for example, in the application interface schematic diagram of the application to be tested shown in fig. 4, the display region corresponding to the view container is a rectangular region in a dashed line frame in fig. 4. It is understood that the display area corresponding to the view container in the present application includes, but is not limited to, a rectangular area.
The size of the screen refers to the physical size of the screen, and in the present application, the size of the screen may include a width value and a height value of the screen.
The first coordinate and the second coordinate are used to distinguish the coordinates of different positions, for example, the first coordinate in the present application may be the coordinate at the upper left corner of the region corresponding to the view container, and the second coordinate may be the coordinate at the lower right corner of the region corresponding to the view container.
Specifically, after the server obtains the original context of the application interface of the application to be tested through the UI automation test engine and switches the original context of the application interface of the application to be tested into the network context, the server can determine the area corresponding to the view container of the application interface; further, the server may determine the first coordinate and the second coordinate corresponding to the region based on the size of the screen, and use the first coordinate and the second coordinate as the position of the view container in the application interface in the screen.
For example, as shown in fig. 4, an application interface of an application to be tested is shown. Taking a UI test case as an example, when the test case shown in fig. 3 is executed, the server can acquire the native context of the OEM security home page through the UI automation test engine Appium Server and switch it into the web context, thereby obtaining the web context corresponding to the OEM security home page. The server can then determine, through the UI automation test engine Appium Server, the position of the view container in the application interface in the screen; that is, the server can determine that the region corresponding to the webView container of the home page in the screen is the rectangular region inside the dashed-line frame in fig. 4. Further, the server may determine, based on the size of the screen region of the mobile terminal device shown by the bold black frame in fig. 4, the first coordinate and the second coordinate corresponding to the rectangular region, that is, the coordinates of the top left corner vertex A and the bottom right corner vertex B of the rectangular region, and use them as the position of the webView container of the OEM security home page in the screen, namely: the top left corner vertex A (0,275) and the bottom right corner vertex B (1080,1885). In other words, the coordinates of the two vertices on the diagonal of the rectangular region corresponding to the webView container may be used as the coordinate location of the webView container in the screen.
Therefore, the accurate position information of the current mobile terminal browser container control in the mobile phone screen can be determined through the predefined interface's underlying logic, providing accurate position information for the precise coordinate calculation of elements through a preset algorithm; the expected effect can then be achieved through gestures of the native platform, and the UI test efficiency is higher compared with a scheme of frequently switching the context.
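The determination of the two diagonal coordinates can be sketched as follows; the container's vertical extent is taken from the running example, and the screen width is an assumption consistent with the figures:

```python
# Assumed screen and container extents from the example above.
screen_width = 1080                      # physical screen width value
container_top, container_bottom = 275, 1885  # container's vertical extent

# The container spans the full screen width, so its diagonal vertices are:
first_coordinate = (0, container_top)                  # vertex A (top-left)
second_coordinate = (screen_width, container_bottom)   # vertex B (bottom-right)

print(first_coordinate, second_coordinate)  # (0, 275) (1080, 1885)
```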
In one embodiment, as shown in FIG. 8, the region is a rectangular region, and the first and second coordinates are coordinates at diagonal vertices of the rectangular region; the canvas size of the view container includes a canvas width value and a canvas height value; the step of determining the position of the element to be operated in the screen based on the position of the view container in the screen, the canvas size of the view container and the position of the element to be operated in the view container, comprises:
step 802, determining a container width value and a container height value based on the first coordinates and the second coordinates;
step 804, determining a location of the element to be operated in the screen based on the container width value, the container height value, the canvas width value, the canvas height value, and the location of the element to be operated in the view container.
The container width value refers to the physical width value of the view container, and the container height value refers to the physical height value of the view container; the container width value and the container height value of the view container are the absolute pixel sizes occupied by the web container in the mobile terminal screen.
Specifically, after the server searches for the position of the element to be operated in the view container corresponding to the trigger operation based on the determined network context in response to the trigger operation of the test case in the view container simulation, the server may acquire the canvas size of the view container, that is, the server may acquire the canvas width value and the canvas height value of the view container, and determine the container width value and the container height value of the view container based on the position of the view container in the screen, that is, the server may determine the container width value and the container height value based on the determined first coordinate and the second coordinate of the rectangular area corresponding to the view container, and determine the position of the element to be operated in the screen based on the determined container width value, the determined container height value, the determined canvas width value, the determined canvas height value, and the determined position of the element to be operated in the view container.
For example, as shown in fig. 5, an application interface diagram is provided for determining the location information of an element to be operated in a view container. Taking a UI test case as an example, when the test case shown in fig. 3 is executed, the server determines, through the UI automation test engine, the coordinate position of the webView container in the OEM security home page in the screen, obtaining the top left corner vertex A (0,275) and the bottom right corner vertex B (1080,1885). In the process of executing the test case shown in fig. 3, the test case may simulate the element sliding operation triggered by the user in the view container, that is, execute the logic "when sliding on the 'ChiNext index' element"; in response to this simulated element operation in the view container, the server determines that the position of this element in the webView container is the element center coordinate C (329,116), that is, as shown in fig. 5, the center coordinate of the "ChiNext index" element in the webView container is C (329,116). Further, suppose the canvas size of the webView container obtained by the server includes the canvas width value K1=400 and the canvas height value H1=749. The server may determine the container width value K0=1080 and the container height value H0=1885 of the webView container based on the determined first coordinate A (0,275) and second coordinate B (1080,1885) of the rectangular region corresponding to the webView container. Then, based on the element center coordinate C (329,116), the canvas width value K1=400, the canvas height value H1=749, the container width value K0=1080 and the container height value H0=1885 of the webView container, the server may determine that the absolute position of this element in the screen is P (X, Y); that is, the server determines that the absolute position of this element in the display screen outlined by the bold black line in fig. 5 is P (X, Y). Therefore, when the web page of the native platform of the mobile terminal is tested, the context need not be switched, accurate coordinate calculation of elements is realized through a preset algorithm, the expected effect is realized by utilizing the gesture of the native platform, and the UI test efficiency is higher compared with a scheme of frequently switching the context.
In one embodiment, the position of the element to be operated in the view container includes a first abscissa and a first ordinate corresponding to the element to be operated; determining a position of the element to be operated in the screen based on the container width value, the container height value, the canvas width value, the canvas height value, and the position of the element to be operated in the view container, comprising:
Determining a second abscissa of the element to be operated in the screen based on the first abscissa, the container width value and the canvas width value;
Determining a distance value of the view container from the top of the screen based on the container height value;
a second ordinate of the element to be operated in the screen is determined based on the first ordinate, the container height value, the canvas height value, and the distance value.
Wherein the first abscissa and the first ordinate refer to the abscissa and ordinate corresponding to the element to be operated in the view container, and the second abscissa and the second ordinate refer to the abscissa and ordinate corresponding to the element to be operated in the screen.
Specifically, the server responds to the trigger operation of the test case in the view container simulation, searches a first abscissa and a first ordinate of an element to be operated in the view container corresponding to the trigger operation based on the determined network context, and then can determine a second abscissa of the element to be operated in the screen based on the first abscissa of the element to be operated, a container width value of the view container and a canvas width value of the view container; further, the server may determine a distance value of the view container from the top of the screen based on the container height value of the view container, and determine a second ordinate of the element to be operated in the screen based on the first ordinate corresponding to the element to be operated, the container height value of the view container, the canvas height value of the view container, and the distance value of the view container from the top of the screen.
For example, as shown in fig. 5, an application interface diagram is provided for determining the location information of an element to be operated in a view container. Taking a UI test case as an example, when the test case shown in fig. 3 is executed, the server determines, through the UI automation test engine, the coordinate position of the webView container in the OEM security home page in the screen, obtaining the top left corner vertex A (0,275) and the bottom right corner vertex B (1080,1885). In the process of executing the test case, the test case may simulate the element sliding operation triggered by the user in the view container, that is, execute the logic "when sliding on the 'ChiNext index' element"; in response to this simulated element operation, the server determines that the position of this element in the webView container is the element center coordinate C (329,116). Further, suppose the canvas size of the webView container obtained by the server includes the canvas width value K1=400 and the canvas height value H1=749, and that the server determines the container width value K0=1080 and the container height value H0=1885 of the webView container based on the position coordinates A (0,275) and B (1080,1885) of the webView container in the screen. The server may then, based on the first abscissa 329 in the element center coordinate C (329,116), the container width value K0=1080 and the canvas width value K1=400 of the webView container, determine the second abscissa of the element to be operated in the screen as X. Further, assuming the server determines the distance value D=275 of the webView container from the top of the screen based on the container height value H0=1885, the server may, based on the first ordinate 116 in the element center coordinate C (329,116), the container height value H0=1885, the canvas height value H1=749 of the webView container and the distance value D=275, determine the second ordinate of the element to be operated in the screen as Y, thereby determining the position of the "ChiNext index" element in the screen as P (X, Y); that is, the server determines that the absolute position of this element in the display screen outlined by the bold black line in fig. 5 is P (X, Y).
Therefore, when a web page on the native platform of the mobile terminal is tested, no context switching is needed: accurate coordinate calculation of elements is achieved through a preset algorithm, the expected effect is achieved using native-platform gestures, and UI test efficiency is higher than in schemes that frequently switch contexts.
In one embodiment, the step of determining a second abscissa of the element to be manipulated in the screen based on the first abscissa, the container width value, and the canvas width value comprises:
determining a product between the first abscissa and the container width value to obtain a first product value;
determining a ratio between the first product value and the canvas width value, and taking the determined ratio as a second abscissa of the element to be operated in the screen;
the step of determining a second ordinate of the element to be operated in the screen based on the first ordinate, the container height value, the canvas height value and the distance value comprises:
determining a product between the first ordinate and the container height value to obtain a second product value;
determining a ratio between the second product value and the canvas height value to obtain a first ratio;
determining the sum of the first ratio and the distance value, and taking the sum as a second ordinate of the element to be operated in the screen.
Specifically, the server responds to the trigger operation simulated by the test case in the view container and, based on the determined network context, looks up the first abscissa and the first ordinate of the element to be operated corresponding to the trigger operation. The server may then determine the second abscissa of the element to be operated in the screen based on the first abscissa, the container width value of the view container, and the canvas width value of the view container. Further, the server may determine the product of the first ordinate and the container height value to obtain a second product value, determine the ratio of the second product value to the canvas height value, and add the distance of the view container from the top of the screen to obtain the second ordinate of the element to be operated in the screen. The calculation formulas for the absolute position coordinates of the element to be operated, i.e. the target element, in the screen are shown in the following formulas (1) and (2):
Absolute abscissa X of the target element in the screen = (relative X of the target element in the web container × physical width of the web container) / canvas width of the web container; (1)
Absolute ordinate Y of the target element in the screen = (relative Y of the target element in the web container × physical height of the web container) / canvas height of the web container + distance of the web container from the top of the screen; (2)
That is, the server in the present application may calculate the second abscissa of the element to be operated in the screen based on formula (1), and calculate the second ordinate of the element to be operated in the screen based on formula (2). Therefore, when a web page on the native platform of the mobile terminal is tested, no context switching is needed: accurate coordinate calculation of elements is achieved through a preset algorithm, the expected effect is achieved using native-platform gestures, and UI test efficiency is higher than in schemes that frequently switch contexts.
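As an illustration, formulas (1) and (2) can be sketched in Python with the values from the worked example above (a minimal sketch; the function and parameter names are illustrative and not part of the described test engine):

```python
def element_screen_position(rel_x, rel_y,
                            container_width, container_height,
                            canvas_width, canvas_height, top_offset):
    """Map an element's web-context coordinates to absolute screen
    coordinates, following formulas (1) and (2)."""
    # (1) scale the relative X by physical width / canvas width
    x = rel_x * container_width / canvas_width
    # (2) scale the relative Y by physical height / canvas height,
    # then add the container's distance from the top of the screen
    y = rel_y * container_height / canvas_height + top_offset
    return x, y

# Values from the worked example: C(329, 116) in a webView container with
# canvas 400 x 749, physical size 1080 x 1885, and top offset 275
x, y = element_screen_position(329, 116, 1080, 1885, 400, 749, 275)
```

With these inputs the sketch yields X = 888.3 and Y of roughly 566.9; the actual engine would pass such coordinates on to the native gesture interface.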
In one embodiment, the step of inputting the position of the element to be operated in the screen into the element operation interface so that the element operation interface performs the interface operation at the input position includes:
And transmitting the second abscissa and the second ordinate of the element to be operated in the screen into the element operation interface, so that the element operation interface executes the corresponding interface operation at the transmitted second abscissa and second ordinate.
Specifically, after determining the position of the element to be operated in the screen based on the position of the view container in the screen, the canvas size of the view container, and the position of the element to be operated in the view container, the server may transmit the absolute position of the element to be operated in the screen, that is, the second abscissa and the second ordinate, into the element operation interface, so that the element operation interface performs the interface operation at that position. In other words, after the server transmits the absolute position of the element to be operated in the screen into the element operation interface, the element operation interface may call the gesture-related interface method of the mobile terminal to trigger the related UI operation at the second abscissa and second ordinate.
For example, fig. 5 provides an application interface diagram for determining the location information of an element to be operated in a view container. Taking the UI test case as an example, assume that the server determines that the absolute position of the "startup board finger" element in the display screen outlined by the bolded black line in fig. 5 is P (X1, Y1), based on the element center coordinates C (329, 116), the canvas width value K1 = 400 and canvas height value H1 = 749 of the webView container shown in fig. 5, the physical width value K0 = 1080 and physical height value H0 = 1885 of the webView container, and the distance value d = 275 of the webView container from the top of the screen. After the server transmits the position P (X1, Y1) of this element in the screen into the element operation interface, the element operation interface can call the gesture sliding interface method of the mobile terminal to trigger the interface sliding operation at the transmitted position P (X1, Y1). Therefore, the method provided by the embodiment of the application does not depend on the specific business logic of the web page, realizes UI operation entirely through scripts simulating user gestures, and has high compatibility.
In one embodiment, the step of inputting the position of the element to be operated in the screen into the element operation interface so that the element operation interface performs the interface operation at the input position includes:
When the triggering operation is a sliding operation, the position of the element to be operated in the screen is transmitted to the element operation interface, so that the element operation interface executes interface sliding operation at the transmitted position;
when the triggering operation is a clicking operation, the position of the element to be operated in the screen is transmitted into the element operation interface, so that the element operation interface executes the interface clicking operation at the transmitted position.
Specifically, after the server determines the position of the element to be operated in the screen based on the position of the view container in the screen, the canvas size of the view container, and the position of the element to be operated in the view container, the server may transmit this position into the element operation interface so that the element operation interface performs an interface operation at the transmitted position. When the trigger operation is a sliding operation, the server transmits the position of the element to be operated in the screen into the element operation interface so that the element operation interface executes an interface sliding operation at the transmitted position; when the trigger operation is a clicking operation, the server transmits the position into the element operation interface so that the element operation interface executes an interface clicking operation at the transmitted position. In this way, the interface operation matching the simulated trigger operation is executed accurately at the position of the element.
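The branching described above can be sketched as a small dispatcher (an illustrative sketch; the operation names and the return format are assumptions, not the actual element operation interface):

```python
def dispatch_interface_operation(trigger_operation, x, y):
    """Select which interface operation the element operation interface
    should execute at the given screen position."""
    operations = {
        "slide": "interface_slide",  # sliding trigger -> interface sliding operation
        "click": "interface_click",  # clicking trigger -> interface click operation
    }
    if trigger_operation not in operations:
        raise ValueError("unsupported trigger operation: " + trigger_operation)
    # the element operation interface would execute this operation at (x, y)
    return operations[trigger_operation], (x, y)
```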
In one embodiment, after the position of the element to be operated in the screen is transmitted into the element operation interface so that the element operation interface performs the interface operation at the transmitted position, the method further comprises:
Acquiring an interface display result of the element operation interface after the element operation interface executes interface operation at the input position;
and comparing the interface display result with the target interface display result to obtain a test result of the application interface of the application to be tested.
The target interface display result refers to the expected interface display result of the application interface of the application to be tested, and may be preset. For example, if the current user sees the first-screen information of the application to be tested on the screen, then when an operation of sliding up one screen is triggered by the automated test framework, the user would normally see the second-screen information of the application to be tested; the target interface display result may therefore be preset as displaying the second-screen information.
Specifically, after the server transmits the position of the element to be operated in the screen into the element operation interface so that the element operation interface executes the interface operation at the transmitted position, the server can obtain the interface display result of the element operation interface after the interface operation is executed, and compare it with the target interface display result to obtain the test result of the application interface of the application to be tested.
For example, fig. 5 provides an application interface diagram for determining the location information of an element to be operated in a view container. Taking a UI test case as an example, assume the server transmits the position P (X, Y) of the element in the screen into the element operation interface, and the element operation interface calls the gesture sliding interface method of the mobile terminal to trigger the interface sliding operation at the transmitted position P (X, Y). The server may then obtain the interface display result A of the element operation interface after the interface operation is performed at P (X, Y), and compare it with the target interface display result A0. If the comparison result is consistent, the server may determine that the test result of the application interface of the application to be tested is a test success; if the comparison result is inconsistent, the server may determine that the test result is a test failure.
For another example, taking the mobile terminal as a mobile phone, suppose the mobile phone App currently stays on a relatively long news list page. When an operation of sliding up one screen is triggered by the automated test framework, the user would normally see the second-screen news information. In an automated testing scenario, the general practice is to write an assertion verifying whether the automated test framework can identify a page element that exists on the second screen: if the element is identified, the automated test case is deemed successful; if not, the test case fails. Therefore, by calculating the accurate coordinate position of the web page element on the actual physical screen and then triggering the related UI behavior through the automated test framework simulating a user gesture at that specific coordinate position, there is no coupling between the related UI behavior and specific business logic, and the compatibility is higher.
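The assertion logic described above can be sketched as follows (a minimal sketch; the element ids are hypothetical, and a real framework would compare structured page state rather than plain sets):

```python
def evaluate_test(identified_elements, expected_elements):
    """A case passes when every page element expected on the second screen
    is identified after the slide operation; otherwise it fails and the
    missing elements are reported."""
    missing = set(expected_elements) - set(identified_elements)
    if missing:
        return "test failure", sorted(missing)
    return "test success", []

# Hypothetical element ids for the second screen of a news list page
result = evaluate_test({"news_item_11", "news_item_12"}, {"news_item_11"})
```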
In one embodiment, the method is applied to a computer end, and before the original context of the application interface of the application to be tested is switched to the network context, the method further includes:
establishing connection between a computer end and equipment to be tested;
running a screen-casting application at the computer end;
recording, through the screen-casting application, the operations executed on the application interface and the corresponding interface element information;
and generating a test video from the application interface, the operations, and the interface element information.
The computer end refers to a device for running the UI automation test engine, for example, the computer end in the present application may be a desktop computer deployed in different areas.
The device to be tested refers to a device running an application to be tested, for example, the device to be tested in the application can be mobile phone terminals of different operating systems.
Specifically, taking a PC (personal computer) as the computer end for explanation: the PC end may establish a communication connection with the device to be tested. After the connection is established, the screen-casting application runs on the PC end, and the application interface displayed by the device to be tested can be mirrored on the PC end at a 1:1 scale. After the PC end runs the screen-casting application, it starts a behavior detection service, which detects and collects all operations on the mirrored screen by acquiring a handle and generates the related test data. Further, the PC end can record, through the screen-casting application, each operation executed by the test case on the application interface of the application to be tested and the interface element information displayed in the application interface when each operation is executed. After the PC end starts the UI automation test engine, the engine runs the test case, which simulates a user executing a series of operations on the application interface of the application to be tested while the application responds to those operations. The PC end can thus generate a corresponding test video from each recorded operation executed on the application interface and the interface element information displayed when each operation was executed.
Therefore, when web-page automation is performed on the native platform of the mobile terminal, no context switching is needed: accurate coordinate calculation of elements is achieved through a preset algorithm, and the expected effect is achieved using native-platform gestures. Compared with schemes that frequently switch contexts, automated test efficiency is higher; moreover, the corresponding test video can be generated automatically while the test efficiency is effectively improved, which brings convenience to users.
The application also provides an application scene, which applies the test method of the application interface. Specifically, the application of the test method of the application interface in the application scene is as follows:
When a user wants to test, through a PC end, a page embedded in a WebView mobile-terminal browser control within a mobile-terminal native App, a test tool can be run on the PC end. For example, when such a page in a native App (an Android or iOS application) is tested through Appium Server, the above test method of the application interface can be adopted. That is, Appium Server can obtain the original context of the page of the application to be tested (the page embedded in the WebView browser control within the native App) and switch the original context to the network context. Further, Appium Server may determine the position of the view container in the page in the screen and, in response to a trigger operation simulated by the test case in the view container, look up the position of the element to be operated in the view container based on the network context. Appium Server then determines the position of the element to be operated in the screen based on the position of the view container in the screen, the canvas size of the view container, and the position of the element to be operated in the view container, and transmits this position into the element operation interface so that the element operation interface performs the interface operation at the transmitted position.
Therefore, element searching can still be performed based on the web-context view structure. When the upper-layer business expects a gesture operation (such as sliding or long pressing), the accurate position of the current element on the screen (different from its position in the web-context view) is calculated through unified mapping logic, and the mobile-terminal gesture-related interface method of Appium Server is then called with the precise coordinates, realizing accurate operation of web page elements in the UI automation scenario, improving interface test efficiency, and ensuring the operation accuracy of interface elements.
The method provided by the embodiment of the application can be applied to the scenes of various UI test cases. The following describes a test method of an application interface provided by the embodiment of the present application, taking a scenario of a UI test case as an example.
Appium Server is a UI automation test engine that supports UI automation tests on multiple platforms such as Android, iOS, and Web.
UI automation page context: the page structure information of the tested application that Appium Server obtains at runtime; element searching and operation are performed based on this page context.
Native context: when Appium Server tests a native App (an Android or iOS application) on the mobile terminal, the obtained UI automation page context is structured as a document in XML format.
Web context: when Appium Server tests a page embedded in a WebView browser control within a mobile-terminal native App (an Android or iOS application), the obtained UI automation page context has the native HTML structure.
CSS selector: specifies which elements a CSS rule will be applied to.
In the conventional application interface test, when a page embedded in a WebView mobile-terminal browser control within a mobile-terminal Native App (an Android or iOS application) is tested through Appium Server, two types of context structures can be selected: the Native context or the Web context. If the native context is used, the complete mobile-terminal gesture operation methods provided by the Appium Server engine can be used directly to perform various complex gesture operations. However, using the native context has a disadvantage: elements developed for the front-end page lose their front-end CSS attributes in the native context, and only a view structure escaped by the Appium Server engine is obtained, so it is difficult to accurately locate a unique element in scenarios with complex logic. If the web context is used, specific elements can be accurately located and searched based on front-end CSS attributes, but the web context mainly considers PC-end scenarios such as mouse clicks and keyboard input events, and lacks support for mobile-terminal gestures.
The page development mode in the browser control is based on web-end page development, so the web context of the page can be used in automated testing; meanwhile, when the browser control runs on an Android or iOS mobile-terminal platform, the page elements in the browser control can be parsed into native-platform information, so the native context can also be used in automated testing.
Traditional scheme (1): the operation of the web page is directly realized based on the Native context;
FIG. 6 is a schematic diagram of a native context view structure. Fig. 6 (a) shows a page developed with the H5 hybrid technology, running on the mobile platform. The App Source shown in fig. 6 (b) is the view structure tree obtained through the native context, and the complete mobile-terminal gesture operation methods may be invoked in the native context. However, in the native-context view structure shown in fig. 6, the only information obtained for each element is its text; there is no other information such as an element identifier id.
Traditional scheme (2): searching for elements in the web context; when an element operation is required, switching to the native context to call the mobile-terminal gesture operation method, and then switching back to the web context for subsequent operations.
As shown in fig. 9, a schematic diagram of the view structure obtained using the web context is provided. Fig. 9 (a) is an application interface schematic of the application to be tested, and fig. 9 (b) is the view structure obtained through the web context. In the view structure shown in fig. 9 (b), the complete HTML tags can be seen, so accurate element positioning can be performed using the complete web-end CSS selector syntax or XPath expressions. However, when elements need to be slid, long-pressed, etc., it is necessary to switch to the native context to call the methods of the Appium Server engine and then the complete mobile-terminal gesture API methods.
CSS attributes are a general technique in front-end page development: basically every page element has a class attribute or an id attribute, and elements can be accurately found using these attributes. The operations the web context can provide for elements are encapsulated by the Selenium automation test framework, which originates from PC-end web automation test scenarios and mainly provides mouse clicks, keyboard inputs, etc., whereas mobile-end web users rely more on gestures (e.g., swipes, long presses, drags), for which the web context provides almost no event simulation capability.
Traditional scheme (3): javaScript code is written and executed by a JavaScript executor of the Appium Server engine, and the operation behavior of the element is triggered in a code mode.
Directly operating the web page based on the Native context, as in conventional scheme (1), has the following disadvantage:
Based on the Native context, element searching and positioning cannot be performed through front-end CSS selector functions, so it is difficult to accurately locate a unique element for controls whose text is changeable.
In conventional scheme (2), searching for elements in the web context, switching to the native context when an element operation is required to call the mobile-terminal gesture operation method, and then switching back to the web context for subsequent operations has the following drawback:
This scheme can make full use of the web context's ability to accurately search for elements and, at the same time, achieve accurate operation using native-context gestures. However, the context must be switched frequently while executing the automation case, and the context-switching operation is time-consuming: it usually takes about 20 seconds on the mac platform and about 50 seconds on the windows platform, which greatly reduces the execution efficiency of UI automation cases.
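The time cost can be illustrated with a back-of-the-envelope calculation (the per-switch durations are the approximate figures quoted above; the number of operations per case is a hypothetical example):

```python
def switching_overhead(num_switches, seconds_per_switch):
    """Total time spent purely on context switches during one automated case."""
    return num_switches * seconds_per_switch

# A case performing 10 gesture operations, each requiring a switch to the
# native context and back, pays 20 switches on top of the test logic itself.
mac_cost = switching_overhead(20, 20)       # about 400 seconds on the mac platform
windows_cost = switching_overhead(20, 50)   # about 1000 seconds on the windows platform
```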
Traditional scheme (3): the method for writing the javaScript code triggers the accurate operation of the UI automation use case, and the scheme has the following defects: on the mobile web side, many gesture operations are touch gestures customized by a web platform, and the touch gestures are not completely and definitely specified, so that the written JavaScript code is difficult to be compatible with pages with more coverage, and the universality is not good enough.
Therefore, in order to solve the problems of the conventional approaches, the application provides a method for accurately operating a web page in a UI automation scenario based on mobile-terminal native-platform gestures. The bottom layer of a predefined interface maintains the accurate position information of the current mobile-terminal browser container control in the mobile phone screen, calculates the accurate position of the element to be operated on the mobile phone screen from the element's relative position information in the browser, and then directly calls the mobile-terminal gesture interface to trigger the related UI operation. The method provided by the application can utilize the accurate search capability of the web context while, without frequently switching contexts, fully using mobile-terminal gestures to realize accurate operation of web elements; it thus has better compatibility, improves interface test efficiency, and ensures the operation accuracy of interface elements.
On the product side, fig. 3 shows a schematic diagram of the execution logic of the UI test case. Taking the test case shown in fig. 3 as an example, when the specific case scenario is executed, the user first needs to switch to the web context; then, in the case code at line 34 shown in fig. 3, a relative sliding operation on the page element represented by "startup board finger" is triggered. The logic implementation of "when sliding 'startup board finger' -0.7" executed by the test case is located at the bottom layer of the framework: the framework bottom layer accurately calculates the precise position coordinates of the "startup board finger" element in the mobile phone screen, invokes the mobile-terminal element sliding operation, and triggers a sliding gesture on the mobile phone screen to achieve the expected behavior; it then asserts whether the displayed content of the screen meets expectations. For example, suppose the mobile App currently stays on a relatively long news list page; when the slide-up-one-screen operation is triggered by the automated test framework, the user would normally see the second-screen news information. In an automated testing scenario, the general practice is to write an assertion verifying whether the automated test framework can identify a page element that exists on the second screen: if the element is identified, the automated test case is deemed successful; if not, the test case fails.
On the technical side, fig. 10 shows an overall flow diagram of the method for accurately operating a web page in a UI automation scenario based on mobile-terminal native-platform gestures. The overall flow of the method provided by the application is shown in fig. 10 (b), and fig. 10 (a) is an execution logic schematic of a test case, as follows:
(1) In the case start-up and execution stage, the whole module is initialized, and the mobile-terminal native context is used by default.
(2) The case triggers "switch to webView context", which triggers predefined logic within the automated test framework: the framework switches from the native context to the web context and records the complete coordinate information of the webView container (in the native context) in the current screen.
Because the current scene is a mobile-terminal App, the native context is the default when the automated test framework is initialized; the predefined logic is the specific behavior the framework triggers for the natural-language phrase "switch to webView context", switching from the native context to the web context by matching the url and window information of the target page.
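The described matching of url and window information can be sketched as follows (the data shapes and the context names here are assumptions for illustration; a real implementation would inspect the context list exposed by the automation engine):

```python
def pick_webview_context(contexts, url_fragment):
    """Choose the web context whose page url matches the target page;
    fall back to the native context when no match is found."""
    for name, url in contexts.items():
        if name.startswith("WEBVIEW") and url_fragment in url:
            return name
    return "NATIVE_APP"

# Hypothetical context list: context name -> page url
available = {
    "NATIVE_APP": "",
    "WEBVIEW_com.example.app": "https://example.invalid/securities/home",
}
chosen = pick_webview_context(available, "securities")
```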
(3) When the natural-language syntax "when sliding 'startup board finger' -0.7" is invoked, the automated test framework first finds the position [329, 116] of the text element in the webView container according to the web context; the element location information [329, 116] is returned directly by the automated test framework based on the element's coordinate values from the left and top of the HTML document as seen from the web context.
(4) In the application interface diagram of the application under test shown in fig. 4, assume the canvas width of the web container is 400, the canvas height is 749, the physical width is 1080, the physical height is 1885, the distance of the web container from the top of the screen is 275, the relative X of the target element in the web container is 329, and the relative Y is 116. The automated test framework may then calculate the absolute position coordinate P (x1, y1) of the target element in the screen based on the absolute-position calculation formulas, shown as formulas (1) and (2) above.
It can be understood that, in the embodiment of the application, the physical width and physical height of the web container are the absolute pixel values occupied by the web container in the mobile phone screen, whereas the canvas width and canvas height of the web container are virtual: they are the width and height in the programming sense, used so that the program can lay out and position page elements in the web.
(5) According to the absolute position coordinate of the target element in the screen calculated in step (4), the element operation method of the native platform is called and the precise coordinates are transmitted, thereby realizing the expected UI operation.
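Steps (1) to (5) can be sketched end to end with a stub driver (the driver methods such as `find_element_position` and `tap` are stand-ins for illustration, not the actual engine API; real usage would go through the automation engine's gesture interface):

```python
class StubDriver:
    """Stand-in for a UI automation driver; records the gesture calls."""
    def __init__(self):
        self.context = "NATIVE_APP"   # step (1): native context by default
        self.taps = []

    def switch_to_web_context(self):
        self.context = "WEBVIEW"      # step (2): switch to the web context

    def find_element_position(self, text):
        # step (3): web-context lookup; fixed value from the worked example
        return (329, 116)

    def tap(self, x, y):
        # step (5): native-platform operation at absolute screen coordinates
        self.taps.append((x, y))

def operate_element(driver, text, container=(1080, 1885),
                    canvas=(400, 749), top_offset=275):
    driver.switch_to_web_context()
    rel_x, rel_y = driver.find_element_position(text)
    # step (4): map web-context coordinates to absolute screen coordinates
    x = rel_x * container[0] / canvas[0]
    y = rel_y * container[1] / canvas[1] + top_offset
    driver.tap(x, y)
    return x, y

driver = StubDriver()
pos = operate_element(driver, "startup board finger")
```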
The technical scheme of the application has the beneficial effects that:
(1) In a scene of testing a web page, the mobile terminal native platform can accurately position elements by using a CSS selector of a web context view to search the elements;
(2) When operating a web page on the native platform of the mobile terminal, the context does not need to be switched frequently: accurate coordinate calculation of elements is realized through a preset algorithm, and the expected effect is realized using native-platform gestures, so the test efficiency is higher compared with schemes that frequently switch contexts;
(3) The method provided by the embodiment of the application does not depend on the specific business logic of the web page; the UI operation is realized entirely by a script that simulates user gestures. In the scheme of the application, the precise coordinate position of a web page element on the actual physical screen is finally calculated, and the automated test framework then triggers the related UI behavior by simulating a user-gesture operation at that coordinate position; since there is no coupling between this behavior and any specific service, the compatibility is high.
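A sketch of how a test script might enter the web context once for element lookup while all gestures remain native; the driver calls in the comments follow an Appium-style API and are the author's assumption, not a framework prescribed by the patent:

```python
def pick_webview_context(contexts):
    """From the context names reported by an automation driver
    (e.g. ['NATIVE_APP', 'WEBVIEW_com.example.app']), return the
    first web context, or None if the page exposes none."""
    return next((c for c in contexts if c.startswith("WEBVIEW")), None)

# Hypothetical usage with an Appium-style driver:
#   ctx = pick_webview_context(driver.contexts)
#   driver.switch_to.context(ctx)        # enter the web context once
#   el = driver.find_element("css selector", "#buy-button")
#   ...read el's layout, compute screen coordinates, then act
#   via native gestures only, without switching back and forth...
```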
It should be understood that, although the steps in the flowcharts related to the embodiments described above are shown sequentially as indicated by the arrows, these steps are not necessarily performed in the order indicated. Unless explicitly stated herein, the execution order of these steps is not strictly limited, and they may be performed in other orders. Moreover, at least some of the steps in the flowcharts described in the above embodiments may include a plurality of sub-steps or stages, which are not necessarily performed at the same moment but may be performed at different moments, and whose execution order is not necessarily sequential; they may instead be performed in turn or alternately with at least part of the other steps or stages.
Based on the same inventive concept, the embodiment of the application further provides a test device for an application interface for implementing the above test method of an application interface. The implementation solution provided by the device is similar to that described in the above method, so for the specific limitations in the one or more embodiments of the test device for an application interface provided below, reference may be made to the limitations of the test method for an application interface above, which are not repeated herein.
In one embodiment, as shown in fig. 11, there is provided a test apparatus for an application interface, including: a switching module 1102, a determining module 1104, a searching module 1106, and an input module 1108, wherein:
The switching module 1102 is configured to switch an original context of an application interface of the application to be tested into a network context.
A determining module 1104 is configured to determine a position of the view container in the application interface in the screen.
A searching module 1106 is configured to search, based on the network context, the position of the element to be operated in the view container in response to the trigger operation simulated by the test case in the view container.
The determining module 1104 is further configured to determine a location of the element to be operated in the screen based on a location of the view container in the screen, a canvas size of the view container, and a location of the element to be operated in the view container.
The input module 1108 is configured to input the position of the element to be operated in the screen into an element operation interface, so that the element operation interface performs an interface operation at the input position.
In one embodiment, the apparatus further comprises an acquisition module. The determining module is further configured to determine the address and window information of the application to be tested through the test case; the acquisition module is configured to acquire an application interface of the application to be tested according to the address and the window information; and the switching module is further configured to switch the original context of the application interface into the network context.
In one embodiment, the apparatus further comprises a searching module. The determining module is further configured to determine the element to be operated in response to the trigger operation simulated by the test case in the view container; the searching module is configured to search the network context for the attribute information of the element to be operated; and the determining module is further configured to determine the position of the element to be operated in the view container based on the attribute information of the element to be operated.
In one embodiment, the determining module is further configured to determine an area corresponding to a view container of the application interface; determining a first coordinate and a second coordinate corresponding to the region based on the size of the screen; and taking the first coordinate and the second coordinate as positions of view containers in the application interface in a screen.
In one embodiment, the region is a rectangular region, and the first and second coordinates are coordinates at diagonal vertices of the rectangular region; the canvas size of the view container includes a canvas width value and a canvas height value; the determining module is further configured to determine a container width value and a container height value based on the first coordinate and the second coordinate; and determining the position of the element to be operated in the screen based on the container width value, the container height value, the canvas width value, the canvas height value and the position of the element to be operated in the view container.
In one embodiment, the position of the element to be operated in the view container includes a first abscissa and a first ordinate corresponding to the element to be operated; the determining module is further configured to determine a second abscissa of the element to be operated in the screen based on the first abscissa, the container width value, and the canvas width value; determining a distance value of the view container from the top of the screen based on the container height value; a second ordinate of the element to be operated in the screen is determined based on the first ordinate, the container height value, the canvas height value, and the distance value.
In one embodiment, the determining module is further configured to determine a product between the first abscissa and the container width value, resulting in a first product value; determining a ratio between the first product value and the canvas width value, and taking the determined ratio as a second abscissa of the element to be operated in the screen; determining a product between the first ordinate and the container height value to obtain a second product value; determining a sum between the canvas height value and the distance value to obtain a first sum value; and determining a ratio between the second product value and the first sum value, and taking the determined ratio as a second ordinate of the element to be operated in the screen.
In one embodiment, the input module is further configured to, when the trigger operation is a sliding operation, input a position of the element to be operated in the screen into an element operation interface, so that the element operation interface performs an interface sliding operation at the input position; when the triggering operation is a clicking operation, the position of the element to be operated in the screen is transmitted to an element operation interface, so that the element operation interface executes an interface clicking operation at the transmitted position.
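The slide/click dispatch described by this embodiment can be sketched as follows; `swipe_at` and `tap_at` are hypothetical method names for the element operation interface, not an API defined by the patent:

```python
def perform(trigger, x, y, op_interface):
    """Pass the element's screen position (x, y) to the element
    operation interface and dispatch by trigger type."""
    if trigger == "slide":
        return op_interface.swipe_at(x, y)   # interface sliding operation
    elif trigger == "click":
        return op_interface.tap_at(x, y)     # interface clicking operation
    raise ValueError(f"unsupported trigger: {trigger}")
```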
In one embodiment, the apparatus further comprises a comparison module. The acquisition module is further configured to acquire an interface display result after the element operation interface performs the interface operation at the input position; and the comparison module is configured to compare the interface display result with a target interface display result to obtain a test result of the application interface of the application to be tested.
In one embodiment, the apparatus further comprises: a building module, an operation module, a recording module and a generating module. The building module is configured to establish a connection between the computer end and the equipment to be tested; the operation module is configured to run a screen-casting application at the computer end; the recording module is configured to record, through the screen-casting application, the operations executed on the application interface and the corresponding interface element information; and the generating module is configured to generate a test video according to the application interface, the operations and the interface element information.
The modules in the above test device for an application interface may be implemented in whole or in part by software, hardware or a combination thereof. The above modules may be embedded in, or independent of, a processor in the computer device in the form of hardware, or may be stored in a memory of the computer device in the form of software, so that the processor can call and execute the operations corresponding to the above modules.
In one embodiment, a computer device is provided, which may be a server or a terminal. In this embodiment, the description is given taking the computer device being a server as an example, and its internal structure may be as shown in fig. 12. The computer device includes a processor, a memory, an input/output interface (I/O) and a communication interface. The processor, the memory and the input/output interface are connected through a system bus, and the communication interface is connected to the system bus through the input/output interface. The processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, a computer program and a database. The internal memory provides an environment for the operation of the operating system and the computer program in the non-volatile storage medium. The database of the computer device is used for storing test data of the application interface. The input/output interface of the computer device is used for exchanging information between the processor and an external device. The communication interface of the computer device is used for communicating with an external terminal through a network connection. The computer program, when executed by the processor, implements a method of testing an application interface.
It will be appreciated by those skilled in the art that the structure shown in FIG. 12 is merely a block diagram of some of the structures associated with the present inventive arrangements and is not limiting of the computer device to which the present inventive arrangements may be applied, and that a particular computer device may include more or fewer components than shown, or may combine some of the components, or have a different arrangement of components.
In one embodiment, a computer device is provided, comprising a memory and a processor, the memory having stored therein a computer program, the processor implementing the steps of the method embodiments described above when the computer program is executed.
In one embodiment, a computer-readable storage medium is provided, on which a computer program is stored which, when executed by a processor, carries out the steps of the method embodiments described above.
In an embodiment, a computer program product is provided, comprising a computer program which, when executed by a processor, implements the steps of the method embodiments described above.
It should be noted that, the user information (including but not limited to user equipment information, user personal information, etc.) and the data (including but not limited to data for analysis, stored data, presented data, etc.) related to the present application are information and data authorized by the user or sufficiently authorized by each party, and the collection, use and processing of the related data need to comply with the related laws and regulations and standards of the related country and region.
Those skilled in the art will appreciate that all or part of the processes in the methods of the above embodiments may be implemented by a computer program stored on a non-volatile computer-readable storage medium; when executed, the computer program may include the processes of the embodiments of the methods described above. Any reference to memory, database or other medium used in the embodiments provided herein may include at least one of non-volatile and volatile memory. The non-volatile memory may include Read-Only Memory (ROM), magnetic tape, floppy disk, flash memory, optical memory, high-density embedded non-volatile memory, Resistive Random Access Memory (ReRAM), Magnetoresistive Random Access Memory (MRAM), Ferroelectric Random Access Memory (FRAM), Phase Change Memory (PCM), graphene memory, and the like. The volatile memory may include Random Access Memory (RAM) or external cache memory, and the like. By way of illustration and not limitation, the RAM may be in various forms, such as Static Random Access Memory (SRAM) or Dynamic Random Access Memory (DRAM). The databases referred to in the embodiments provided herein may include at least one of a relational database and a non-relational database. The non-relational database may include, but is not limited to, a blockchain-based distributed database, and the like. The processor referred to in the embodiments provided herein may be, but is not limited to, a general-purpose processor, a central processing unit, a graphics processor, a digital signal processor, a programmable logic unit, or a data processing logic unit based on quantum computing.
The technical features of the above embodiments may be combined arbitrarily. For brevity of description, not all possible combinations of the technical features in the above embodiments are described; however, as long as there is no contradiction in a combination of these technical features, it shall be considered to be within the scope of this description.
The foregoing examples represent only a few embodiments of the application and are described in detail, but they are not therefore to be construed as limiting the scope of the application. It should be noted that several variations and modifications can be made by those of ordinary skill in the art without departing from the concept of the application, and these all fall within the protection scope of the application. Therefore, the protection scope of the application shall be subject to the appended claims.

Claims (15)

1. A method for testing an application interface, the method comprising:
Switching an original context of an application interface of an application to be tested into a network context;
determining the position of a view container in the application interface in a screen;
Responding to the triggering operation of the test case in the view container simulation, and searching the position of an element to be operated in the view container based on the network context;
Determining the position of the element to be operated in the screen based on the position of the view container in the screen, the canvas size of the view container and the position of the element to be operated in the view container;
and transmitting the position of the element to be operated in the screen into an element operation interface so that the element operation interface executes interface operation at the transmitted position.
2. The method according to claim 1, wherein the method further comprises:
determining the address and window information of the application to be tested through the test case;
The switching the original context of the application interface of the application to be tested to the network context comprises the following steps:
acquiring an application interface of the application to be tested according to the address and the window information;
and switching the original context of the application interface into the network context.
3. The method of claim 1, wherein the searching for the location of the element to be operated in the view container based on the network context in response to the triggering operation of the test case in the view container simulation comprises:
responding to the trigger operation of the test case in the view container simulation, and determining an element to be operated;
searching attribute information of the element to be operated in the network context;
and determining the position of the element to be operated in the view container based on the attribute information of the element to be operated.
4. The method of claim 1, wherein the determining the location in the screen of the view container in the application interface comprises:
determining a region corresponding to a view container of the application interface;
Determining a first coordinate and a second coordinate corresponding to the region based on the size of the screen;
and taking the first coordinate and the second coordinate as positions of view containers in the application interface in a screen.
5. The method of claim 4, wherein the region is a rectangular region, and the first and second coordinates are coordinates at diagonal vertices of the rectangular region; the canvas size of the view container includes a canvas width value and a canvas height value;
The determining the position of the element to be operated in the screen based on the position of the view container in the screen, the canvas size of the view container and the position of the element to be operated in the view container comprises:
determining a container width value and a container height value based on the first coordinate and the second coordinate;
And determining the position of the element to be operated in the screen based on the container width value, the container height value, the canvas width value, the canvas height value and the position of the element to be operated in the view container.
6. The method of claim 5, wherein the position of the element to be manipulated in the view container comprises a first abscissa and a first ordinate corresponding to the element to be manipulated;
The determining the position of the element to be operated in the screen based on the container width value, the container height value, the canvas width value, the canvas height value, and the position of the element to be operated in the view container includes:
determining a second abscissa of the element to be operated in the screen based on the first abscissa, the container width value and the canvas width value;
Determining a distance value of the view container from the top of the screen based on the container height value;
a second ordinate of the element to be operated in the screen is determined based on the first ordinate, the container height value, the canvas height value, and the distance value.
7. The method of claim 6, wherein the determining a second abscissa of the element to be operated in the screen based on the first abscissa, the container width value, and the canvas width value comprises:
determining a product between the first abscissa and the container width value to obtain a first product value;
Determining a ratio between the first product value and the canvas width value, and taking the determined ratio as a second abscissa of the element to be operated in the screen;
the determining a second ordinate of the element to be operated in the screen based on the first ordinate, the container height value, the canvas height value, and the distance value includes:
Determining a product between the first ordinate and the container height value to obtain a second product value;
determining a sum between the canvas height value and the distance value to obtain a first sum value;
And determining a ratio between the second product value and the first sum value, and taking the determined ratio as a second ordinate of the element to be operated in the screen.
8. The method according to claim 1, wherein the inputting the position of the element to be operated in the screen into an element operation interface to cause the element operation interface to perform an interface operation at the input position includes:
when the triggering operation is a sliding operation, the position of the element to be operated in the screen is transmitted to an element operation interface, so that the element operation interface executes interface sliding operation at the transmitted position;
when the triggering operation is a clicking operation, the position of the element to be operated in the screen is transmitted to an element operation interface, so that the element operation interface executes an interface clicking operation at the transmitted position.
9. The method of claim 1, wherein the entering the location of the element to be operated in the screen into an element operation interface causes the element operation interface to perform an interface operation at the entered location, the method further comprising:
Acquiring an interface display result of the element operation interface after the interface operation is performed at the input position;
And comparing the interface display result with a target interface display result to obtain a test result of the application interface of the application to be tested.
10. The method of claim 1, wherein the method is applied to a computer end, and before the original context of the application interface of the application to be tested is switched into the network context, the method further comprises:
establishing a connection between the computer end and the equipment to be tested;
running a screen-casting application at the computer end;
recording, through the screen-casting application, the operation executed on the application interface and the corresponding interface element information; and
generating a test video according to the application interface, the operation and the interface element information.
11. A test apparatus for an application interface, the apparatus comprising:
the switching module is used for switching the original context of the application interface of the application to be tested into the network context;
a determining module, configured to determine a position of a view container in the application interface in a screen;
The searching module is used for responding to the triggering operation of the test case in the view container simulation and searching the position of the element to be operated in the view container based on the network context;
The determining module is further used for determining the position of the element to be operated in the screen based on the position of the view container in the screen, the canvas size of the view container and the position of the element to be operated in the view container;
and the input module is used for inputting the position of the element to be operated in the screen into an element operation interface so that the element operation interface executes interface operation at the input position.
12. The test apparatus for an application interface according to claim 11, further comprising an acquisition module, wherein:
the determining module is further configured to determine the address and window information of the application to be tested through the test case; the acquisition module is configured to acquire an application interface of the application to be tested according to the address and the window information; and the switching module is further configured to switch the original context of the application interface into the network context.
13. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor implements the steps of the method of any one of claims 1 to 10 when the computer program is executed.
14. A computer readable storage medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, implements the steps of the method of any of claims 1 to 10.
15. A computer program product comprising a computer program, characterized in that the computer program, when being executed by a processor, implements the steps of the method of any one of claims 1 to 10.
Application CN202310012600.9A, filed 2023-01-05: Method and device for testing application interface, computer equipment and storage medium. Status: Pending.

Priority: CN202310012600.9A, filed 2023-01-05. Published as CN118295897A on 2024-07-05. Family ID: 91675228. Country: China (CN).
