WO2023246666A1 - Search method and electronic device - Google Patents

Search method and electronic device

Info

Publication number
WO2023246666A1
Authority
WO
WIPO (PCT)
Prior art keywords
card
content
interface
web page
electronic device
Prior art date
Application number
PCT/CN2023/100894
Other languages
English (en)
French (fr)
Inventor
张宝丹
邵荣防
汪君鹏
Original Assignee
Huawei Technologies Co., Ltd. (华为技术有限公司)
Priority date
Filing date
Publication date
Application filed by Huawei Technologies Co., Ltd. (华为技术有限公司)

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/90 Details of database functions independent of the retrieved data types
    • G06F 16/95 Retrieval from the web
    • G06F 16/953 Querying, e.g. by the use of web search engines
    • G06F 16/9535 Search customisation based on user profiles and personalisation
    • G06F 16/9538 Presentation of query results
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0481 Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F 3/0483 Interaction with page-structured environments, e.g. book metaphor
    • G06F 3/0487 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F 3/0488 Interaction techniques based on graphical user interfaces [GUI] using a touch-screen or digitiser, e.g. input of commands through traced gestures

Definitions

  • The present application relates to the field of computer technology, and in particular, to a search method and an electronic device.
  • When a user enters keywords and searches through the search function of a terminal (such as the search function provided by a search engine), the terminal returns a web page list that includes summary information of multiple web pages. In order to obtain the required content, the user needs to click the web pages included in the web page list one by one to view their detailed content.
  • A web page usually contains a lot of content, and a terminal (especially a small-screen terminal such as a mobile phone) can display only a limited amount of content at a time. Therefore, users need to spend a lot of time finding the required content, search efficiency is very low, and the user experience is poor.
  • This application discloses a search method and an electronic device, which can integrate multiple pieces of content from at least one web page and provide them to the user, reducing the time it takes the user to obtain the required content and improving search efficiency.
  • this application provides a search method, applied to electronic devices.
  • The method includes: obtaining a first keyword input by a user; sending a first search request to a network device, where the first search request includes the first keyword; receiving a search result set sent by the network device based on the first search request; displaying a first interface, where the first interface includes the search result set, the search result set includes a first search result and a second search result, the first search result is related to a first web page, and the second search result is related to a second web page; receiving a first user operation; generating a first card set in response to the first user operation, where the first card set includes a first card, the first card includes first content and second content, the first web page includes the first content, and the second web page includes the second content; and after the first user operation, displaying a second interface, where the second interface includes the first card.
  • The electronic device can automatically extract the first content in the first search result and the second content in the second search result, and display the extracted content through the first card in the first card set, so that the user can quickly obtain, through the first card set, the content in the search results that meets the user's search intention without having to click on multiple web page cards and browse entire web pages. This reduces the user operations and time spent sorting the search results, and greatly improves search efficiency.
  • both the first content and the second content are associated with the first keyword.
  • the similarity between the first content and the first keyword is greater than or equal to a preset threshold, and the similarity between the second content and the first keyword is greater than or equal to the preset threshold.
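  • As an illustration of this similarity-based filtering, the following sketch keeps only pieces of web page content whose similarity to the first keyword reaches a preset threshold. The embedding library, model name, threshold value, and function names are assumptions for illustration only and are not part of the disclosed method.

        # Minimal sketch of the similarity filtering described above (assumptions:
        # sentence-transformers as the embedding library, cosine similarity, threshold 0.6).
        from typing import List
        import numpy as np
        from sentence_transformers import SentenceTransformer

        model = SentenceTransformer("all-MiniLM-L6-v2")  # hypothetical model choice

        def filter_contents(keyword: str, contents: List[str], threshold: float = 0.6) -> List[str]:
            """Keep only content whose similarity to the keyword is >= the preset threshold."""
            kw_vec = model.encode(keyword)
            kept = []
            for text in contents:
                vec = model.encode(text)
                sim = float(np.dot(kw_vec, vec) / (np.linalg.norm(kw_vec) * np.linalg.norm(vec)))
                if sim >= threshold:
                    kept.append(text)
            return kept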
  • The content included in the first card in the first card set is related to the first keyword input by the user, which prevents content in the search results that is irrelevant to the first keyword from affecting the user's viewing of the search results, and further improves search efficiency.
  • The above method further includes: receiving a second user operation acting on the first search result in the first interface; displaying the first web page; receiving a third user operation; in response to the third user operation, generating a second card set, where the second card set includes a second card, the second card includes third content and fourth content, and the third content and the fourth content are both content of the first web page; and displaying a third interface, where the third interface includes the second card.
  • the third content and the fourth content are associated with the first keyword.
  • the similarity between the third content and the first keyword is greater than or equal to a preset threshold
  • the similarity between the fourth content and the first keyword is greater than or equal to the preset threshold.
  • The electronic device can filter out the third content and the fourth content associated with the first keyword from the first web page, and display the filtered content through the second card in the second card set, allowing the user to quickly obtain, through the second card set, the content in the first web page that meets the user's search intention without browsing the entire first web page, reducing user operations and time spent sorting search results and improving search efficiency.
  • The method further includes: receiving a user's selection operation on the first web page; obtaining first information according to the selection operation; receiving a fourth user operation; and in response to the fourth user operation, generating a third card.
  • the third card is a card in the second card set.
  • The third card includes fifth content and sixth content, both the fifth content and the sixth content are content of the first web page, and the fifth content and the sixth content are both associated with the first information; a fourth interface is displayed, and the fourth interface includes the third card.
  • The user can customize the selection of the first information in the first web page, and the electronic device can filter out the fifth content and the sixth content associated with the first information from the first web page and display the filtered content through the third card in the second card set. The user's customized selection of web page content improves the flexibility of search, and the user does not need to manually search for the fifth content and the sixth content and add them to storage locations such as a notepad or notes, which reduces the user operations and time spent sorting search results and makes use more convenient.
  • The first card set further includes a fourth card, the first card includes content of a first type, the fourth card includes content of a second type, and the first type is different from the second type.
  • the first card includes text type content and the fourth card includes picture type content.
  • Generating the first card set includes: receiving a fifth user operation; in response to the fifth user operation, selecting the first search result and the second search result from the search result set; and generating the first card set according to the first search result and the second search result.
  • the first search result and the second search result used to generate the first card set can be customized by the user, which can meet the user's personalized needs and provide a better user experience.
  • When the similarity between the first content and the first keyword is greater than the similarity between the second content and the first keyword, the first content is located before the second content; or, when the first search result is located before the second search result, the first content is located before the second content.
  • Content that is more similar to the first keyword can be displayed at an earlier position, and the user can preferentially view the content that the electronic device predicts to better match the user's search intention, reducing the time it takes the user to find the required content and further improving search efficiency.
  • the display order of the content in the card can be consistent with the display order of the corresponding search results, unifying the display effects of the search result set and the card set, and providing a better user browsing experience.
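  • The two ordering rules above (by similarity to the first keyword, or by the display order of the corresponding search results) can be sketched as follows; the data structure and field names are illustrative assumptions rather than part of the patent.

        # Sketch of the two ordering rules for content inside a card; `similarity` and
        # `result_rank` are assumed to have been computed when the content was extracted.
        from dataclasses import dataclass
        from typing import List

        @dataclass
        class ExtractedContent:
            text: str
            similarity: float   # similarity between this content and the first keyword
            result_rank: int    # position of the search result the content came from

        def order_by_similarity(contents: List[ExtractedContent]) -> List[ExtractedContent]:
            # Content more similar to the keyword is displayed earlier in the card.
            return sorted(contents, key=lambda c: c.similarity, reverse=True)

        def order_by_result_position(contents: List[ExtractedContent]) -> List[ExtractedContent]:
            # Content order follows the display order of the corresponding search results.
            return sorted(contents, key=lambda c: c.result_rank)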
  • The method further includes: displaying a fifth interface, where the fifth interface includes a first control, the fifth interface is the desktop of the electronic device, the negative one screen, or a collection interface of a first application, and the first control is associated with the first card set; receiving a sixth user operation acting on the first control; and displaying a sixth interface, where the sixth interface includes a fifth card in the first card set.
  • The method further includes: displaying a user interface of a gallery, where the user interface of the gallery includes a first picture, and the first picture is associated with the first card set; receiving a user operation for identifying the first picture; and displaying the sixth interface.
  • The user can view the first card set again through the desktop, the negative one screen, the favorites of the first application, or a picture in the gallery. The entry methods are diverse and flexible, which increases the probability that the user views the first card set again, and the function is highly usable.
  • the method further includes: receiving a seventh user operation acting on the first content in the second interface; and displaying the first web page.
  • the seventh user operation is a click operation or a double-click operation.
  • the user can operate any content in the first card to view the web page to which the content belongs. There is no need for the user to manually search for the search results corresponding to the content and click on the search results, making it more convenient for the user.
  • The method further includes: receiving an eighth user operation acting on the first card in the second interface, and deleting the first card in response to the eighth user operation; or, receiving a ninth user operation acting on the first content in the second interface, and deleting the first content in response to the ninth user operation.
  • the eighth user operation is a user operation of dragging up or down, or the eighth user operation is a click operation on the delete control corresponding to the first card.
  • the ninth user operation is a click operation on the delete control corresponding to the first content.
  • the user can delete any card in the first card set, and the user can also delete any content in the first card to meet the user's personalized needs and improve the user experience.
  • The method further includes: receiving a tenth user operation acting on the first content in the second interface; in response to the tenth user operation, modifying the first content into seventh content; and displaying the seventh content in the first card.
  • the user can modify any content in the first card without the user having to manually copy the content to a notepad or other location and then modify it, which reduces user operations, meets the user's personalized needs, and improves the user experience.
  • The first content in the first card is located before the second content; the method further includes: receiving an eleventh user operation acting on the first content in the second interface; in response to the eleventh user operation, adjusting the display positions of the first content and the second content in the first card; and displaying the adjusted first card, where the first content in the adjusted first card is located after the second content.
  • the eleventh user operation is a user operation of dragging the first content to the location of the second content.
  • the user can adjust the display order of the content in the first card to meet the user's personalized needs and improve the user experience.
  • The first card set further includes a sixth card; the method further includes: receiving a twelfth user operation acting on the first content in the second interface; and in response to the twelfth user operation, moving the first content from the first card to the sixth card, so that the first card does not include the first content and the sixth card includes the first content.
  • the twelfth user operation is a user operation of dragging left or right.
  • the user can move the first content in the first card to other cards in the first card set to meet the user's personalized needs and improve the user experience.
  • The first card set further includes a seventh card; the method further includes: receiving a thirteenth user operation acting on the first card in the second interface; and in response to the thirteenth user operation, merging the first card and the seventh card into an eighth card, where the eighth card includes the content in the first card and the content in the seventh card.
  • the thirteenth user operation is a user operation of dragging the first card to the position of the seventh card.
  • the user can merge any two cards in the first card set to meet the user's personalized needs and improve the user experience.
  • The method further includes: receiving a fourteenth user operation; displaying a seventh interface, where the seventh interface includes the content of the cards in the first card set; receiving a fifteenth user operation acting on the seventh interface; obtaining, according to the fifteenth user operation, eighth content included in a ninth card in the first card set; receiving a sixteenth user operation; in response to the sixteenth user operation, generating a tenth card, where the tenth card is a card in the first card set and includes the eighth content; and displaying an eighth interface, where the eighth interface includes the tenth card.
  • The user can select the eighth content included in the ninth card in the first card set, and generate the tenth card in the first card set based on the selected content, which meets the user's personalized needs and does not require the user to manually add the eighth content to the tenth card, reducing user operations and improving the user experience.
  • The method further includes: saving information of the first card set, where the information of the first card set includes at least one of the following: the first keyword, the number of cards included in the first card set, the content included in the cards in the first card set, the display position of the first card in the first card set, the display position of the first content in the first card, and information about the first web page to which the first content belongs.
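  • One possible (hypothetical) way to persist the card set information listed above is a simple serializable record; the field names below mirror the listed items but are not taken from the patent itself.

        # Illustrative structure for saving card set information; field names are assumptions.
        import json

        card_set_info = {
            "keyword": "first keyword",          # the first keyword
            "card_count": 1,                     # number of cards in the first card set
            "cards": [
                {
                    "position": 0,               # display position of the first card in the set
                    "contents": [
                        {
                            "text": "first content",
                            "position": 0,       # display position of the content in the card
                            "source_url": "https://example.com/first-web-page",
                        },
                        {
                            "text": "second content",
                            "position": 1,
                            "source_url": "https://example.com/second-web-page",
                        },
                    ],
                },
            ],
        }

        with open("card_set.json", "w", encoding="utf-8") as f:
            json.dump(card_set_info, f, ensure_ascii=False, indent=2)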
  • the present application provides yet another search method, applied to electronic devices.
  • The method includes: displaying a first web page; obtaining first information related to the first web page, where the first information is a first keyword used to search for the first web page, or the first information is information obtained according to a user's selection operation on the first web page; receiving a first user operation; in response to the first user operation, generating a first card set, where the first card set includes a first card, the first card includes first content and second content, the first content and the second content are both content of the first web page, and the first content and the second content are both associated with the first information; and after the first user operation, displaying a first interface, where the first interface includes the first card.
  • The electronic device can obtain the first keyword (first information) entered by the user when searching, or the first information customized by the user in the first web page, automatically filter out the first content and the second content related to the first information from the first web page, and display the filtered content through the first card in the first card set, so that the user can quickly obtain, through the first card set, the content in the first web page that meets the user's search intention without browsing the entire first web page, which reduces user operations and time spent sorting search results and improves search efficiency.
  • Displaying the first web page includes: obtaining the first keyword input by the user; sending a first search request to a network device, where the first search request includes the first keyword; receiving a search result set sent by the network device based on the first search request; displaying a second interface, where the second interface includes the search result set, the search result set includes a first search result and a second search result, the first search result is related to the first web page, and the second search result is related to a second web page; receiving a second user operation acting on the first search result; and in response to the second user operation, displaying the first web page.
  • Both the first content and the second content being associated with the first information includes: the similarity between the first content and the first keyword is greater than or equal to a preset threshold, and the similarity between the second content and the first keyword is greater than or equal to the preset threshold.
  • When the first information is information obtained according to the user's selection operation on the first web page, the first information includes at least one of text, pictures, audio, and video in the first web page.
  • the user can customize the selection of the first information in the first web page, which improves the flexibility of search. Moreover, there is no need for the user to manually search for the first content and the second content associated with the first information in the first web page, and add these contents to storage locations such as notepads and notes, thereby reducing user operations performed by the user when sorting search results. And time spent, it is more convenient and faster to use.
  • When the first information includes at least one of text, pictures, audio, and video in the first web page, the first card set further includes a second card, the first card includes content of a first type, the second card includes content of a second type, and the first type and the second type are different.
  • a first card includes audio type content and a second card includes video type content.
  • When the similarity between the first content and the first information is greater than the similarity between the second content and the first information, the first content in the first card is located before the second content; or, when the first content in the first web page is located before the second content, the first content in the first card is located before the second content.
  • Content that is more similar to the first information can be displayed at an earlier position, and the user can preferentially view the content that the electronic device predicts to better match the user's search intention, reducing the time it takes the user to find the required content and further improving search efficiency.
  • the display order of the content in the card can be consistent with the display order in the web page, unifying the display effect of the web page and card set, and providing a better user browsing experience.
  • The method further includes: displaying a third interface, where the third interface includes a first control, the third interface is the desktop of the electronic device, the negative one screen, or a collection interface of a first application, and the first control is associated with the first card set; receiving a third user operation acting on the first control; and displaying a fourth interface, where the fourth interface includes a third card in the first card set.
  • The method further includes: displaying a user interface of a gallery, where the user interface of the gallery includes a first picture, and the first picture is associated with the first card set; receiving a user operation for identifying the first picture; and displaying the fourth interface.
  • The user can view the first card set again through the desktop, the negative one screen, the favorites of the first application, or a picture in the gallery. The entry methods are diverse and flexible, which increases the probability that the user views the first card set again, and the function is highly usable.
  • the method further includes: receiving a fourth user operation acting on the first content in the first interface; and displaying the first web page.
  • the fourth user operation is a click operation or a double-click operation.
  • the user can operate any content in the first card to view the web page to which the content belongs. There is no need for the user to manually search for the search results corresponding to the content and click on the search results, making it more convenient for the user.
  • The method further includes: receiving a fifth user operation acting on the first card in the first interface, and deleting the first card in response to the fifth user operation; or, receiving a sixth user operation acting on the first content in the first interface, and deleting the first content in response to the sixth user operation.
  • the fifth user operation is a user operation of dragging up or down, or the fifth user operation is a click operation on the delete control corresponding to the first card.
  • the sixth user operation is a click operation on the delete control corresponding to the first content.
  • the user can delete any card in the first card set, and the user can also delete any content in the first card to meet the user's personalized needs and improve the user experience.
  • The method further includes: receiving a seventh user operation acting on the first content in the first interface; in response to the seventh user operation, modifying the first content into third content; and displaying the third content in the first card.
  • the user can modify any content in the first card without the user having to manually copy the content to a notepad or other location and then modify it, which reduces user operations, meets the user's personalized needs, and improves the user experience.
  • The first content in the first card is located before the second content; the method further includes: receiving an eighth user operation acting on the first content in the first interface; in response to the eighth user operation, adjusting the display positions of the first content and the second content in the first card; and displaying the adjusted first card, where the first content in the adjusted first card is located after the second content.
  • the eighth user operation is a user operation of dragging the first content to the location of the second content.
  • the user can adjust the display order of the content in the first card to meet the user's personalized needs and improve the user experience.
  • The first card set further includes a fourth card; the method further includes: receiving a ninth user operation acting on the first content in the first interface; and in response to the ninth user operation, moving the first content from the first card to the fourth card, so that the first card does not include the first content and the fourth card includes the first content.
  • the ninth user operation is a user operation of dragging left or right.
  • the user can move the first content in the first card to other cards in the first card set to meet the user's personalized needs and improve the user experience.
  • The first card set further includes a fifth card; the method further includes: receiving a tenth user operation acting on the first card in the first interface; and in response to the tenth user operation, merging the first card and the fifth card into a sixth card, where the sixth card includes the content in the first card and the content in the fifth card.
  • the tenth user operation is a user operation of dragging the first card to the position of the fifth card.
  • the user can merge any two cards in the first card set to meet the user's personalized needs and improve the user experience.
  • The method further includes: receiving an eleventh user operation; displaying a fifth interface, where the fifth interface includes the content of the cards in the first card set; receiving a twelfth user operation acting on the fifth interface; obtaining, according to the twelfth user operation, fourth content included in a seventh card in the first card set; receiving a thirteenth user operation; in response to the thirteenth user operation, generating an eighth card, where the eighth card is a card in the first card set and includes the fourth content; and displaying a sixth interface, where the sixth interface includes the eighth card.
  • The user can select the fourth content included in the seventh card in the first card set, and generate the eighth card in the first card set based on the selected content, which meets the user's personalized needs and does not require the user to manually add the fourth content to the eighth card, reducing user operations and improving the user experience.
  • The method further includes: saving information of the first card set, where the information of the first card set includes at least one of the following: information of the first web page, the first information, the number of cards included in the first card set, the content of the cards in the first card set, the display position of the first card in the first card set, and the display position of the first content in the first card.
  • the application provides an electronic device, including a transceiver, a processor, and a memory.
  • the memory is used to store a computer program.
  • The processor calls the computer program to execute the search method in any possible implementation of any of the above aspects.
  • the present application provides an electronic device, including one or more processors and one or more memories.
  • the one or more memories are coupled to one or more processors.
  • the one or more memories are used to store computer program codes.
  • the computer program codes include computer instructions.
  • the present application provides a computer storage medium that stores a computer program.
  • When the computer program is executed by a processor, the search method in any possible implementation of any of the above aspects is implemented.
  • the present application provides a computer program product that, when run on an electronic device, causes the electronic device to execute the search method in any of the possible implementations of any of the above aspects.
  • The present application provides an electronic device, which is configured to execute the method described in any implementation manner of the present application.
  • the above-mentioned electronic device is, for example, a chip.
  • Figure 1A is a schematic architectural diagram of a search system provided by an embodiment of the present application.
  • Figure 1B is an interactive schematic diagram of a search system provided by an embodiment of the present application.
  • Figure 2A is a schematic diagram of the hardware structure of an electronic device provided by an embodiment of the present application.
  • Figure 2B is a schematic diagram of the software architecture of an electronic device provided by an embodiment of the present application.
  • Figure 3 is a schematic diagram of a user interface of an implementation provided by the embodiment of the present application.
  • Figure 4 is a schematic diagram of a user interface according to another embodiment of the present application.
  • FIG. 5A is a schematic diagram of a user interface according to another embodiment of the present application.
  • FIG. 5B is a schematic diagram of a user interface according to another embodiment of the present application.
  • Figure 6 is a schematic diagram of a user interface according to another embodiment of the present application.
  • FIG. 7A is a schematic diagram of a user interface according to another embodiment of the present application.
  • FIG. 7B is a schematic diagram of a user interface according to another embodiment of the present application.
  • Figure 8 is a schematic diagram of a user interface according to another embodiment of the present application.
  • Figure 9 is a schematic diagram of a user interface according to another embodiment of the present application.
  • Figure 10 is a schematic diagram of a user interface according to another embodiment of the present application.
  • Figure 11 is a schematic diagram of a user interface according to another embodiment of the present application.
  • Figure 12 is a schematic diagram of a user interface according to another embodiment of the present application.
  • Figure 13 is a schematic diagram of a user interface according to another embodiment of the present application.
  • Figure 14 is a schematic diagram of a user interface according to another embodiment of the present application.
  • Figure 15 is a schematic diagram of a user interface according to another embodiment of the present application.
  • Figure 16 is a schematic diagram of a user interface according to another embodiment of the present application.
  • Figure 17A is a schematic diagram of a user interface according to another embodiment of the present application.
  • Figure 17B is a schematic diagram of a user interface according to another embodiment of the present application.
  • Figure 18 is a schematic diagram of a user interface according to another embodiment of the present application.
  • Figure 19A is a schematic diagram of a user interface according to another embodiment of the present application.
  • Figure 19B is a schematic diagram of a user interface according to another embodiment of the present application.
  • Figure 20 is a schematic diagram of a user interface according to another embodiment of the present application.
  • Figure 21 is a schematic diagram of a user interface according to another embodiment of the present application.
  • Figure 22 is a schematic diagram of a user interface according to another embodiment of the present application.
  • Figure 23 is a schematic diagram of a user interface according to another embodiment of the present application.
  • Figure 24 is a schematic diagram of a user interface according to another embodiment of the present application.
  • Figure 25A is a schematic diagram of a user interface according to another embodiment of the present application.
  • Figure 25B is a schematic diagram of a user interface according to another embodiment of the present application.
  • Figure 25C is a schematic diagram of a user interface according to another embodiment of the present application.
  • Figure 26 is a schematic diagram of a user interface according to another embodiment of the present application.
  • FIG. 27A is a schematic diagram of a user interface according to another embodiment of the present application.
  • Figure 27B is a schematic diagram of a user interface according to another embodiment of the present application.
  • Figure 28 is a schematic flow chart of a search method provided by an embodiment of the present application.
  • Figure 29 is a schematic diagram of the acquisition process of a first content collection provided by an embodiment of the present application.
  • Figure 30 is a schematic flowchart of yet another search method provided by an embodiment of the present application.
  • The terms "first" and "second" are used for descriptive purposes only and shall not be understood as indicating or implying relative importance or implicitly specifying the quantity of indicated technical features. Therefore, a feature defined as "first" or "second" may explicitly or implicitly include one or more of the features. In the description of the embodiments of this application, unless otherwise specified, "plurality" means two or more.
  • When users use the search function of a terminal (such as the search function provided by a search engine), they hope to obtain content related to the keywords used for searching (referred to as search keywords).
  • The content may include a piece of text in a web page, a picture, a video, or other types of content.
  • The search function returns a web page list including summary information of multiple web pages (which may be called web page cards), where dynamic effects of the key content of a web page can be output in the corresponding web page card.
  • In order to obtain web page content that meets the search intent, users often need to click on multiple web page cards included in the web page list to view the detailed content of the corresponding web pages.
  • A web page usually contains a lot of content, and a terminal (especially a smaller terminal such as a mobile phone) can display only a limited amount of content at a time.
  • Even if keywords in the web page content can be displayed prominently (for example, highlighted), users still need to browse the entire web page to see whether there is text, pictures, videos, or other content that matches the search intent.
  • Therefore, users need to perform multiple operations and spend a lot of time to find content that meets their search intentions; search efficiency is very low and the user experience is poor.
  • This application provides a search method and electronic device.
  • After receiving the keyword input by the user (which may be referred to as the search keyword), the electronic device can obtain at least one web page based on the keyword, and generate at least one card based on the at least one web page, where the at least one card may include content related to the search keyword in the at least one web page. The at least one card can be provided to the user to view, modify, and save.
  • In this way, the electronic device automatically compares and organizes the content of at least one web page, and integrates multiple pieces of content related to the search keyword in the form of at least one card (referred to as a card set), allowing the user to quickly obtain, through the card set, the content that meets the search intent/the required content, which reduces the time users spend sorting search results, improves search efficiency, and enhances the user experience.
  • this method can be applied to the field of web search.
  • Web search can use specific algorithms and strategies, based on the search keywords entered by the user, to search the Internet for search results (related to web pages) that are related to the search keywords, and then sort these search results and display a search result list (which may also be called a search result set or a web page list). The search result list can include summary information of the search results (which may be called web page cards), and users can operate any web page card to view the content page of the corresponding web page (used to display the detailed content of the web page). The method is not limited to this and can also be applied to other search fields; that is, the above search results may not be related to web pages and may, for example, be related to products, which is not limited in this application.
  • This application takes search results related to web pages (hereinafter the search results may be referred to as web pages) as an example for description.
  • Touch operations in this application may include, but are not limited to: single-click, double-click, long press, single-finger long press, multi-finger long press, single-finger slide, multi-finger slide, knuckle slide, and other forms.
  • the sliding touch operation may be referred to as a sliding operation.
  • the sliding operation may be, for example, but not limited to, sliding left and right, sliding up and down, sliding to a first specific position, etc. This application does not limit the trajectory of the sliding operation.
  • the touch operation may be performed on a second specific location on the electronic device.
  • The above-mentioned specific location can be located on the display screen of the electronic device, such as where controls such as icons are located or on the edge of the display screen, or the specific location can also be located on the side, back, or other positions of the electronic device, such as the positions of keys like the volume keys and the power key.
  • the above-mentioned specific position is preset by the electronic device, or the specific position is determined by the electronic device in response to a user operation.
  • the drag operation in this application is a touch operation.
  • The drag operation acting on a certain control can be an operation of keeping touching the control while sliding, such as but not limited to single-finger dragging, multi-finger dragging, knuckle dragging, and other forms.
  • the dragging operation is, for example but not limited to, dragging left and right, dragging up and down, dragging to a specific position, etc. This application does not limit the trajectory of the dragging operation.
  • the screenshot operation in this application can be used to select the display content in any area on the display screen of the electronic device, and save the display content to the electronic device in the form of a picture.
  • the display content may include at least one type of content, such as text content, picture content, and video content.
  • the screenshot operation may be, but is not limited to, a touch operation, voice input, motion gesture (such as gesture), brain wave, etc.
  • For example, the screenshot operation may be a knuckle slide, or the screenshot operation may be a touch operation acting on the power key and the volume key of the electronic device at the same time.
  • the card set in this application may include at least one card.
  • the card in this application may be a control displayed in the form of a card. In specific implementation, it may also be displayed in other forms such as a floating frame. This application does not limit the specific display form of the card.
  • the card can be used to display webpage content for users to browse.
  • The electronic device can receive user operations acting on the card to perform multiple operations such as viewing the original web page to which the web page content belongs, moving, editing, deleting, and adding.
  • the web page card in this application can be a control that displays the summary information of the search results (web page) in the form of a card. In specific implementation, it can also be displayed in other forms such as text boxes, which is not limited in this application.
  • The summary information of the web page is, for example but not limited to, the web page content displayed at the top of the web page, the content of the web page that includes the search keywords, etc.
  • a search system 10 related to the embodiment of the present application is introduced below.
  • FIG. 1A illustrates an architectural schematic diagram of a search system 10 .
  • the search system 10 may include an electronic device 100 and a network device 200 .
  • The electronic device 100 can communicate with the network device 200 over a network via wired connections (e.g., universal serial bus (USB), twisted pair, coaxial cable, optical fiber, etc.) and/or wireless connections (e.g., wireless local area network (WLAN), Bluetooth, cellular communication, etc.).
  • The electronic device 100 can be a mobile phone, a tablet computer, a handheld computer, a desktop computer, a laptop computer, an ultra-mobile personal computer (UMPC), a netbook, a cellular phone, a personal digital assistant (PDA), a smart home device such as a smart TV or a projector, a wearable device such as a smart bracelet, a smart watch, or smart glasses, an extended reality (XR) device such as an augmented reality (AR), virtual reality (VR), or mixed reality (MR) device, or a vehicle-mounted device. The embodiments of this application do not place special restrictions on the specific type of the electronic device.
  • the electronic device 100 can support a variety of applications, such as but not limited to: photography, image management, image processing, word processing, phone calls, email, instant messaging, network communication, media playback, positioning and Time management applications.
  • the network device 200 may include at least one server.
  • any server may be a hardware server.
  • any server may be a cloud server.
  • the network device 200 may be a Linux server, a Windows server, or other server devices that can provide simultaneous access by multiple devices.
  • the network device 200 may be a server cluster composed of multiple regions, multiple computer rooms, and multiple servers.
  • the network device 200 may support message storage and distribution functions, multi-user access management functions, large-scale data storage functions, large-scale data processing functions, data redundant backup functions, etc.
  • the electronic device 100 can communicate with the network device 200 based on a browser/server (B/S) architecture or a client/server (C/S) architecture.
  • the electronic device 100 may receive a keyword input by the user and request the network device 200 to obtain search results related to the keyword.
  • the electronic device 100 may display the search results obtained from the network device 200, for example, display a web page list.
  • FIG. 1B illustrates an interactive schematic diagram of a search system 10 .
  • the electronic device 100 in the search system 10 may include an input module 101 , a processing module 102 , a communication module 103 , an output module 104 , a storage module 105 and a shortcut entry 106 .
  • the network device 200 in the search system 10 may include a communication module 201 and a processing module 202, wherein:
  • the input module 101 is used to receive instructions input by the user, for example, search keywords input by the user, card set acquisition requests input by the user, or content selected by the user in the web content page (selected content for short).
  • the processing module 102 is used for the electronic device 100 to perform actions such as judgment, analysis, calculation, etc., and to send instructions to other modules to coordinate with each module to execute corresponding programs in an orderly manner, such as executing the methods shown in Figure 28 and Figure 30 below.
  • the communication module 103 is used for information transmission between the electronic device 100 and the network device 200 (implemented through the communication module 201).
  • the output module 104 is used to output information to the user, such as displaying search results or card sets to the user through a display screen.
  • the input module 101 can send the search keyword to the search module in the processing module 102, and the search module can send the search keyword to the communication module 201 of the network device through the communication module 103.
  • the communication module 201 may send a search request including the search keyword to the processing module 202 of the network device.
  • the processing module 202 can obtain the search results (ie, multiple web pages related to the search keywords) based on the search request, and send the search results to the communication module 103 through the communication module 201.
  • the communication module 103 can send the search results to the search module in the processing module 102.
  • the search module may send the search results to the output module 104 .
  • the output module 104 can output the search results, for example, display a web page list.
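  • The round trip described above (the search keyword sent to the network device, and the search results returned and displayed as a web page list) can be sketched as follows; the endpoint URL and response format are assumptions for illustration only, not an interface disclosed by the patent.

        # Minimal sketch of the search request/response flow between the electronic device
        # and the network device; the endpoint and JSON shape are hypothetical.
        import requests

        def search(keyword: str) -> list:
            """Send a search request containing the keyword and return the search results
            (each entry holding summary information of one web page)."""
            resp = requests.get(
                "https://search.example.com/api/search",  # hypothetical endpoint
                params={"q": keyword},
                timeout=5,
            )
            resp.raise_for_status()
            return resp.json()["results"]

        # The output module would then render these results as web page cards, e.g.:
        # results = search("first keyword")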
  • the input module 101 may send the card set acquisition request to the extraction module in the processing module 102 .
  • After the extraction module receives the card set acquisition request, it can obtain the above search results from the search module and extract the content of at least one web page from the search results, for example, extract the content of the top N web pages from the multiple web pages obtained by the search (N is a positive integer), or extract the content of a web page selected by the user from the multiple web pages obtained by the search.
  • the extraction module can send the content of the at least one web page to the semantic matching module in the processing module 102, and the semantic matching module can filter out content related to the search keywords from the content of the at least one web page.
  • the semantic matching module can send the above-mentioned content related to the search keywords to the card generation module in the processing module 102, and the card generation module can generate a card set based on these contents.
  • Different cards in the card set include different types of content, specifically: cards that include text, cards that include static images, cards that include dynamic images, cards that include videos, cards that include audio, etc.
  • the card generation module can send the generated card set to the output module 104, and the output module 104 can output the card set, for example, display the card set.
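  • The extraction, semantic matching, and card generation steps described above can be sketched as a small pipeline that groups the keyword-related content into one card per content type; the content representation and type names are assumptions for illustration, not the module's actual interface.

        # Sketch of generating a card set from content that has already passed the
        # semantic-matching filter; one card is produced per content type.
        from collections import defaultdict

        def generate_card_set(related_contents: list) -> list:
            """Each item is assumed to look like
            {"type": "text" | "image" | "video" | "audio", "value": ..., "source_url": ...}."""
            by_type = defaultdict(list)
            for item in related_contents:
                by_type[item["type"]].append(item)
            # One card per content type, e.g. a text card, a picture card, a video card.
            return [{"card_type": ctype, "contents": items} for ctype, items in by_type.items()]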
  • the input module 101 can send the selected content to the associated content acquisition module in the processing module 102 .
  • the associated content acquisition module can acquire content related to the selected content (which can be called associated content corresponding to the selected content) from the web content page, and send the selected content and associated content to the card generation module.
  • the card generation module may generate at least one card based on the selected content and associated content. For example, the card includes the selected content and corresponding associated content. The at least one card may belong to the above-mentioned card set.
  • the output module 104 may add and display the at least one card in the card set.
  • the card set generated by the processing module 102 can be sent to the storage module 105 for storage.
  • the card set information stored in the storage module 105 may include but is not limited to at least one of the following: identification information of the card set, search keywords, the number of cards included in the card set, the display position of the card in the card set, The name of the card, the text, pictures, videos and other web content included in the card, the display position of the web content in the card, and the address information of the web page to which the web content in the card set belongs, etc.
  • the shortcut entry 106 can be used to display an entry for reloading the card set.
  • For example, the shortcut entry 106 can display the entry for reloading the card set in the favorites of the search application, on the desktop of the electronic device 100, in the user interface of the negative one screen application, or in the user interface of the gallery application.
  • the search application can provide a search function (receiving search keywords and providing search results related to the search keywords), and a function of displaying a card set.
  • The shortcut entry 106 can obtain part or all of the information included in the card set from the storage module 105, and display an entry for reloading the card set based on the obtained information.
  • For example, after the shortcut entry 106 receives an instruction for loading a card set 1 corresponding to an entry 1, it can send the identification information of the card set 1 to the storage module 105. After the storage module 105 receives the identification information of the card set 1, it can obtain all the information of the card set 1 corresponding to the identification information and send it to the output module 104.
  • the output module 104 can output the card set 1 according to the received information of the card set 1, for example, display the card set 1.
  • At least one module among the input module 101, the processing module 102, the output module 104, and the storage module 105 may belong to a search application of the electronic device 100, where the search application may provide a search function; for example, the user may open the search application on the electronic device 100 and input search keywords through the user interface of the search application.
  • the electronic device 100 can display search results related to the search keywords through the user interface of the search application.
  • the storage module 105 may be a memory of the electronic device 100, such as the internal memory 121 shown in FIG. 2A below.
  • the shortcut entry 106 may be a widget or other control provided by an application, for example, a widget in the form of a card provided by a negative-screen application.
  • the output module 104 may be a display component provided by an application program.
  • The application program that provides the output module 104 and the application program that provides the shortcut entry 106 are the same application program.
  • After the shortcut entry 106 receives an instruction for loading the card set, the card set can be output through the output module 104 belonging to the same application program.
  • FIG. 2A exemplarily shows a schematic diagram of the hardware structure of an electronic device 100.
  • The electronic device 100 shown in FIG. 2A is only an example, and the electronic device 100 may have more or fewer components than shown in FIG. 2A, may combine two or more components, or may have a different component configuration.
  • the various components shown in Figure 2A may be implemented in hardware, software, or a combination of hardware and software including one or more signal processing and/or application specific integrated circuits.
  • the electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (USB) interface 130, a charging management module 140, a power management module 141, and a battery 142 , Antenna 1, Antenna 2, mobile communication module 150, wireless communication module 160, audio module 170, speaker 170A, receiver 170B, microphone 170C, headphone interface 170D, sensor module 180, button 190, motor 191, indicator 192, camera 193 , display screen 194, and subscriber identification module (subscriber identification module, SIM) card interface 195, etc.
  • the sensor module 180 may include a pressure sensor 180A, a gyro sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity light sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, and ambient light. Sensor 180L, bone conduction sensor 180M, etc.
  • the structure illustrated in the embodiment of the present invention does not constitute a specific limitation on the electronic device 100 .
  • the electronic device 100 may include more or fewer components than shown in the figures, or some components may be combined, some components may be separated, or some components may be arranged differently.
  • the components illustrated may be implemented in hardware, software, or a combination of software and hardware.
  • the processor 110 may include one or more processing units.
  • The processor 110 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU), etc.
  • the controller can generate operation control signals based on the instruction operation code and timing signals to complete the control of fetching and executing instructions.
  • the processor 110 may also be provided with a memory for storing instructions and data.
  • The memory in the processor 110 is a cache memory. This memory may hold instructions or data that the processor 110 has just used or uses cyclically. If the processor 110 needs to use the instructions or data again, they can be called directly from the memory, which avoids repeated access, reduces the waiting time of the processor 110, and thus improves the efficiency of the system.
  • processor 110 may include one or more interfaces.
• Interfaces may include an inter-integrated circuit (I2C) interface, an inter-integrated circuit sound (I2S) interface, a pulse code modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a mobile industry processor interface (MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (SIM) interface, and/or a universal serial bus (USB) interface, etc.
• the I2C interface is a bidirectional synchronous serial bus, including a serial data line (SDA) and a serial clock line (SCL).
  • processor 110 may include multiple sets of I2C buses.
  • the processor 110 can separately couple the touch sensor 180K, charger, flash, camera 193, etc. through different I2C bus interfaces.
• the processor 110 can couple the touch sensor 180K through the I2C interface, so that the processor 110 communicates with the touch sensor 180K through the I2C bus interface to implement the touch function of the electronic device 100.
  • the I2S interface can be used for audio communication.
  • the processor 110 may include multiple sets of I2S buses.
  • the processor 110 can be coupled with the audio module 170 through the I2S bus to implement communication between the processor 110 and the audio module 170 .
  • the audio module 170 can transmit audio signals to the wireless communication module 160 through the I2S interface to implement the function of answering calls through a Bluetooth headset.
  • the PCM interface can also be used for audio communications to sample, quantize and encode analog signals.
  • the audio module 170 and the wireless communication module 160 may be coupled through a PCM bus interface.
  • the audio module 170 can also transmit audio signals to the wireless communication module 160 through the PCM interface to implement the function of answering calls through a Bluetooth headset. Both the I2S interface and the PCM interface can be used for audio communication.
  • the UART interface is a universal serial data bus used for asynchronous communication.
  • the bus can be a bidirectional communication bus. It converts the data to be transmitted between serial communication and parallel communication.
  • a UART interface is generally used to connect the processor 110 and the wireless communication module 160 .
  • the processor 110 communicates with the Bluetooth module in the wireless communication module 160 through the UART interface to implement the Bluetooth function.
  • the audio module 170 can transmit audio signals to the wireless communication module 160 through the UART interface to implement the function of playing music through a Bluetooth headset.
  • the MIPI interface can be used to connect the processor 110 with peripheral devices such as the display screen 194 and the camera 193 .
  • MIPI interfaces include camera serial interface (CSI), display serial interface (DSI), etc.
  • the processor 110 and the camera 193 communicate through a CSI interface to implement the shooting function of the electronic device 100 .
  • the processor 110 and the display screen 194 communicate through the DSI interface to implement the display function of the electronic device 100 .
  • the GPIO interface can be configured through software.
  • the GPIO interface can be configured as a control signal or as a data signal.
  • the GPIO interface can be used to connect the processor 110 to the camera 193, the display screen 194, the wireless communication module 160, the audio module 170, the sensor module 180, etc.
  • the GPIO interface can also be configured as an I2C interface, I2S interface, UART interface, MIPI interface, etc.
  • the USB interface 130 is an interface that complies with the USB standard specification, and may be a Mini USB interface, a Micro USB interface, a USB Type C interface, etc.
  • the USB interface 130 can be used to connect a charger to charge the electronic device 100, and can also be used to transmit data between the electronic device 100 and peripheral devices. It can also be used to connect headphones to play audio through them. This interface can also be used to connect other electronic devices, such as AR devices, etc.
  • the interface connection relationships between the modules illustrated in the embodiment of the present invention are only schematic illustrations and do not constitute a structural limitation of the electronic device 100 .
  • the electronic device 100 may also adopt different interface connection methods in the above embodiments, or a combination of multiple interface connection methods.
  • the charging management module 140 is used to receive charging input from the charger.
  • the charger can be a wireless charger or a wired charger.
  • the charging management module 140 may receive charging input from the wired charger through the USB interface 130 .
  • the charging management module 140 may receive wireless charging input through the wireless charging coil of the electronic device 100 . While the charging management module 140 charges the battery 142, it can also provide power to the electronic device through the power management module 141.
  • the power management module 141 is used to connect the battery 142, the charging management module 140 and the processor 110.
  • the power management module 141 receives input from the battery 142 and/or the charging management module 140, and supplies power to the processor 110, the internal memory 121, the display screen 194, the camera 193, the wireless communication module 160, and the like.
  • the power management module 141 can also be used to monitor battery capacity, battery cycle times, battery health status (leakage, impedance) and other parameters.
  • the power management module 141 may also be provided in the processor 110 .
  • the power management module 141 and the charging management module 140 can also be provided in the same device.
• the wireless communication function of the electronic device 100 can be implemented through the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, the modem processor, the baseband processor, and the like.
  • the electronic device 100 may communicate with the network device 200 through a wireless communication function, such as sending a search request to the network device 200.
  • Antenna 1 and Antenna 2 are used to transmit and receive electromagnetic wave signals.
  • Each antenna in electronic device 100 may be used to cover a single or multiple communication frequency bands. Different antennas can also be reused to improve antenna utilization. For example: Antenna 1 can be reused as a diversity antenna for a wireless LAN. In other embodiments, antennas may be used in conjunction with tuning switches.
  • the mobile communication module 150 can provide solutions for wireless communication including 2G/3G/4G/5G applied on the electronic device 100 .
  • the mobile communication module 150 may include at least one filter, switch, power amplifier, low noise amplifier (LNA), etc.
  • the mobile communication module 150 can receive electromagnetic waves through the antenna 1, perform filtering, amplification and other processing on the received electromagnetic waves, and transmit them to the modem processor for demodulation.
  • the mobile communication module 150 can also amplify the signal modulated by the modem processor and convert it into electromagnetic waves through the antenna 1 for radiation.
  • at least part of the functional modules of the mobile communication module 150 may be disposed in the processor 110 .
  • at least part of the functional modules of the mobile communication module 150 and at least part of the modules of the processor 110 may be provided in the same device.
  • a modem processor may include a modulator and a demodulator.
  • the modulator is used to modulate the low-frequency baseband signal to be sent into a medium-high frequency signal.
  • the demodulator is used to demodulate the received electromagnetic wave signal into a low-frequency baseband signal.
  • the demodulator then transmits the demodulated low-frequency baseband signal to the baseband processor for processing.
  • the application processor outputs sound signals through audio devices (not limited to speaker 170A, receiver 170B, etc.), or displays images or videos through display screen 194.
  • the modem processor may be a stand-alone device.
  • the modem processor may be independent of the processor 110 and may be provided in the same device as the mobile communication module 150 or other functional modules.
• the wireless communication module 160 can provide wireless communication solutions applied on the electronic device 100, including wireless local area networks (WLAN) (such as a wireless fidelity (Wi-Fi) network), Bluetooth (BT), global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), infrared (IR) technology, and the like.
  • the wireless communication module 160 may be one or more devices integrating at least one communication processing module.
  • the wireless communication module 160 receives electromagnetic waves via the antenna 2 , frequency modulates and filters the electromagnetic wave signals, and sends the processed signals to the processor 110 .
  • the wireless communication module 160 can also receive the signal to be sent from the processor 110, frequency modulate it, amplify it, and convert it into electromagnetic waves through the antenna 2 for radiation.
  • the antenna 1 of the electronic device 100 is coupled to the mobile communication module 150, and the antenna 2 is coupled to the wireless communication module 160, so that the electronic device 100 can communicate with the network and other devices through wireless communication technology.
• the wireless communication technology may include global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time-division code division multiple access (TD-SCDMA), long term evolution (LTE), BT, GNSS, WLAN, NFC, FM, and/or IR technology, etc.
• the GNSS may include the global positioning system (GPS), the global navigation satellite system (GLONASS), the Beidou navigation satellite system (BDS), the quasi-zenith satellite system (QZSS), and/or satellite based augmentation systems (SBAS).
  • the electronic device 100 implements display functions through a GPU, a display screen 194, an application processor, and the like.
• the electronic device 100 can, through the display function, display the semantic convergence card set (including web page content that meets the search intent), the entry control of the semantic convergence card set, and the like.
  • the GPU is an image processing microprocessor and is connected to the display screen 194 and the application processor. GPUs are used to perform mathematical and geometric calculations for graphics rendering. Processor 110 may include one or more GPUs that execute program instructions to generate or alter display information.
  • the display screen 194 is used to display images, videos, etc.
• Display 194 includes a display panel. The display panel can use a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), etc.
  • the electronic device 100 may include 1 or N display screens 194, where N is a positive integer greater than 1.
  • the electronic device 100 can implement the shooting function through an ISP, a camera 193, a video codec, a GPU, a display screen 194, an application processor, and the like.
  • the ISP is used to process the data fed back by the camera 193. For example, when taking a photo, the shutter is opened, the light is transmitted to the camera sensor through the lens, the optical signal is converted into an electrical signal, and the camera sensor passes the electrical signal to the ISP for processing, and converts it into an image visible to the naked eye. ISP can also perform algorithm optimization on image noise, brightness, etc. ISP can also optimize the exposure, color temperature and other parameters of the shooting scene. In one implementation, the ISP may be provided in the camera 193.
  • Camera 193 is used to capture still images or video.
  • the object passes through the lens to produce an optical image that is projected onto the photosensitive element.
  • the photosensitive element can be a charge coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor.
  • the photosensitive element converts the optical signal into an electrical signal, and then passes the electrical signal to the ISP to convert it into a digital image signal.
  • ISP outputs digital image signals to DSP for processing.
  • DSP converts digital image signals into standard RGB, YUV and other format image signals.
  • the electronic device 100 may include 1 or N cameras 193, where N is a positive integer greater than 1.
  • Digital signal processors are used to process digital signals. In addition to digital image signals, they can also process other digital signals. For example, when the electronic device 100 selects a frequency point, the digital signal processor is used to perform Fourier transform on the frequency point energy.
  • Video codecs are used to compress or decompress digital video.
  • Electronic device 100 may support one or more video codecs. In this way, the electronic device 100 can play or record videos in multiple encoding formats, such as moving picture experts group (MPEG) 1, MPEG2, MPEG3, MPEG4, etc.
  • NPU is a neural network (NN) computing processor.
  • Intelligent cognitive applications of the electronic device 100 can be implemented through the NPU, such as image recognition, face recognition, speech recognition, text understanding, etc.
  • the external memory interface 120 can be used to connect an external memory card, such as a Micro SD card, to expand the storage capacity of the electronic device 100.
• the external memory card communicates with the processor 110 through the external memory interface 120 to implement the data storage function, for example, saving files such as music and videos in the external memory card.
  • Internal memory 121 may be used to store computer executable program code, which includes instructions.
  • the internal memory 121 may include a program storage area and a data storage area.
• the program storage area can store an operating system and at least one application program required by a function (such as a sound playback function, an image playback function, etc.).
• the data storage area may store data created during use of the electronic device 100 (such as audio data, a phone book, etc.).
  • the internal memory 121 may include high-speed random access memory, and may also include non-volatile memory, such as at least one disk storage device, flash memory device, universal flash storage (UFS), etc.
  • the processor 110 executes various functional applications and data processing of the electronic device 100 by executing instructions stored in the internal memory 121 and/or instructions stored in a memory provided in the processor.
• the electronic device 100 can implement audio functions through the audio module 170, the speaker 170A, the receiver 170B, the microphone 170C, the headphone interface 170D, the application processor, and the like, such as playing audio content included in the semantic convergence card set.
  • the audio module 170 is used to convert digital audio information into analog audio signal output, and is also used to convert analog audio input into digital audio signals. Audio module 170 may also be used to encode and decode audio signals. In one implementation, the audio module 170 may be disposed in the processor 110 , or some functional modules of the audio module 170 may be disposed in the processor 110 .
• The speaker 170A, also called a "loudspeaker", is used to convert audio electrical signals into sound signals.
  • the electronic device 100 can listen to music through the speaker 170A, or listen to hands-free calls.
• The receiver 170B, also called an "earpiece", is used to convert audio electrical signals into sound signals.
• when the electronic device 100 answers a call or plays a voice message, the voice can be heard by bringing the receiver 170B close to the ear.
• The microphone 170C, also called a "mike", is used to convert sound signals into electrical signals. When making a call or sending a voice message, the user can speak with the mouth close to the microphone 170C to input a sound signal into the microphone 170C.
  • the electronic device 100 may be provided with at least one microphone 170C. In another implementation, the electronic device 100 may be provided with two microphones 170C, which in addition to collecting sound signals, may also implement a noise reduction function. In another implementation, the electronic device 100 can also be provided with three, four or more microphones 170C to collect sound signals, reduce noise, identify sound sources, and implement directional recording functions, etc.
  • the headphone interface 170D is used to connect wired headphones.
  • the headphone interface 170D may be a USB interface 130, or may be a 3.5mm open mobile terminal platform (OMTP) standard interface, or a Cellular Telecommunications Industry Association of the USA (CTIA) standard interface.
  • OMTP open mobile terminal platform
  • CTIA Cellular Telecommunications Industry Association of the USA
  • the pressure sensor 180A is used to sense pressure signals and can convert the pressure signals into electrical signals.
  • the pressure sensor 180A may be disposed on the display screen 194 .
• there are many types of pressure sensor 180A, such as a resistive pressure sensor, an inductive pressure sensor, a capacitive pressure sensor, etc.
  • a capacitive pressure sensor may include at least two parallel plates of conductive material.
  • the electronic device 100 determines the intensity of the pressure based on the change in capacitance.
  • the electronic device 100 detects the intensity of the touch operation according to the pressure sensor 180A.
  • the electronic device 100 may also calculate the touched position based on the detection signal of the pressure sensor 180A.
  • touch operations acting on the same touch position but with different touch operation intensities may correspond to different operation instructions. For example: when a touch operation with a touch operation intensity less than the first pressure threshold is applied to the short message application icon, an instruction to view the short message is executed. When a touch operation with a touch operation intensity greater than or equal to the first pressure threshold is applied to the short message application icon, an instruction to create a new short message is executed.
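• Purely as an editor's illustration of the threshold behavior described above (not part of the disclosed embodiment), the following Java sketch shows one way such pressure-dependent instructions could be dispatched; the class name, the assumed threshold value FIRST_PRESSURE_THRESHOLD, and the helpers viewShortMessage() and createShortMessage() are hypothetical.

```java
import android.view.MotionEvent;
import android.view.View;

// Editor's sketch only: map the pressure of a touch on the short message icon to
// different instructions, as in the first-pressure-threshold example above.
public class MessageIconTouchHandler implements View.OnTouchListener {
    private static final float FIRST_PRESSURE_THRESHOLD = 0.6f; // assumed value

    @Override
    public boolean onTouch(View view, MotionEvent event) {
        if (event.getAction() == MotionEvent.ACTION_UP) {
            if (event.getPressure() < FIRST_PRESSURE_THRESHOLD) {
                viewShortMessage();   // lighter press: view the short message
            } else {
                createShortMessage(); // firmer press: create a new short message
            }
            return true;
        }
        return false;
    }

    private void viewShortMessage() { /* open the short message list */ }
    private void createShortMessage() { /* open the new-message editor */ }
}
```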
  • the gyro sensor 180B may be used to determine the motion posture of the electronic device 100 .
• in some embodiments, the angular velocity of the electronic device 100 about three axes (i.e., the x, y, and z axes) can be determined by the gyro sensor 180B.
  • the gyro sensor 180B can be used for image stabilization. For example, when the shutter is pressed, the gyro sensor 180B detects the angle at which the electronic device 100 shakes, calculates the distance that the lens module needs to compensate based on the angle, and allows the lens to offset the shake of the electronic device 100 through reverse movement to achieve anti-shake.
  • the gyro sensor 180B can also be used for navigation and somatosensory game scenes.
  • Air pressure sensor 180C is used to measure air pressure.
  • the electronic device 100 calculates the altitude using the air pressure value measured by the air pressure sensor 180C to assist positioning and navigation.
  • Magnetic sensor 180D includes a Hall sensor.
  • the electronic device 100 may utilize the magnetic sensor 180D to detect opening and closing of the flip holster.
• the electronic device 100 can detect the opening and closing of the flip cover based on the magnetic sensor 180D, and then set features such as automatic unlocking upon flipping open based on the detected opening/closing state of the holster or of the flip cover.
• the acceleration sensor 180E can detect the acceleration of the electronic device 100 in various directions (generally three axes). When the electronic device 100 is stationary, the magnitude and direction of gravity can be detected. It can also be used to identify the posture of the electronic device and can be applied to applications such as landscape/portrait screen switching and pedometers.
• The distance sensor 180F is used to measure distance.
  • Electronic device 100 can measure distance via infrared or laser. In one embodiment, when shooting a scene, the electronic device 100 can use the distance sensor 180F to measure distance to achieve fast focusing.
  • Proximity light sensor 180G may include, for example, a light emitting diode (LED) and a light detector, such as a photodiode.
  • the light emitting diode may be an infrared light emitting diode.
  • the electronic device 100 emits infrared light outwardly through the light emitting diode.
  • Electronic device 100 uses photodiodes to detect infrared reflected light from nearby objects. When sufficient reflected light is detected, it can be determined that there is an object near the electronic device 100 . When insufficient reflected light is detected, the electronic device 100 may determine that there is no object near the electronic device 100 .
  • the electronic device 100 can use the proximity light sensor 180G to detect when the user holds the electronic device 100 close to the ear for talking, so as to automatically turn off the screen to save power.
• the proximity light sensor 180G can also be used in holster mode and pocket mode to automatically unlock and lock the screen.
  • the ambient light sensor 180L is used to sense ambient light brightness.
  • the electronic device 100 can adaptively adjust the brightness of the display screen 194 according to the perceived ambient light brightness.
  • the ambient light sensor 180L can also be used to automatically adjust the white balance when taking pictures.
  • the ambient light sensor 180L can also cooperate with the proximity light sensor 180G to detect whether the electronic device 100 is in the pocket to prevent accidental touching.
  • Fingerprint sensor 180H is used to collect fingerprints.
  • the electronic device 100 can use the collected fingerprint characteristics to achieve fingerprint unlocking, access to application locks, fingerprint photography, fingerprint answering of incoming calls, etc.
  • Temperature sensor 180J is used to detect temperature.
  • the electronic device 100 uses the temperature detected by the temperature sensor 180J to execute the temperature processing strategy. For example, when the temperature reported by the temperature sensor 180J exceeds a threshold, the electronic device 100 reduces the performance of a processor located near the temperature sensor 180J in order to reduce power consumption and implement thermal protection.
• in one implementation, when the temperature is lower than another threshold, the electronic device 100 heats the battery 142 to prevent the low temperature from causing the electronic device 100 to shut down abnormally.
• in another implementation, when the temperature is lower than still another threshold, the electronic device 100 boosts the output voltage of the battery 142 to avoid abnormal shutdown caused by low temperature.
• The touch sensor 180K is also known as a "touch device".
  • the touch sensor 180K can be disposed on the display screen 194.
  • the touch sensor 180K and the display screen 194 form a touch screen, which is also called a "touch screen”.
  • the touch sensor 180K is used to detect a touch operation on or near the touch sensor 180K.
  • the touch sensor can pass the detected touch operation to the application processor to determine the touch event type.
  • Visual output related to the touch operation may be provided through display screen 194 .
  • the touch sensor 180K may also be disposed on the surface of the electronic device 100 in a position different from that of the display screen 194 .
  • Bone conduction sensor 180M can acquire vibration signals.
  • the bone conduction sensor 180M can acquire the vibration signal of the vibrating bone mass of the human body's vocal part.
  • the bone conduction sensor 180M can also contact the human body's pulse and receive blood pressure beating signals.
  • the bone conduction sensor 180M can also be provided in an earphone and combined into a bone conduction earphone.
  • the audio module 170 can analyze the voice signal based on the vibration signal of the vocal vibrating bone obtained by the bone conduction sensor 180M to implement the voice function.
  • the application processor can analyze the heart rate information based on the blood pressure beating signal acquired by the bone conduction sensor 180M to implement the heart rate detection function.
  • the buttons 190 include a power button, a volume button, etc.
  • Key 190 may be a mechanical key. It can also be a touch button.
  • the electronic device 100 may receive key inputs and generate key signal inputs related to user settings and function control of the electronic device 100 .
  • the motor 191 can generate vibration prompts.
  • the motor 191 can be used for vibration prompts for incoming calls and can also be used for touch vibration feedback.
  • touch operations for different applications can correspond to different vibration feedback effects.
  • the motor 191 can also respond to different vibration feedback effects for touch operations in different areas of the display screen 194 .
• different application scenarios (such as time reminders, receiving messages, alarm clocks, games, etc.) can also correspond to different vibration feedback effects.
  • the touch vibration feedback effect can also be customized.
• the indicator 192 may be an indicator light, which may be used to indicate charging status and power changes, or to indicate messages, missed calls, notifications, etc.
  • the SIM card interface 195 is used to connect a SIM card.
  • the SIM card can be connected to or separated from the electronic device 100 by inserting it into the SIM card interface 195 or pulling it out from the SIM card interface 195 .
  • the electronic device 100 can support 1 or N SIM card interfaces, where N is a positive integer greater than 1.
  • SIM card interface 195 can support Nano SIM card, Micro SIM card, SIM card, etc. Multiple cards can be inserted into the same SIM card interface 195 at the same time. The types of the plurality of cards may be the same or different.
  • the SIM card interface 195 is also compatible with different types of SIM cards.
  • the SIM card interface 195 is also compatible with external memory cards.
  • the electronic device 100 interacts with the network through the SIM card to implement functions such as calls and data communications.
  • the electronic device 100 uses an eSIM, that is, an embedded SIM card.
  • the eSIM card can be embedded in the electronic device 100 and cannot be separated from the electronic device 100 .
  • the software system of the electronic device 100 may adopt a layered architecture, an event-driven architecture, a microkernel architecture, a microservice architecture, or a cloud architecture.
• the layered architecture software system can be the Android system, the Harmony operating system (OS), or another software system.
  • the embodiment of this application takes the Android system with a layered architecture as an example to illustrate the software structure of the electronic device 100 .
  • FIG. 2B exemplarily shows a schematic diagram of the software architecture of the electronic device 100 .
  • the layered architecture divides the software into several layers, and each layer has clear roles and division of labor.
  • the layers communicate through software interfaces.
  • the Android system is divided into four layers, from top to bottom: application layer, application framework layer, Android runtime and system library, and kernel layer.
  • the application layer can include a series of application packages.
  • the application package can include applications such as camera, calendar, music, navigation, short message, search, gallery, negative screen and browser.
  • the search application can provide search functions.
  • the search can be an independent application, or it can be a functional component encapsulated by other applications such as a browser, which is not limited in this application.
  • the application package can also be replaced by other forms of software such as applets.
  • the following embodiments take a browser application integrating a search functional component as an example for description.
  • the application framework layer provides an application programming interface (API) and programming framework for applications in the application layer.
  • the application framework layer includes some predefined functions.
  • the application framework layer can include a window manager, content provider, view system, phone manager, resource manager, notification manager, etc.
  • a window manager is used to manage window programs.
  • the window manager can obtain the display size, determine whether there is a status bar, lock the screen, capture the screen, etc.
  • Content providers are used to store and retrieve data and make this data accessible to applications.
• The data can include videos, images, audio, calls made and received, browsing history and bookmarks, phone books, etc.
  • the view system includes visual controls, such as controls that display text, controls that display pictures, etc.
  • a view system can be used to build applications.
  • the display interface can be composed of one or more views.
  • a display interface including a text message notification icon may include a view for displaying text and a view for displaying pictures.
  • the phone manager is used to provide communication functions of the electronic device 100 .
• for example, the phone manager manages the call status (including connected, hung up, etc.).
  • the resource manager provides various resources to applications, such as localized strings, icons, pictures, layout files, video files, etc.
  • the notification manager allows applications to display notification information in the status bar, which can be used to convey notification-type messages and can automatically disappear after a short stay without user interaction.
  • the notification manager is used to notify download completion, message reminders, etc.
• the notification manager can also display notifications in the status bar at the top of the system in the form of graphs or scroll bar text, such as notifications of applications running in the background, or display notifications on the screen in the form of dialog windows. For example, text information is prompted in the status bar, a prompt sound is emitted, the electronic device vibrates, the indicator light flashes, etc.
  • Android Runtime includes core libraries and virtual machines. Android runtime is responsible for the scheduling and management of the Android system.
  • the core library contains two parts: one is the functional functions that need to be called by the Java language, and the other is the core library of Android.
  • the application layer and application framework layer run in virtual machines.
  • the virtual machine executes the java files of the application layer and application framework layer into binary files.
  • the virtual machine is used to perform object life cycle management, stack management, thread management, security and exception management, and garbage collection and other functions.
  • System libraries can include multiple functional modules. For example: surface manager (surface manager), media libraries (Media Libraries), 3D graphics processing libraries (for example: OpenGL ES), 2D graphics engines (for example: SGL), etc.
  • the surface manager is used to manage the display subsystem and provides the fusion of 2D and 3D layers for multiple applications.
  • the media library supports playback and recording of a variety of commonly used audio and video formats, as well as static image files, etc.
  • the media library can support a variety of audio and video encoding formats, such as: MPEG4, H.264, MP3, AAC, AMR, JPG, PNG, etc.
  • the 3D graphics processing library is used to implement 3D graphics drawing, image rendering, composition, and layer processing.
  • 2D Graphics Engine is a drawing engine for 2D drawing.
  • the kernel layer is the layer between hardware and software.
  • the kernel layer contains at least display driver, camera driver, audio driver, and sensor driver.
  • the following exemplifies the workflow of the software and hardware of the electronic device 100 in conjunction with a web page search scenario.
• when the touch sensor 180K receives a touch operation, the corresponding hardware interrupt is sent to the kernel layer.
  • the kernel layer processes touch operations into raw input events (including touch coordinates, timestamps of touch operations, and other information). Raw input events are stored at the kernel level.
• the application framework layer obtains the original input event from the kernel layer and identifies the control corresponding to the input event. Taking the touch operation as a touch click operation and the control corresponding to the click operation as the search control of the browser application as an example, the browser calls the interface of the application framework layer to perform a search based on the keyword entered by the user in the search box control of the browser, then starts the display driver by calling the kernel layer, and displays the web page list obtained by the search on the display screen 194. Users can operate any web page card in the web page list to view the detailed content of the corresponding web page, so as to obtain content that meets the search intent.
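• As an editor's illustration only, the following Java sketch outlines the click-to-search control path described above under assumed names; SearchController, WebPage, performSearch() and showWebPageList() are hypothetical and do not describe the actual implementation of this application.

```java
import android.view.View;
import android.widget.EditText;
import java.util.Collections;
import java.util.List;

// Minimal sketch of the click-to-search flow: the search control's click listener
// reads the keyword from the search box, requests and sorts results, then draws
// the web page list. All names are hypothetical.
public class SearchController {
    private final EditText searchBox;   // search bar that receives the keyword
    private final View searchControl;   // control that triggers the search

    public SearchController(EditText searchBox, View searchControl) {
        this.searchBox = searchBox;
        this.searchControl = searchControl;
        this.searchControl.setOnClickListener(v -> onSearchClicked());
    }

    private void onSearchClicked() {
        String keyword = searchBox.getText().toString(); // e.g. "Xi'an travel route"
        List<WebPage> pages = performSearch(keyword);    // send request, sort results
        showWebPageList(pages);                          // render the web page list
    }

    private List<WebPage> performSearch(String keyword) {
        // Placeholder: send the search request (e.g., to a network device) and sort.
        return Collections.emptyList();
    }

    private void showWebPageList(List<WebPage> pages) { /* update the UI */ }

    public static class WebPage {
        public String title, url, source;
    }
}
```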
  • Figure 3 exemplarily shows a schematic diagram of the user interface of a web page search process.
  • the electronic device 100 may display the user interface 310 of the browser.
  • the user interface 310 may include a search bar 311 and a search control 312 , where the search bar 311 is used to receive keywords input by the user, and the search control 312 is used to trigger a search operation for the keywords in the search bar 311 .
• the electronic device 100 can receive the keyword "Xi'an travel route" entered by the user in the search bar 311, and then receive a touch operation (such as a click operation) on the search control 312; in response to the touch operation, it obtains at least one web page related to the above keyword and sorts the at least one web page.
  • the electronic device 100 can display the sorted web pages. For details, see the user interface 320 shown in (B) of FIG. 3 .
  • the user interface 320 may include a search bar 321 and a web page list.
  • the currently searched keyword "Xi'an tourist route” is displayed in the search bar 321.
  • the user interface 320 shows the summary information of the top three web pages in the web page list (which can be called web page cards). In order from top to bottom, they are web page card 322, web page card 323 and web page card 324, where:
  • the web page card 322 is used to indicate the web page 1 with the title "Xi'an Tourist Route Map", the URL "Website 111" and the source "Source aaa”.
  • the web page card 323 is used to indicate the web page 2 with the title "Xi'an Travel Route-Guide", the URL "Website 222" and the source "Source bbb”.
• the web page card 324 is used to indicate the web page 3 with the title "Xi'an Tourism Complete Guide", the URL "Website 333" and the source "Source ccc".
• the navigation bar at the bottom of the user interface 320 includes a card control 325, which is used to trigger the display of a semantic convergence card set. The semantic convergence card set includes content related to the search keyword "Xi'an travel route" in the above-mentioned web pages 1, 2 and 3, but does not include content in these web pages that is irrelevant to that keyword.
• the display of the semantic convergence card set can also be triggered by a gesture or voice input by the user; this application does not limit the way to trigger the display of the semantic convergence card set.
• touch operations include but are not limited to: click, double-click, long press, single-finger long press, multi-finger long press, single-finger slide (including single-finger drag), multi-finger slide (including multi-finger drag), knuckle slide, etc.
• the electronic device 100 may receive a touch operation (such as a click operation) on the card control 325 in the user interface 320 shown in (B) of FIG. 3, and in response to this touch operation, display the semantic convergence card set. For a specific example, see the user interface shown in Figure 4.
  • a web page can include multiple types of content, such as text, pictures, videos, audio and other types of content.
  • any card in the semantic convergence card set may only include one type of content, for example, one card includes static pictures and another card includes dynamic pictures.
  • any card in the semantic convergence card set may include multiple different types of content, for example, a card may include text and pictures.
  • Figure 4 takes the semantic convergence card set as an example including three cards. These three cards include web content of text, picture and video types respectively (which may be referred to as text cards, picture cards and video cards respectively).
  • the semantic convergence card set may include more or fewer cards.
• the semantic convergence card set may only include text cards, or may only include picture cards and video cards. This application does not limit this.
  • the cards included in the semantic convergence card set are set by the user, such as the example shown in Figure 5B below.
• the cards included in the semantic convergence card set are generated by the electronic device based on the content of the web pages. For example, if there is no text content related to the search keyword "Xi'an travel route" in the above-mentioned web pages 1, 2 and 3, the semantic convergence card set only includes picture cards and video cards.
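• The following Java sketch is an editor's illustration, under assumed names, of how a semantic convergence card set could be modeled as one card per content type, with a card created only when keyword-related content of that type exists; CardSetModel, ContentItem, Card and buildCardSet() are hypothetical and not the disclosed implementation.

```java
import java.util.ArrayList;
import java.util.List;

// Data-model sketch of a "semantic convergence card set": each card holds one
// content type, and types with no keyword-related content produce no card.
public class CardSetModel {
    public enum ContentType { TEXT, PICTURE, VIDEO, AUDIO, DOCUMENT }

    public static class ContentItem {
        public ContentType type;
        public String data;        // text, or a URI for picture/video/audio/document
        public String sourceUrl;   // e.g. "Website 111"
        public String sourceName;  // e.g. "Source aaa"
    }

    public static class Card {
        public ContentType type;
        public List<ContentItem> items = new ArrayList<>();
    }

    // Group keyword-related content items into one card per content type.
    public static List<Card> buildCardSet(List<ContentItem> relatedItems) {
        List<Card> cards = new ArrayList<>();
        for (ContentType type : ContentType.values()) {
            Card card = new Card();
            card.type = type;
            for (ContentItem item : relatedItems) {
                if (item.type == type) {
                    card.items.add(item);
                }
            }
            if (!card.items.isEmpty()) {
                cards.add(card); // only create a card when matching content exists
            }
        }
        return cards;
    }
}
```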
  • the electronic device 100 may display the user interface 410 of the browser.
  • the user interface 410 is used to display text cards in the semantic convergence card set (used to display text-type web page content).
  • User interface 410 may include title 411, text card 412, page options 413, save control 414, and new control 415.
  • Title 411 is used to display the currently searched keyword "Xi'an travel route”.
  • Text card 412 includes a title ("text card") and a plurality of text content.
• the multiple text contents displayed in the text card 412 are arranged from top to bottom as follows: text content 4121 ("Popular tourist routes include!"), text content 4122 ("Travel route: Xicang-Muslim Street") and text content 4123 ("Attraction route one: Shaanxi History Museum-Bell and Drum Tower").
• the source information 4121A in the text card 412 is used to indicate that the text content 4121 is the text included in the web page 1 with the URL "Website 111" and the source "Source aaa".
• the source information 4122A in the text card 412 is used to indicate that the text content 4122 is the text included in the web page 3 with the URL "Website 333" and the source "Source ccc".
  • the source information 4123A in the text card 412 is used to indicate that the text content 4123 is the text included in the web page 2 with the URL "website 222" and the source "source bbb”.
• Page option 413 includes three options (expressed as three circles), of which the first option is selected (expressed as a black circle) and the other two options are unselected (expressed as white circles), which can represent that the semantic convergence card set includes three cards and that the text card 412 is the first card in the semantic convergence card set.
  • the save control 414 is used to trigger saving of the semantic convergence card set.
  • the new control 415 is used to trigger the creation of new cards in the semantic convergence card set.
  • the electronic device 100 may receive a sliding operation for the text card 412.
  • the sliding operation may be sliding up and down or sliding left and right.
  • FIG. 4 takes the sliding operation from right to left as an example.
• in response to this sliding operation, the electronic device 100 displays the user interface 420 shown in (B) of FIG. 4.
  • the user interface 420 is similar to the user interface 410 shown in (A) of FIG. 4 , except that the user interface 420 is used to display picture cards in the semantic convergence card set (used to display picture-type web page content).
  • the picture card 421 in the user interface 420 includes a title ("picture card") and a plurality of picture contents.
  • the plurality of picture contents include, for example, picture content 4211 and picture content 4212.
• the source information 4211A in the user interface 420 is used to indicate that the picture content 4211 is a picture included in the web page 1 with the URL "Website 111" and the source "Source aaa".
• the source information 4212A in the user interface 420 is used to indicate that the picture content 4212 is a picture included in the web page 2 with the URL "Website 222" and the source "Source bbb".
  • the second option included in the page options 413 in the user interface 420 is in a selected state, and the other two options are in an unselected state, which can represent that the picture card 421 is the second card in the semantic convergence card set.
  • the electronic device 100 may receive a sliding operation for the picture card 421.
  • the sliding operation may be sliding up and down or sliding left and right.
  • FIG. 4 takes the sliding operation as sliding from right to left as an example.
• in response to this sliding operation, the electronic device 100 displays the user interface 430 shown in (C) of FIG. 4.
  • the user interface 430 is similar to the user interface 410 shown in (A) of FIG. 4 , except that the user interface 430 is used to display video cards in the semantic convergence card set (used to display video type web content).
  • the video card 431 in the user interface 430 includes a title ("video card") and a plurality of video contents.
  • the plurality of video contents include, for example, video content 4311 and video content 4312.
• the source information 4311A in the user interface 430 is used to indicate that the video content 4311 is a video included in the web page 2 with the URL "Website 222" and the source "Source bbb".
• the source information 4312A in the user interface 430 is used to indicate that the video content 4312 is a video included in the web page 1 with the URL "Website 111" and the source "Source aaa".
  • the third option included in the page options 432 in the user interface 430 is in a selected state, and the other two options are in an unselected state, which can represent that the video card 431 is the third card in the semantic convergence card set.
  • the semantic convergence card set may include all or part of the content of the search results.
  • the picture cards in the semantic convergence card set include all or part of the pictures in the search results.
• for example, the search results are the web page 1, web page 2 and web page 3 described in Figure 3.
  • the text card 412 in the user interface 410 shown in (A) of Figure 4 can include the text content in these three web pages.
  • the picture card 421 in the user interface 420 shown in (B) only includes the picture content in web pages 1 and 2, and does not include the picture content in web page 3.
  • the video card 431 only includes the video content in webpage 1 and webpage 2, but does not include the video content in webpage 3.
  • the image content and video content in web page 3 are less relevant to the search keywords. Therefore, the image card 421 does not include the image content in web page 3, and the video card 431 does not include the video content in web page 3.
  • the information of the web page to which the web page content in the semantic convergence card set belongs may not be displayed.
• for example, the text card 412 does not include the source information 4121A indicating the web page 1 to which the text content 4121 belongs. This application does not limit the specific display method of the semantic convergence card set.
  • the content included in the semantic convergence card set may be content extracted from more or less web pages.
  • the electronic device 100 can extract the top N web pages from at least one web page obtained by searching, and use the content related to the search keywords in these N web pages as the content in the semantic convergence card set, where N is a positive integer.
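• As a hedged illustration of the top-N extraction just described (assumed names only; the application does not limit how relevance to the keyword is actually determined), the following Java sketch keeps the keyword-related content of the top N sorted web pages as the material for the card set.

```java
import java.util.ArrayList;
import java.util.List;

// Editor's sketch: take the top N web pages from the sorted search results and keep
// only the content related to the search keyword. CardContentExtractor and
// isRelatedToKeyword() are hypothetical names.
public class CardContentExtractor {
    public List<String> extractRelatedContent(List<List<String>> sortedPagesContent,
                                              String keyword, int topN) {
        List<String> related = new ArrayList<>();
        int pageCount = Math.min(topN, sortedPagesContent.size());
        for (int i = 0; i < pageCount; i++) {                 // only the top N pages
            for (String content : sortedPagesContent.get(i)) {
                if (isRelatedToKeyword(content, keyword)) {   // relevance filter
                    related.add(content);
                }
            }
        }
        return related;
    }

    // Placeholder relevance check; a real system could use any matching strategy.
    private boolean isRelatedToKeyword(String content, String keyword) {
        return content.contains(keyword);
    }
}
```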
  • the content included in the semantic convergence card set may be content extracted from at least one web page selected by the user and related to the search keywords.
  • the electronic device 100 may receive a touch operation (such as a click operation) for the card control 325 in the user interface 320 shown in (B) of FIG. 3, and in response to the touch operation, A user interface for selecting a web page is displayed.
  • the user interface 510 is similar to the user interface 320 shown in (B) of FIG. 3 .
• the plurality of web page cards in the user interface 510 include, for example, a web page card 511 indicating web page 1, a web page card 512 indicating web page 2, and a web page card 513 indicating web page 3.
  • a selection control is also displayed in any web page card, and any selection control is used to select the corresponding web page (that is, the web page indicated by the web page card where it is located) or cancel the selection.
  • the selection control 511A and the selection control 513A are in a selected state, and the selection control 512A is in an unselected state, which can indicate that the web page 1 corresponding to the selection control 511A and the web page 3 corresponding to the selection control 513A have been selected.
  • the function bar at the bottom of the user interface 510 includes a select all control 514 , a card control 515 and an exit control 516 .
  • the select all control 514 is used to select web pages indicated by all web page cards displayed on the user interface 510 .
• the card control 515 is used to trigger the display of a semantic convergence card set whose content is extracted from the above-mentioned user-selected web pages 1 and 3, whereas the content included in the semantic convergence card set whose display is triggered by the card control 325 in the user interface 320 is extracted from at least one web page obtained by the search (in the above-mentioned example, web page 1, web page 2, and web page 3 ranked in the top three).
  • Exit control 516 is used to exit the function of using the selected web page.
• the electronic device 100 may receive a touch operation (such as a click operation) on the card control 515, and in response to the touch operation, extract content related to the search keywords from the selected web pages 1 and 3 and use it as the content of the semantic convergence card set.
  • the electronic device 100 may display the semantic convergence card set, for example, display the user interface 520 shown in (B) of FIG. 5A .
  • the user interface 520 is similar to the user interface 410 shown in (A) of FIG. 4 , except that the text content included in the text card is different.
  • the text card 521 in the user interface 520 includes the text content 4121 in the user interface 410, source information 4121A (indicating that the text content 4121 is text included in web page 1), text content 4122, and source information 4122A (indicating that text content 4122 is web page 3 included text), excluding text included in web page 2 not selected by the user (such as text content 4123 in user interface 410).
• the type of cards, the number of cards, and/or the amount of content included in each card in the semantic convergence card set may be preset by the electronic device.
  • the type of cards, the number of cards, and/or the number of contents included in the cards included in the semantic convergence card set may be determined by the electronic device in response to a user operation.
• the electronic device 100 may receive a touch operation (such as a click operation) on the card control 325 in the user interface 320 shown in (B) of FIG. 3, and in response to the touch operation, display the user interface 530 shown in FIG. 5B.
  • the user can also enter the user interface 530 shown in Figure 5B through card settings.
  • the user interface 530 includes a setting interface 531.
  • the setting interface 531 includes a title ("Card Personalization Settings"), a setting description 5311 ("The type of required card and the amount of content included in the card can be set") and multiple card options. Multiple card options include, for example, text card options 5312, picture card options 5313, video card options 5314, audio card options 5315, document card options 5316, etc., where the audio card is a card that includes audio-type web content.
  • the document card is a card that includes web content of document type (such as documents in word, excel, and ppt formats).
  • a selection control is displayed in any option, and any selection control is used to select to add the corresponding card to the semantic convergence card set or cancel the selection.
  • the selection control 5312A displayed in option 5312 is used to set to add the corresponding text card in the semantic convergence card set or to cancel the setting.
  • the content option 5312B may also be displayed in the option 5312.
• the content option 5312B includes the characters "including at most... text content", where a setting control 5312C is displayed at "...". The box in the middle of the setting control 5312C is used to display the currently set maximum number of text contents included in the text card.
• the box on the left in the setting control 5312C is used to increase the maximum number of text contents included in the text card, and the box on the right in the setting control 5312C is used to reduce the maximum number of text contents included in the text card.
• the selection controls displayed in options 5312, 5313 and 5314 are all selected, and the selection controls displayed in options 5315 and 5316 are all unselected, which can represent that the current settings add to the semantic convergence card set the text card corresponding to option 5312, the picture card corresponding to option 5313, and the video card corresponding to option 5314.
• the setting control 5312C in option 5312 displays the number 3, which can represent that the maximum amount of content included in the text card is set to 3; the setting control in option 5313 displays the number 2, which can represent that the maximum amount of content included in the picture card is set to 2; and the setting control in option 5314 displays the number 2, which can represent that the maximum amount of content included in the video card is set to 2.
  • the user interface 530 also includes a confirmation control 5317 and a cancel control 5318 for canceling the display of the semantic convergence card set.
  • the electronic device 100 may receive a touch operation (such as a click operation) on the determination control 5317, and in response to the touch operation, display the semantic convergence card set set above. For details, see the semantic convergence card set shown in FIG. 4 .
  • the user interface 530 shown in FIG. 5B may also be displayed on a setting interface such as a system setting interface of the electronic device 100 or a setting interface of a browser application.
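• The following Java sketch is an editor's illustration of how the card personalization settings of FIG. 5B (enabled card types and the maximum amount of content per card) could be represented; CardPersonalizationSettings and its methods are hypothetical names, and the actual storage format is not specified in this application.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch of a settings object: which card types are enabled and how much content
// each card may include at most. All names are hypothetical.
public class CardPersonalizationSettings {
    public enum CardType { TEXT, PICTURE, VIDEO, AUDIO, DOCUMENT }

    private final Map<CardType, Integer> maxContentPerCard = new LinkedHashMap<>();

    public void enableCard(CardType type, int maxContent) {
        maxContentPerCard.put(type, maxContent);
    }

    public void disableCard(CardType type) {
        maxContentPerCard.remove(type);
    }

    public boolean isEnabled(CardType type) {
        return maxContentPerCard.containsKey(type);
    }

    public int maxContent(CardType type) {
        return maxContentPerCard.getOrDefault(type, 0);
    }

    // Settings shown in FIG. 5B: text card (max 3), picture card (max 2), video card (max 2)
    public static CardPersonalizationSettings exampleFromFig5B() {
        CardPersonalizationSettings s = new CardPersonalizationSettings();
        s.enableCard(CardType.TEXT, 3);
        s.enableCard(CardType.PICTURE, 2);
        s.enableCard(CardType.VIDEO, 2);
        return s;
    }
}
```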
• when the electronic device 100 displays the semantic convergence card set (for example, displays the user interface 410 shown in (A) of FIG. 4), it may receive a touch operation (for example, a double-click operation) on any content in the semantic convergence card set, and in response to the touch operation, display the detailed content of the web page to which the content belongs. See Figure 6 for a specific example.
  • the electronic device 100 can display the user interface 410 shown in (A) of FIG. 4 , and the source information 4121A in the user interface 410 is used to indicate that the text content 4121 is a website address of "website address 111" and Text included in web page 1 with source "source aaa”.
• the electronic device 100 may receive a touch operation (such as a double-click operation) on the text content 4121, and in response to the touch operation, display the detailed content of the web page 1 to which the text content 4121 belongs. For a specific example, see the user interface 600 shown in (B) of FIG. 6.
  • the user interface 600 may include a search bar 601 in which the URL of the currently displayed web page 1 and the source "URL 111 (source aaa)" are displayed.
  • the user interface 600 also includes the title 602 of the web page 1 ("Xi'an Tourism Route Map") and various types of content such as text and pictures.
• the text content 4121 targeted by the above touch operation is displayed prominently in the user interface 600 (for example, highlighted).
  • the user can obtain information related to the above text content 4121 through the user interface 600, such as the display position and context in the web page 1.
  • the electronic device 100 responds to the above-mentioned touch operation on any content in the semantic convergence card set and displays web page content related to the content.
• in one implementation, the displayed web page can be positioned to the location of the web page content in the web page to which it belongs.
• the electronic device 100 may display the user interface 2540 shown in FIG. 25C in response to a touch operation on the associated content 2512B in the user interface 2530 shown in FIG. 25B, and the associated content 2512B shown in the user interface 2540 is positioned at the top of the web page 1 to which it belongs.
  • the web page content can be highlighted in the web page to which the web page content belongs.
• the electronic device 100 may display the user interface 600 shown in (B) of FIG. 6, and the text content 4121 shown in the user interface 600 is highlighted in the web page 1 to which it belongs.
  • for another example, the electronic device 100 may display the user interface 2540 shown in FIG. 25C in response to a touch operation acting on the associated content 2512B in the user interface 2530 shown in FIG. 25B, and the associated content 2512B shown in the user interface 2540 is highlighted in the web page 1 to which it belongs.
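As a rough illustration of the locate-and-highlight behavior described above, the following Kotlin sketch finds a card content's text within the plain text of its web page and marks the matching span. An actual browser would instead scroll the rendered page and apply a highlight style, so the Highlight type, locateInPage, and renderHighlighted names are purely hypothetical.

```kotlin
// Illustrative sketch: locate a card content's text inside the page text and mark it.
data class Highlight(val start: Int, val endExclusive: Int)

/** Returns the character range of [content] in [pageText], or null if not present. */
fun locateInPage(pageText: String, content: String): Highlight? {
    val start = pageText.indexOf(content)
    return if (start >= 0) Highlight(start, start + content.length) else null
}

/** Renders the page text with the located span wrapped in <mark> tags. */
fun renderHighlighted(pageText: String, h: Highlight): String =
    pageText.substring(0, h.start) + "<mark>" +
    pageText.substring(h.start, h.endExclusive) + "</mark>" +
    pageText.substring(h.endExclusive)

fun main() {
    val page = "Be sure to go to Qin Shihuang's Mausoleum when going to Xi'an. The ticket is 120 yuan."
    val hit = locateInPage(page, "The ticket is 120 yuan.")
    if (hit != null) println(renderHighlighted(page, hit))
}
```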
  • when the electronic device 100 displays the semantic convergence card set (for example, displays the user interface 410 shown in (A) of FIG. 4), it may receive a touch operation (for example, an upward or downward drag operation, or a click operation) on any card in the semantic convergence card set, and in response to the touch operation, delete the card from the semantic convergence card set.
  • the electronic device 100 may display the user interface 410 shown in (A) of FIG. 4 , which is used to display the text card 412 in the semantic convergence card set.
  • the electronic device 100 may receive a drag operation for the text card 412.
  • the drag operation may be, for example, dragging up and down or sliding left and right; FIG. 7A takes the drag operation being dragging from bottom to top as an example.
  • in response to the drag operation, the electronic device 100 deletes the text card 412 from the semantic convergence card set and can display another card in the semantic convergence card set, for example, display the user interface 700 shown in (B) of FIG. 7A.
  • the user interface 700 is used to display the picture card 421 in the semantic convergence card set.
  • the page option 701 in the user interface 700 only includes two options, and the first option is selected, which can represent that the semantic convergence card set includes two cards and the picture card 421 is the first of these two cards.
  • the electronic device 100 can display the user interface 410 shown in (A) of FIG. 4 .
  • the user interface 410 is used to display the text card 412 in the semantic convergence card set; when the card is in an editable state (for example, the user selects a control for editing the card, or makes the card editable in another way), a delete control 4124 is displayed in the text card 412.
  • the electronic device 100 may receive a touch operation (such as a click operation) for the delete control 4124, and in response to the touch operation, delete the text card 412 in the semantic convergence card set.
  • the user interface 700 shown in (B) of FIG. 7B is similar to the user interface 700 shown in (B) of FIG. 7A; the difference is that the picture card 421 displayed in the user interface 700 shown in (B) of FIG. 7B may also display a delete control, which is used to trigger deletion of the picture card 421.
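The delete-on-drag behavior can be pictured as removing one card from an ordered card set, after which another card becomes the one displayed. The following Kotlin sketch assumes a simple Card/CardSet model that is not defined in this application.

```kotlin
// Illustrative sketch: removing a card from the semantic convergence card set.
data class Card(val title: String, val contents: MutableList<String> = mutableListOf())

class CardSet(val cards: MutableList<Card>) {
    var currentIndex: Int = 0
        private set

    /** Deletes the card at [index]; the card now at that position (or the last one) becomes current. */
    fun deleteCard(index: Int) {
        require(index in cards.indices) { "no card at $index" }
        cards.removeAt(index)
        currentIndex = if (cards.isEmpty()) 0 else index.coerceAtMost(cards.lastIndex)
    }
}

fun main() {
    val set = CardSet(mutableListOf(Card("text card"), Card("picture card"), Card("video card")))
    set.deleteCard(0) // e.g. triggered by a bottom-to-top drag on the text card
    println(set.cards.map { it.title }) // [picture card, video card]
}
```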
  • when the electronic device 100 displays the semantic convergence card set (for example, displays the user interface 410 shown in (A) of FIG. 4), it may receive a touch operation (for example, a long press operation) on any card in the semantic convergence card set, and in response to the touch operation, display a user interface for editing the card (referred to as the editing interface). See FIG. 8 for a specific example.
  • the electronic device 100 may display the user interface 410 shown in (A) of FIG. 4 .
  • the electronic device 100 may receive a long press operation for the text card 412 in the user interface 410.
  • the long press operation may be, for example, a single-finger long press or a two-finger long press; in response to the long press operation, the user interface 800 shown in (B) of FIG. 8 is displayed.
  • the user interface 800 is used to implement the editing function of the text card 412.
  • a delete control is displayed in any text content included in the text card 412 shown in the user interface 800.
  • the text content 4121 is used as an example for explanation, and other text content is similar: a delete control 810 is displayed in the text content 4121 for deleting the text content 4121 from the text card 412.
  • the user interface 800 also includes a confirm control 820 and a cancel control 830.
  • the confirm control 820 is used to save the current editing operation (that is, save the text card 412 as displayed in the user interface 800), and the cancel control 830 is used to cancel the current editing operation (that is, keep the text card 412 as it was before editing).
  • for the above editing operations, please refer to the editing operations shown in FIG. 9 to FIG. 12 below.
  • the electronic device 100 may receive a touch operation (such as a click operation) for the text content 4121 in the user interface 800 shown in (B) of FIG. 8, and in response to the touch operation, display the editing interface of the text content 4121. For a specific example, see the user interface 910 shown in (A) of FIG. 9.
  • the user interface 910 is used to implement the editing function of the text content 4121 in the text card 412 .
  • a cursor 911 is displayed in front of the characters "popular travel routes include" included in the text content 4121, and the cursor 911 is used to indicate the insertion point of text editing.
  • the electronic device 100 can receive characters input by the user and display the characters in front of the cursor 911.
  • the electronic device 100 can receive the characters "Plan 1:" input by the user and display the user interface 920 shown in (B) of FIG. 9. The text content 4121 in the text card 412 shown in the user interface 920 includes the characters "Plan 1: Popular travel routes include", and the cursor 911 is displayed between the characters "Plan 1:" and the characters "Popular travel routes include".
  • when the electronic device 100 displays the user interface 910 shown in (A) of FIG. 9, it can also delete characters included in the text content 4121 in response to a user operation (such as a voice input of "delete"); this application does not limit the specific editing method of any content in the card set.
  • the electronic device 100 may receive a touch operation (such as a click operation) on the delete control 810 in the user interface 800 shown in (A) of FIG. 10 (that is, the user interface 800 shown in (B) of FIG. 8), and in response to the touch operation, delete the text content 4121 in the user interface 800.
  • at this time, the electronic device 100 can display the user interface 1000 shown in (B) of FIG. 10.
  • the text card 412 in the user interface 1000 includes text content 4122 and text content 4123, but does not include text content 4121.
  • the electronic device 100 may receive a touch operation (for example, a downward or upward drag operation) for any text content in the user interface 800 shown in (B) of FIG. 8, and in response to the touch operation, adjust the display position of the text content in the text card (which can also be understood as adjusting the arrangement order of the text content in the text card). See FIG. 11 for a specific example.
  • the electronic device 100 may display the user interface 800 shown in (B) of FIG. 8 .
  • the electronic device 100 can receive a drag operation acting on the text content 4122 in the text card 412 shown in the user interface 800.
  • the drag operation may be dragging up and down or dragging left and right; FIG. 11 takes, as an example, the user operation of dragging the text content 4122 in the text card 412 shown in the user interface 800 upward to the position where the text content 4121 is located.
  • in response to the drag operation, the electronic device 100 exchanges the display positions of the text content 4121 and the text content 4122, and at this time, the user interface 1100 shown in (B) of FIG. 11 can be displayed.
  • the electronic device 100 may display the user interface 1100 in response to a user operation of dragging the text content 4121 in the user interface 800 downward to the location of the text content 4122.
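Adjusting the arrangement order of contents within a card amounts to moving an item inside an ordered list; the Kotlin sketch below (with a hypothetical moveItem helper) illustrates the swap described for the text content 4121 and the text content 4122.

```kotlin
// Illustrative sketch: changing the display order of contents inside a card.
fun <T> MutableList<T>.moveItem(from: Int, to: Int) {
    require(from in indices && to in indices) { "positions out of range" }
    val item = removeAt(from)
    add(to, item)
}

fun main() {
    // Contents of the text card in their current display order.
    val textCard = mutableListOf("text content 4121", "text content 4122", "text content 4123")
    // Dragging text content 4122 up to the position of text content 4121:
    textCard.moveItem(from = 1, to = 0)
    println(textCard) // [text content 4122, text content 4121, text content 4123]
}
```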
  • the electronic device 100 may receive a touch operation (such as a drag operation to the left or right) for any text content in the user interface 800 shown in (B) of FIG. 8, and in response to the touch operation, adjust the display position of the text content in the semantic convergence card set. See FIG. 12 for a specific example.
  • the electronic device 100 may display a user interface 1210 , and the user interface 1210 may include a text card 412 and a preview card 1211 in the user interface 800 shown in (B) of FIG. 8 .
  • the user interface 1210 may be a user interface displayed by the electronic device 100 in response to a drag operation on the text content 4121 in the user interface 800 shown in (B) of FIG. 8, for example, a drag operation of dragging the text content 4121 in the user interface 800 to the right to the position where the text content 4121 is located in the user interface 1210.
  • the preview card 1211 is used to indicate the picture card arranged behind the text card 412 in the semantic convergence card set.
  • the electronic device 100 can receive a drag operation acting on the text content 4121.
  • the drag operation may be dragging up and down or dragging left and right; FIG. 12 takes, as an example, a user operation of dragging toward the right edge of the screen (which can also be understood as a user operation of dragging the text content 4121 to the location of the preview card 1211).
  • in response to the drag operation, the electronic device 100 moves the text content 4121 to the picture card indicated by the preview card 1211 for display; at this time, the user interface 1220 shown in (B) of FIG. 12 can be displayed. Different from the picture card 421 in the user interface 420 shown in (B) of FIG. 4, the picture card 421 in the user interface 1220 includes not only the picture content 4211 and the picture content 4212 (not shown), but also the text content 4121.
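Moving a content item into another card of the card set can be sketched as removing it from the source card's content list and appending it to the target card's list; the Card type and moveContent function below are illustrative only.

```kotlin
// Illustrative sketch: moving one content from the text card into the picture card.
data class Card(val title: String, val contents: MutableList<String>)

fun moveContent(source: Card, target: Card, content: String) {
    if (source.contents.remove(content)) {
        target.contents.add(content)
    }
}

fun main() {
    val textCard = Card("text card", mutableListOf("text content 4121", "text content 4122"))
    val pictureCard = Card("picture card", mutableListOf("picture content 4211", "picture content 4212"))
    // Dragging text content 4121 toward the preview of the picture card:
    moveContent(textCard, pictureCard, "text content 4121")
    println(pictureCard.contents) // [picture content 4211, picture content 4212, text content 4121]
}
```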
  • when the electronic device 100 displays the semantic convergence card set (for example, displays the user interface 410 shown in (A) of FIG. 4), it may receive a touch operation (for example, a drag operation to the left or right) on any card in the semantic convergence card set, and in response to the touch operation, merge the card and another card into one card. See FIG. 13 for a specific example.
  • the electronic device 100 may display the user interface 410 shown in (A) of FIG. 4 .
  • the electronic device 100 can receive a drag operation acting on the text card 412 in the user interface 410.
  • the drag operation may be dragging up and down or dragging left and right; FIG. 13 takes, as an example, a user operation of dragging the text card 412 toward the right edge of the screen.
  • in response to the drag operation, the electronic device 100 displays the user interface 1310 shown in (B) of FIG. 13. It should be noted that when the electronic device 100 displays the user interface 1310, the user's finger still touches the screen and is located on the text card 412, and the text card 412 is displayed above the picture card 421 in the user interface 1310.
  • the electronic device 100 can merge the text card 412 and the picture card 421 in response to the user's release operation (that is, the finger leaves the screen, which can be understood as part of the above-mentioned drag operation), for example, move the content included in the text card 412 into the picture card 421; at this time, the user interface 1320 shown in (C) of FIG. 13 can be displayed.
  • the user interface 1320 may include a new card 1321 and a page option 1322, which is used to indicate that the semantic convergence card set includes two cards, and the currently displayed new card 1321 is the first card of the two cards.
  • the new card 1321 is a card obtained by merging the text card 412 and the picture card 421. It not only includes the picture content 4211 and the picture content 4212 in the picture card 421, but also includes the text content 4121, text content 4122 and text content 4123 in the text card 412.
  • the user interface 1320 shown in (D) of FIG. 13 may be a user interface displayed by the electronic device 100 in response to a sliding operation for the new card 1321 in the user interface 1320 shown in (C) of FIG. 13; the sliding operation is, for example, sliding left and right or sliding up and down, and FIG. 13 takes the sliding operation being sliding from bottom to top as an example.
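Merging two cards, as in the new card 1321, can be thought of as concatenating the contents of the card that was dropped on with the contents of the dragged card; the following Kotlin sketch uses an assumed Card type and a default title of "New Card".

```kotlin
// Illustrative sketch: merging the dragged card into the card it was dropped on.
data class Card(val title: String, val contents: MutableList<String>)

/** Returns a new card holding the target card's contents followed by the dragged card's contents. */
fun mergeCards(dragged: Card, droppedOn: Card, newTitle: String = "New Card"): Card =
    Card(newTitle, (droppedOn.contents + dragged.contents).toMutableList())

fun main() {
    val textCard = Card("text card", mutableListOf("text content 4121", "text content 4122", "text content 4123"))
    val pictureCard = Card("picture card", mutableListOf("picture content 4211", "picture content 4212"))
    val merged = mergeCards(dragged = textCard, droppedOn = pictureCard)
    println(merged.contents.size) // 5 contents, as in the new card 1321
}
```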
  • the electronic device 100 may also receive a touch operation (such as a drag operation to the left or right) for any card in the semantic convergence card set and, in response to the touch operation, adjust the display position of the card in the semantic convergence card set; this can also be understood as: when the electronic device displays the editing interface of the card, the display position of the card is adjusted in response to the touch operation on the card.
  • the electronic device 100 may switch the display positions of the text card 412 and the picture card 421 in response to a user operation of dragging the text card 412 to the right in the user interface 800 shown in (B) of FIG. 8 .
  • at this time, the picture card 421 is the first card and the text card 412 is the second card.
  • for example, when the electronic device displays the editing interface of a card, in response to a touch operation on the card (such as a drag operation to the left or right), the card and another card are merged into one card; when the electronic device displays the semantic convergence card set, in response to a touch operation (such as a drag operation to the left or right) on any card in the semantic convergence card set, the display position of the card in the semantic convergence card set is adjusted. This application does not limit the user operation that triggers adjusting the display position of a card in the semantic convergence card set.
  • when the electronic device 100 displays the semantic convergence card set (for example, displays the user interface 410 shown in (A) of FIG. 4), it may receive a touch operation (such as a click operation) on the newly added control 415, and in response to the touch operation, create a new card in the semantic convergence card set. See FIG. 14 for a specific example.
  • the electronic device 100 may display the user interface 410 shown in (A) of FIG. 4 .
  • the electronic device 100 may receive a touch operation (such as a click operation) for the newly added control 415 in the user interface 410, and in response to the touch operation, create a new card that does not include any content.
  • at this time, the user interface 1400 shown in (B) of FIG. 14 may be displayed.
  • User interface 1400 may include card 1410 and page options 1420.
  • Card 1410 may include a title ("Custom Card 1") and add control 1410A for adding content to card 1410.
  • the page option 1420 includes four options, and the fourth option is selected, which can represent that the semantic convergence card set includes four cards, and the currently displayed card 1410 is the fourth card among these four cards.
  • the electronic device 100 may receive a touch operation (such as a click operation) for the add control 1410A in the user interface 1400 shown in (B) of FIG. 14, and in response to the touch operation, display the user interface 1510 shown in (A) of FIG. 15.
  • the user interface 1510 includes a title 1511 ("Custom Card 1"), which may indicate that the user interface 1510 is used to add new content to the card titled "Custom Card 1" (referred to as custom card 1), and the custom card 1 is the card 1410 in the user interface 1400 shown in (B) of FIG. 14.
  • the user interface 1510 also includes a determination control 1512 and a plurality of card options, for example, a text card option 1513, a picture card option 1514, and a video card option 1515; the selection control displayed in any option is used to select adding the content included in the corresponding card to the custom card 1, or to cancel the selection.
  • the selection control 1513A displayed in option 1513 and the selection control 1514A displayed in option 1514 are in a selected state, and the selection control 1515A displayed in option 1515 is in an unselected state, which can represent that the content included in the text card and the picture card is currently selected to be added to the custom card 1.
  • the determination control 1512 is used to trigger adding the content included in the selected card to the custom card 1.
  • the electronic device 100 may receive a touch operation (for example, a click operation) on the determination control 1512 and, in response to the touch operation, display the user interface 1520 shown in (B) of FIG. 15 .
  • the card 1410 in the user interface 1520 includes the text content 1521A included in the selected text card (that is, the text card 412 in the user interface 410 shown in (A) of FIG. 4) and the picture content 1521B included in the selected picture card (that is, the picture card 421 in the user interface 420 shown in (B) of FIG. 4), and does not include the video content included in the unselected video card.
  • Card 1410 in user interface 1520 also includes add controls 1410A, which can be used to continue adding content to card 1410.
  • the card 1410 in the user interface 1520 can also be replaced with the new card 1321 in the user interface 1320 shown in (C) and (D) of FIG. 13 (in this case, the title needs to be changed to "Custom Card 1"); the new card 1321 in the user interface 1320 shown in (C) and (D) of FIG. 13 can also be replaced with the card 1410 in the user interface 1520 (in this case, the title needs to be changed to "New Card").
  • the electronic device 100 may receive a touch operation (such as a click operation) for the add control 1410A in the user interface 1400 shown in (B) of FIG. 14, and in response to the touch operation, display the user interface 1610 shown in (A) of FIG. 16.
  • the user interface 1610 includes a title 1611 ("Customized Card 1").
  • the user interface 1610 is similar to the user interface 1510 shown in (A) of FIG. 15, and both are used to customize the card; the difference is that the user interface 1610 can be used to add any single content included in a card to the custom card 1.
  • the user interface 1610 includes a determination control 1612 and a plurality of card options.
  • the plurality of card options include, for example, a text card option 1613, a picture card option 1614, and a video card option 1615. Any one of the options displays at least one content included in the corresponding card, as well as a selection control corresponding to each content; any selection control is used to select adding the corresponding content to the custom card 1, or to cancel the selection.
  • the option 1613 displays the text content 4121, text content 4122 and text content 4123 included in the text card, as well as the selection controls corresponding to the three text contents respectively, for example, the selection control 1613A corresponding to the text content 4121.
  • the selection control 1613A corresponding to the text content 4121 and the selection control 1614A corresponding to the picture content 4211 in the picture card option 1614 are in a selected state, and the other selection controls are in an unselected state, which can represent that the text content 4121 and the picture content 4211 are currently selected to be added to the custom card 1.
  • the electronic device 100 may receive a touch operation (for example, a click operation) on the determination control 1612 and, in response to the touch operation, display the user interface 1620 shown in (B) of FIG. 16 .
  • the card 1410 in the user interface 1620 includes the above-mentioned selected text content 4121 and the above-mentioned selected image content 4211, and does not include other unselected content.
  • the card 1410 in the user interface 1620 also includes the add control 1410A, which can be used to continue adding content to the card 1410.
  • content can also be added to cards such as the text card, picture card, and video card in the semantic convergence card set in the manner shown in FIG. 15 or FIG. 16.
  • when the electronic device 100 displays the semantic convergence card set (for example, displays the user interface 410 shown in (A) of FIG. 4), it can receive a touch operation (for example, a click operation) for the newly added control 415, and in response to the touch operation, display the content selection interface shown in (A) of FIG. 15 or (A) of FIG. 16.
  • the electronic device 100 can then generate a new card based on the web page content selected by the user through the content selection interface, such as the card 1410 in the user interface 1520 shown in (B) of FIG. 15, or the card 1410 in the user interface 1620 shown in (B) of FIG. 16.
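Generating a custom card from the content selection interface can be sketched as collecting every ticked content from the existing cards, in order; the buildCustomCard function below is a simplified assumption about how that could look.

```kotlin
// Illustrative sketch: building a custom card from contents ticked in the selection interface.
data class Card(val title: String, val contents: List<String>)

/** Collects every selected content, keeping the order of the source cards. */
fun buildCustomCard(title: String, sourceCards: List<Card>, selected: Set<String>): Card =
    Card(title, sourceCards.flatMap { card -> card.contents.filter { it in selected } })

fun main() {
    val textCard = Card("text card", listOf("text content 4121", "text content 4122", "text content 4123"))
    val pictureCard = Card("picture card", listOf("picture content 4211", "picture content 4212"))
    // As in the example above, only text content 4121 and picture content 4211 are ticked.
    val custom = buildCustomCard(
        "Custom Card 1",
        listOf(textCard, pictureCard),
        setOf("text content 4121", "picture content 4211"),
    )
    println(custom.contents) // [text content 4121, picture content 4211]
}
```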
  • the electronic device 100 may receive a touch operation (such as a drag operation to the left or right) for any content in a card, and in response to the touch operation, move the content to the card 1410 in the user interface 1400 shown in (B) of FIG. 14 for display. See FIG. 17A for a specific example.
  • the electronic device 100 may display the user interface 1210 shown in (A) of FIG. 12 , and the text card 412 in the user interface 1210 includes text content 4121.
  • the electronic device 100 can receive a drag operation acting on the text content 4121.
  • the drag operation may be dragging left and right or dragging up and down; FIG. 17A takes, as an example, a user operation of dragging the text content 4121 toward the right edge of the screen.
  • in response to the drag operation, the electronic device 100 moves the text content 4121 to the card 1410 in the user interface 1400 shown in (B) of FIG. 14 for display; for a specific example, see the user interface 1710 shown in (B) of FIG. 17A.
  • the card 1410 in the user interface 1710 includes text content 4121 and an adding control 1410A.
  • the adding control 1410A is used, for example, to add content to the card 1410 in the manner shown in FIG. 15 or FIG. 16 .
  • the electronic device 100 can display a user interface 1720, which is similar to the user interface 800 shown in (B) of FIG. 8; the difference is that a selection control is displayed in any text content included in the text card 412 shown in the user interface 1720.
  • the selection control 1721 displayed in the text content 4121 is in a selected state, and the selection controls displayed in the text content 4122 and the text content 4123 are in an unselected state, which can represent that the text content 4121 is currently selected.
  • the electronic device 100 can also display the user interface 1730 shown in (B) of FIG. 17B.
  • the user interface 1730 is similar to the user interface 1720 , except that the user interface 1730 is used to implement the editing function of the picture card 421 .
  • the selection control 1731 displayed in the picture content 4211 is in a selected state, and the selection control displayed in the picture content 4212 is in an unselected state, which can represent that the picture content 4211 is currently selected.
  • the electronic device 100 may receive a touch operation on the picture content 4211 in the user interface 1730.
  • the touch operation may be dragging left and right or dragging up and down; FIG. 17B takes, as an example, a user operation of dragging the picture content 4211 toward the right edge of the screen.
  • in response to the touch operation, the electronic device 100 moves the selected text content 4121 and picture content 4211 to the card 1410 in the user interface 1400 shown in (B) of FIG. 14 for display.
  • the card 1410 in the user interface 1740 includes the above-selected text content 4121 and picture content 4211, and also includes an add control 1410A.
  • the add control 1410A is used to add content to the card 1410 in the manner shown in Figure 15 or Figure 16, for example.
  • when the electronic device 100 displays the semantic convergence card set (for example, displays the user interface 410 shown in (A) of FIG. 4), it may receive a touch operation (for example, a click operation) on the save control 414, and in response to the touch operation, save the semantic convergence card set.
  • the electronic device 100 may display the user interface 410 shown in (A) of FIG. 4 .
  • the electronic device 100 may receive a touch operation (for example, a click operation) on the save control 414 in the user interface 410, and in response to the touch operation, display the user interface 1800 shown in (B) of FIG. 18 .
  • the user interface 1800 is used to select the display position of the entrance for secondary loading of the semantic convergence card set.
  • the prompt box 1810 in the user interface 1800 includes a title ("display position") and multiple display position options.
  • the multiple display position options include, for example, the favorites option 1811 in the browser, the desktop option 1812, the negative one screen option 1813, and the gallery option 1814.
  • the selection control displayed in any one of the options is used to select saving the semantic convergence card set to the display location corresponding to the option, or to cancel this selection.
  • for example, the selection control 1813A displayed in the option 1813 and the selection control 1814A displayed in the option 1814 are in a selected state, which can represent that the semantic convergence card set is currently selected to be saved to the favorites of the browser application, the desktop, the negative one screen application, and the gallery application. Understandably, more or fewer selection controls may be selected.
  • the user interface 1800 also includes an OK control 1815 and a cancel control 1816 for canceling saving the semantic convergence card set.
  • the electronic device 100 may receive a touch operation (such as a click operation) on the determination control 1815, and in response to the touch operation, save the semantic convergence card set to the above-mentioned selected display position.
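Saving the card set to the selected display positions can be modeled as persisting the card set together with the set of chosen entries; the DisplayLocation enum and saveCardSet function below are illustrative placeholders for whatever registration each entry (favorites item, desktop widget, gallery picture, and so on) actually requires.

```kotlin
// Illustrative sketch: recording which entries should expose the saved card set for secondary loading.
enum class DisplayLocation { BROWSER_FAVORITES, DESKTOP, NEGATIVE_ONE_SCREEN, GALLERY }

data class SavedCardSet(val title: String, val locations: Set<DisplayLocation>)

fun saveCardSet(title: String, selected: Set<DisplayLocation>): SavedCardSet {
    // A real implementation would persist the card contents and register an entry
    // (favorites item, widget, gallery picture, ...) for every selected location.
    return SavedCardSet(title, selected)
}

fun main() {
    val saved = saveCardSet(
        title = "Xi'an Tourist Route",
        selected = setOf(
            DisplayLocation.BROWSER_FAVORITES,
            DisplayLocation.DESKTOP,
            DisplayLocation.NEGATIVE_ONE_SCREEN,
            DisplayLocation.GALLERY,
        ),
    )
    println(saved.locations.size) // 4
}
```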
  • the user can view the semantic convergence card set again through the favorites of the browser application. For a specific example, see Figure 19A.
  • the electronic device 100 can display a user interface 1910 of the browser.
  • the navigation bar at the bottom of the user interface 1910 can include a control 1911 , and the control 1911 is used to trigger opening of favorites.
  • the electronic device 100 may receive a touch operation (for example, a click operation) on the control 1911, and in response to the touch operation, display the user interface 1920 shown in (B) of FIG. 19A.
  • the user interface 1920 includes a title 1921 ("Favorites") and a favorites list 1922.
  • the favorites list 1922 includes, for example, an option 1922A for a favorited card set and options for a plurality of favorited web pages.
  • the characters included in the option 1922A are the titles of the favorited card sets.
  • the electronic device 100 may receive a touch operation (such as a click operation) for option 1922A, and in response to the touch operation, display the specific content of the collected semantic convergence card set, such as displaying the user interface 410 shown in (C) of FIG. 19A (i.e., User interface 410 shown in (A) of Figure 4).
  • the electronic device 100 may also respond to a touch operation (for example, a click operation) on the control 1911 in the user interface 1910 shown in (A) of FIG. 19A by displaying the user interface 1930 shown in FIG. 19B.
  • the user interface 1930 includes multiple page options, such as page option 1931, page option 1932, and page option 1933.
  • Page option 1931 is used to trigger display of a list of favorite web pages
  • page option 1932 is used to trigger display of a list of favorite card sets.
  • page option 1933 is used to trigger the display of a list of other content in the collection.
  • the page option 1932 in the user interface 1930 is in a selected state, indicating that what is currently displayed is a list 1934 of collected card sets.
  • the list 1934 includes, for example, an option 1934A indicating the card set titled "Xi'an Tourist Route" and an option indicating the card set titled "Wuhan Tourist Route".
  • the electronic device 100 may display the specific content of the corresponding card set in response to a touch operation (such as a click operation) on option 1934A, such as displaying the user interface 410 shown in (A) of FIG. 4 .
  • the user can view the semantic convergence card set again through the desktop. See Figure 20 for a specific example.
  • the electronic device 100 can display a user interface 2010.
  • the user interface 2010 is a desktop.
  • the control 2011 in the user interface 2010 is used to indicate a collection of collected cards.
  • the control 2011 includes a title 2011A, application information 2011B, a card control 2011C and a page turning control 2011D.
  • the characters included in the title 2011A are the title "Xi'an Tourist Route" of the collected card set.
  • the application information 2011B is used to indicate that the collected card set belongs to the browser application.
  • the card control 2011C is used to display text cards in the collection of cards (ie, text cards 412 in the user interface 410 shown in (A) of FIG. 4 ).
  • the page turning control 2011D is used to trigger switching of card types displayed in the card control 2011C.
  • the control 2011 can switch the card type displayed in the card control 2011C every preset time period; for example, after 5 seconds, the electronic device 100 can display the user interface 2020 shown in (B) of FIG. 20, and the card control 2011C in the control 2011 shown in the user interface 2020 is used to display the picture card in the collected card set (that is, the picture card 421 in the user interface 420 shown in (B) of FIG. 4).
  • the electronic device 100 may receive a touch operation (such as a click operation) for the control 2011 in the user interface 2010 or the user interface 2020, and in response to the touch operation, display the specific content of the collected semantic convergence card set, for example, display the user interface shown in (C) of FIG. 20 (that is, the user interface 410 shown in (A) of FIG. 4).
  • the electronic device 100 may also receive a touch operation for the control 2011 in the user interface 2010 or the user interface 2020, for example, sliding left and right or sliding up and down; FIG. 20 takes the touch operation being sliding from right to left as an example. In response to the touch operation, the electronic device 100 displays another collected semantic convergence card set, for example, displays the user interface 2030 shown in (D) of FIG. 20.
  • the user interface 2030 is similar to the user interface 2010, except that the set of collected cards indicated by the control 2011 is different.
  • the characters included in the title 2031A in the control 2011 are the title "Wuhan Tourist Route" of the currently displayed card set, and the card control 2031B in the control 2011 is used to display text cards in the card set.
  • the electronic device 100 may receive a touch operation (eg, a click operation) for the card control 2011C (for displaying text cards) in the user interface 2010 shown in (A) of FIG. 20 , and in response to the touch operation, The text card is displayed, for example, the user interface 410 shown in (A) of FIG. 4 is displayed.
  • the electronic device 100 may receive a touch operation (such as a click operation) for the card control 2011C (for displaying picture cards) in the user interface 2020 shown in (B) of FIG. 20 , and respond to the touch operation , display the picture card, for example, display the user interface 420 shown in (B) of FIG. 4 .
  • that is, depending on the card currently displayed in the card control, the card displayed by the electronic device 100 in response to the user operation may be different.
  • the electronic device 100 may also receive a touch operation (for example, a click operation) on a card control for displaying a video card, and in response to the touch operation, display the video card, for example, display the user interface 430 shown in (C) of FIG. 4.
  • the user can view the semantic convergence card set again through the negative one screen application. See FIG. 21 for a specific example.
  • the electronic device 100 may display a user interface 2110 of the negative one screen application.
  • the user interface 2110 may include multiple functional controls (such as controls for functions such as scanning, ride codes, payment codes, and health codes), application usage cards, parking status cards, and controls 2111.
  • the control 2111 is similar to the control 2011 in the user interface 2010 shown in FIG. 20 (A), both are used to indicate a collection of collected cards, and are currently used to display text cards in the card set.
  • the control 2111 also includes a page turning control 2111A.
  • the electronic device 100 may receive a touch operation (eg, a click operation) for the page turning control 2111A, and in response to the touch operation, display the user interface 2120 shown in (B) of FIG. 21 .
  • the control 2111 in the user interface 2120 is similar to the control 2011 in the desktop 2020 shown in FIG. 20(B). They are both used to indicate a collection of collected cards and are currently used to display picture cards in the card set.
  • the electronic device 100 may receive a touch operation (such as a click operation) for the control 2111 in the user interface 2110 or the user interface 2120, and in response to the touch operation, display the specific content of the collected semantic convergence card set, for example, display the user interface shown in (C) of FIG. 21 (that is, the user interface 410 shown in (A) of FIG. 4).
  • the electronic device 100 may also receive a touch operation for the control 2111 in the user interface 2110 or the user interface 2120, for example, sliding left and right or sliding up and down, and in response to the touch operation, display another collected semantic convergence card set, for example, display the control 2011 in the user interface 2030 shown in (D) of FIG. 20 in the user interface of the negative one screen application.
  • the example shown in FIG. 21 is similar to the example shown in FIG. 20: depending on the card currently displayed by the card control, the card displayed by the electronic device 100 in response to the user operation may be different. For specific examples, see the example shown in FIG. 20; details are not repeated here.
  • Figure 20 takes the example of switching the cards displayed by the card control according to time.
  • Figure 21 takes the example of switching the cards displayed by the card control according to the touch operation on the page turning control.
  • the electronic device 100 can also switch the cards displayed by the card control in response to a sliding operation on the card control (such as sliding left and right or sliding up and down); this application does not limit the specific triggering method.
  • the electronic device 100 can also display the control 2011 shown in FIG. 20 and/or the control 2111 shown in FIG. 21 in the user interface of other applications (such as the homepage or favorites interface of the browser application); such a control can be understood, for example, as a widget indicating a collected card set.
  • the user can view the semantic convergence card set again through the gallery application. See Figure 22 for a specific example.
  • the electronic device 100 may display a user interface 2210 of the gallery application, and the user interface 2210 may include thumbnails of a plurality of pictures.
  • the electronic device 100 may receive a touch operation (for example, a click operation) on the thumbnail 2211 among the plurality of thumbnails, and in response to the touch operation, display the original picture corresponding to the thumbnail 2211, for example, as shown in (B) of FIG. 22 User interface 2220.
  • the picture 2221 in the user interface 2220 is used to indicate the collected card set.
  • the picture 2221 includes a title 2221A, an image 2221B and a QR code 2221C.
  • the characters included in the title 2221A are the title "Xi'an Tourist Route" of the collected card set, and the image 2221B is the cover of the collected card set (taking the first picture included in the picture card in the card set, that is, the picture content 4211 included in the picture card 421 in the user interface 420 shown in (B) of FIG. 4, as an example).
  • the QR code 2221C may include identification information of the collected card set to indicate the card set.
  • the function bar at the bottom of the user interface 2220 includes an image recognition control 2222.
  • the electronic device 100 can receive a touch operation (such as a click operation) for the image recognition control 2222, and in response to the touch operation, identify the picture 2221, for example, identify the QR code 2221C in the picture 2221.
  • the electronic device 100 may then display the recognition result, that is, the specific content of the semantic convergence card set indicated by the picture 2221, for example, display the user interface 410 shown in (C) of FIG. 22 (that is, the user interface 410 shown in (A) of FIG. 4).
  • the user can also use the scan function of the electronic device to scan the above picture 2221, for example, identify the QR code 2221C in it, and display the specific content of the recognized semantic convergence card set, for example, display the user interface 410 shown in (A) of FIG. 4. This application does not limit the specific method of identifying the picture.
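One plausible way for the QR code 2221C to carry identification information of the card set is a small URI-style payload that the scan or image-recognition function can parse back into a card set reference. The scheme "cardset://", CardSetRef, encodePayload, and decodePayload below are assumptions for illustration only, and a real implementation would additionally need a QR encoding library.

```kotlin
import java.net.URI
import java.net.URLDecoder
import java.net.URLEncoder

// Illustrative sketch: a URI-style payload that the QR code in the gallery picture could carry.
data class CardSetRef(val id: String, val title: String)

fun encodePayload(ref: CardSetRef): String =
    "cardset://open?id=${ref.id}&title=${URLEncoder.encode(ref.title, "UTF-8")}"

fun decodePayload(payload: String): CardSetRef? {
    val uri = URI(payload)
    if (uri.scheme != "cardset") return null
    val query = uri.rawQuery ?: return null
    val params = query.split("&").associate { it.substringBefore("=") to it.substringAfter("=") }
    val id = params["id"] ?: return null
    val title = URLDecoder.decode(params["title"] ?: "", "UTF-8")
    return CardSetRef(id, title)
}

fun main() {
    val payload = encodePayload(CardSetRef(id = "42", title = "Xi'an Tourist Route"))
    println(payload)
    println(decodePayload(payload)) // CardSetRef(id=42, title=Xi'an Tourist Route)
}
```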
  • the entry for secondary loading of the semantic convergence card set is not limited to the above example.
  • the entry for secondary loading of the semantic convergence card set can also be displayed in other applications such as Notepad. This application does not limit this.
  • the electronic device 100 can receive a touch operation (such as a click operation) for any web page card in the user interface 320 shown in (B) of FIG. 3, and in response to the touch operation, display the specific content of the web page indicated by the web page card. See FIG. 23 for a specific example.
  • the electronic device 100 can display the user interface 320 shown in (B) of FIG. 3 .
  • the web page list in the user interface 320 includes, for example, a web page card 322 (indicating web page 1), a web page card 323 (indicating web page 2), and a web page card 324 (indicating web page 3).
  • the electronic device 100 may receive a touch operation (such as a click operation) for the web page card 322, and in response to the touch operation, display the specific content of the web page 1 indicated by the web page card 322 (the title is "Xi'an Tourist Route Map", the website address is "website address 111", and the source is "source aaa"); for details, see the user interface 2300 shown in (B) of FIG. 23.
  • the user interface 2300 is similar to the user interface 600 shown in (B) of FIG. 6 , except that the text content 4121 in the user interface 600 is not highlighted in the user interface 2300 .
  • the card control 325 in the user interface 320 and the card control 2310 in the user interface 2300 are both used to trigger the display of a semantic convergence card set, but the semantic convergence card sets whose display they trigger can be different: the content included in the semantic convergence card set triggered by the card control 325 is extracted from at least one web page obtained by searching, while the content included in the semantic convergence card set triggered by the card control 2310 is extracted from the web page 1 that the user selects to view.
  • the electronic device 100 may receive a touch operation (such as a click operation) for the card control 2310 in the user interface 2300 shown in (B) of FIG. 23, and in response to the touch operation, display a semantic convergence card set. For details, see the user interfaces shown in FIG. 24.
  • FIG. 24 is similar to FIG. 4; both are used to display a semantic convergence card set. The difference is that the content included in the semantic convergence card set shown in FIG. 4 is extracted from the top three web pages 1, 2, and 3 obtained from the search, while the content included in the semantic convergence card set shown in FIG. 24 is extracted from the web page 1 that the user chooses to view.
  • when introducing FIG. 24 below, the differences between FIG. 24 and FIG. 4 will be mainly introduced; for other explanations, please refer to the description of FIG. 4.
  • the electronic device 100 may display the user interface 2410 of the browser.
  • the user interface 2410 may include the title 2411 of the web page 1 that the user chooses to view ("Xi'an Tourist Route Map") and the website address and source information 2412 (website address "website address 111", from source "source aaa"). The user interface 2410 is used to display the text card 2413 in the semantic convergence card set; the text card 2413 includes multiple text contents related to the search keywords in web page 1, such as the text content 2413A ("There are popular tourist routes") and the text content 2413B ("You must go to Qin Shihuang's Mausoleum when going to Xi'an...").
  • the electronic device 100 may receive a sliding operation for the text card 2413.
  • the sliding operation is sliding up and down or sliding left and right.
  • FIG. 24 takes the sliding operation as sliding from right to left as an example.
  • the electronic device 100 displays the user interface 2420 shown in (B) of FIG. 24 .
  • the user interface 2420 is similar to the user interface 2410 shown in (A) of FIG. 24; the difference is that the user interface 2420 is used to display the picture card 2421 in the semantic convergence card set, and the picture card 2421 includes picture content related to the search keywords in web page 1.
  • the electronic device 100 may receive a sliding operation for the picture card 2421.
  • the sliding operation is sliding up and down or sliding left and right.
  • FIG. 24 takes the sliding operation as sliding from right to left as an example.
  • the electronic device 100 displays the user interface 2430 shown in (C) of FIG. 24 .
  • the user interface 2430 is similar to the user interface 2410 shown in (A) of FIG. 24; the difference is that the user interface 2430 is used to display the video card 2431 in the semantic convergence card set, and the video card 2431 includes video content related to the search keywords in web page 1.
  • when the electronic device 100 displays the specific content of the web page 1 (for example, displays the user interface 2300 shown in (B) of FIG. 23), it can receive a touch operation (for example, a long press operation) on any content in the web page 1, and in response to the touch operation, select the content.
  • the electronic device 100 may display the user interface 2300 shown in (B) of FIG. 23 .
  • the user interface 2300 may include text 2320 ("Be sure to go to Qin Shihuang's Mausoleum when going to Xi'an... The ticket to Qin Shihuang's Mausoleum is 120 yuan... The fare to Qin Shihuang's Mausoleum is 20 yuan."), in which the text "Qin Shihuang's Mausoleum" is selected; the above touch operation can be understood as a user operation for selecting the text "Qin Shihuang's Mausoleum" in web page 1.
  • at this time, a function list 2330 may be displayed near the selected text "Qin Shihuang's Mausoleum" in the user interface 2300.
  • the function list 2330 may include options for multiple functions of the selected text "Mausoleum of Qin Shihuang", for example, options including a copy function, a search function, a save option 2330A, and an option to view more functions.
  • the electronic device 100 may receive a touch operation (such as a click operation) for the save option 2330A, and in response to the touch operation, add the above-mentioned selected text "Mausoleum of Qin Shihuang" and related text to a new card in the semantic convergence card set.
  • the user interface 2510 shown in (B) of FIG. 25A may be displayed.
  • the user interface 2510 may include title information 2511.
  • the title information 2511 includes the title "Xi'an Tourist Route Map", the website address "website address 111", and the source "source aaa" of the web page 1 currently viewed by the user, which may indicate that the semantic convergence card set displayed in the user interface 2510 is generated based on the content in web page 1.
  • User interface 2510 is used to display cards 2512 in a semantic convergence card set.
  • the page option 2513 in the user interface 2510 is used to indicate that the semantic convergence card set includes four cards and the currently displayed card 2512 is the fourth card among the four cards, wherein the first three of the four cards are, for example, the text card 2413, the picture card 2421, and the video card 2431 shown in FIG. 24.
  • Card 2512 may include a title ("Customized Card 2") and multiple text contents.
  • the multiple text contents include, for example, the above-mentioned selected text 2512A ("Mausoleum of Qin Shi Huang") and associated content 2512B ("Ticket to Qin Shi Huang's Mausoleum 120 yuan") and associated content 2512C ("The fare to Qin Shihuang's Mausoleum is 20 yuan").
  • associated content 2512B and associated content 2512C are content semantically related to text 2512A in web page 1.
  • the type of the associated content being text is taken as an example here; in specific implementation, it can also be of other types such as pictures and videos. This application does not limit the specific type of the associated content.
  • Card 2512 also displays multiple deletion options corresponding to multiple text contents. For example, deletion option 2512D is used to delete corresponding text 2512A.
  • the delete option 2512E is used to delete the corresponding associated content 2512B, and the delete option 2512F is used to delete the corresponding associated content 2512C.
  • the user interface 2510 also includes a confirmation control 2514 and a cancellation control 2515.
  • the confirmation control 2514 is used to save the currently displayed card 2512 into the semantic convergence card set, and the cancel control 2515 is used to cancel saving the currently displayed card 2512.
  • the electronic device 100 may receive a touch operation (for example, a click operation) for the delete option 2512D and, in response to the touch operation, delete the text 2512A in the card 2512; the electronic device 100 may then receive a touch operation (for example, a click operation) for the determination control 2514 and, in response to the touch operation, save the card 2512, at which time the user interface 2520 shown in (C) of FIG. 25A can be displayed.
  • the card 2512 in the user interface 2520 includes text content 2512G, and the text content 2512G includes the characters "ticket to Qin Shihuang's Mausoleum 120 yuan, fare to Qin Shihuang's Mausoleum 20 yuan"; that is, it includes the characters displayed in the above-mentioned associated content 2512B and associated content 2512C that have not been deleted, but does not include the characters displayed in the deleted text content 2512A.
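The association step, in which content related to the selected text is gathered from the web page, could in the simplest case just collect the sentences of the page that mention the selection, as a crude stand-in for semantic matching; the associatedContent function below illustrates this assumption.

```kotlin
// Illustrative sketch: collect the sentences of the page that mention the selected text,
// as a crude stand-in for finding semantically associated content.
fun associatedContent(pageText: String, selected: String): List<String> =
    pageText.split(".", "。", "…")
        .map { it.trim() }
        .filter { it.isNotEmpty() && it.contains(selected) && it != selected }

fun main() {
    val page = "Be sure to go to Qin Shihuang's Mausoleum when going to Xi'an. " +
        "The ticket to Qin Shihuang's Mausoleum is 120 yuan. " +
        "The fare to Qin Shihuang's Mausoleum is 20 yuan."
    println(associatedContent(page, "Qin Shihuang's Mausoleum"))
    // [Be sure to go to Qin Shihuang's Mausoleum when going to Xi'an,
    //  The ticket to Qin Shihuang's Mausoleum is 120 yuan,
    //  The fare to Qin Shihuang's Mausoleum is 20 yuan]
}
```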
  • the user interface 2520 shown in (C) of Figure 25A can also be replaced with the user interface 2530 shown in Figure 25B.
  • the card 2512 in the user interface 2530 includes the above-mentioned undeleted associated content 2512B and associated content 2512C, and does not include the deleted text content 2512A.
  • the electronic device 100 may display the user interface 2540 shown in FIG. 25C in response to a touch operation (eg, a click operation) acting on the associated content 2512B in the user interface 2530 shown in FIG. 25B.
  • the user interface 2540 is used to display the web page 1 to which the associated content 2512B belongs (the website address is "website address 111" and the source is "source aaa"). The text 2320 including the associated content 2512B ("The ticket to Qin Shihuang's Mausoleum is 120 yuan"), that is, "Be sure to go to Qin Shihuang's Mausoleum when going to Xi'an... The ticket to Qin Shihuang's Mausoleum is 120 yuan... The fare to Qin Shihuang's Mausoleum is 20 yuan.", is displayed in the user interface 2540, and the associated content 2512B is displayed prominently (for example, highlighted) in the user interface 2540.
  • the electronic device 100 can receive a touch operation (such as a sliding operation) for any content in the web page 1, and in response to the touch operation, select the content; for a specific example, see FIG. 26.
  • the electronic device 100 may display the user interface 2300 shown in (B) of FIG. 23 .
  • the user interface 2300 may include a picture 2340A, a title 2340B of the picture 2340A ("Popular tourist attractions in Xi'an"), and a description 2340C of the picture 2340A ("The above picture mainly shows.").
  • the electronic device 100 can receive a sliding operation that acts on the picture 2340A.
  • the sliding operation is a single-finger sliding, a two-finger sliding, or a knuckle sliding.
  • Figure 26 takes the sliding operation as a sliding operation in which the knuckles draw a circle around the picture 2340A as an example.
  • the sliding operation can be understood as a screenshot operation for picture 2340A, and the sliding operation can also be understood as a user operation for selecting picture 2340A in web page 1.
  • the electronic device may display the user interface 2610 shown in (B) of FIG. 26 in response to the sliding operation.
  • the user interface 2610 may include a save control 2611, an editing interface 2612, and a function bar 2613 at the bottom, where the editing interface 2612 is used to display the user interface on which the user operation acts, that is, the user interface 2300 shown in (A) of FIG. 26.
  • An editing box 2612A is displayed in the editing interface 2612, and a picture 2340A selected by the user is displayed in the editing box 2612A.
  • the user can adjust the shape, size, and position of the edit box 2612A to change the selected content (that is, the content displayed in the edit box 2612A).
  • the function bar 2613 includes, for example, options for sharing functions, graphic options 2613A, rectangular options 2613B, save options 2613C, and options for viewing more functions.
  • the graphic option 2613A is used to set the shape of the edit box 2612A to a free graphic; after this function is set, the user can operate the edit box 2612A to change it into any regular or irregular shape.
  • Rectangle option 2613B is used to set the shape of the edit box 2612A to a rectangle.
  • the electronic device 100 may receive a touch operation (such as a click operation) for the save control 2611 or the save option 2613C, and in response to the touch operation, save the picture 2340A displayed in the edit box 2612A and its related content to the card 2512 in the user interface 2520 shown in (C) of FIG. 25A; at this time, the user interface 2620 shown in (C) of FIG. 26 may be displayed.
  • the card 2512 in the user interface 2620 includes the previously saved text content 2512G, and also includes a content list 2512H of content to be selected; the content list 2512H includes multiple contents and multiple deletion options.
  • the multiple contents include, for example, the picture 2340A selected by the user, the title 2340B of the picture 2340A in the web page 1, and the description 2340C of the picture 2340A in the web page 1.
  • Each deletion option corresponds to a content and is used to trigger deletion of the content.
  • Figure 25A takes the example of saving the content selected by the user to a newly created custom card in the semantic convergence card set.
  • Figure 26 takes the example of saving the content selected by the user to an existing custom card in the semantic convergence card set.
  • the card to save the selected content can also be determined based on the type of content selected by the user. For example, when the content selected by the user is text, the selected content and associated content are saved to the text card. This application does not limit the card used to save selected content.
  • the user can also choose a card to save the selected content and associated content.
  • the electronic device 100 may receive a touch operation (such as a drag operation) for any content in the web page 1, and in response to the touch operation, display a selection interface for the display position of the content; see FIG. 27A for a specific example.
  • the electronic device 100 may display the user interface 2300 shown in (B) of FIG. 23 .
  • the user interface 2300 includes a card control 2310 and a picture 2340A.
  • the electronic device 100 can receive a drag operation acting on the picture 2340A.
  • the drag operation is, for example, a user operation of dragging to a specific position; FIG. 27A takes, as an example, a user operation of dragging the picture 2340A to the position of the card control 2310, and the drag operation can be understood as a user operation for selecting the picture 2340A in web page 1.
  • in response to the drag operation, the electronic device 100 displays the user interface 2700 shown in (B) of FIG. 27A.
  • the user interface 2700 is used to select a card to save the selected content and associated content.
  • the prompt box 2710 in the user interface 2700 includes a title ("Save to Card") and multiple card options.
  • the multiple card options include, for example, a text card option 2711 and options 2712 to 2715, where the option 2714 indicates the custom card 2, and the custom card 2 is the card 2512 in the user interface 2520 shown in (C) of FIG. 25A.
  • the selection control shown in any of the above options is used to select whether to save the selection to the card corresponding to that option or to cancel the selection.
  • the selection controls displayed in options 2711, 2712, 2713, and 2715 are all in an unselected state, and the selection control 2714A displayed in option 2714 is in a selected state, which can represent that the selected content is currently selected to be saved to the card indicated by the option 2714.
  • the prompt box 2710 also includes a confirm control 2716 and a cancel control 2717.
  • the confirm control 2716 is used to save the selected content and related content to the above-mentioned selected card, and the cancel control 2717 is used to cancel saving the selected content and related content to the semantic convergence card set.
  • the electronic device 100 may receive a touch operation (such as a click operation) for the determination control 2716, and in response to the touch operation, save the picture 2340A and related content to the custom card 2 indicated by the option 2714; at this time, for example, the user interface 2620 shown in (C) of FIG. 26 is displayed.
  • the electronic device 100 may also, in response to a user operation of dragging the picture 2340A in the user interface 2300 shown in (A) of FIG. 27A to the location of the card control 2310, display the user interface 2610 shown in (B) of FIG. 26, or directly display the user interface 2620 shown in (C) of FIG. 26.
  • the electronic device 100 may also display the user interface 2700 in response to a touch operation (such as a click operation) on the save option 2330A in the user interface 2300 shown in (A) of FIG. 25A.
  • the electronic device 100 may also display the user interface 2700 shown in (B) of FIG. 27A in response to a touch operation (such as a click operation) on the save control 2611 or the save option 2613C in the user interface 2610 shown in (B) of FIG. 26.
  • the electronic device 100 may also respond to other forms of operations (such as voice input) for the picture 2340A in the user interface 2300 shown in (A) of FIG. 26.
  • this application does not limit the form of user operations.
  • Figure 25A, Figure 26 and Figure 27A show three user operations for selecting/selecting content in a web page.
  • the user operations shown in Figure 26 or Figure 27A can also be used to select text content in web page 1.
  • In other examples, the content in the web page can also be selected through voice input. This application does not limit the specific user operation used to select the content in the web page.
  • FIG. 25A takes the example of generating four cards after the electronic device 100 receives a user operation for selecting content in a web page.
  • the four cards are the text card 2413, the picture card 2421 and the video card 2431 shown in FIG. 24, and the custom card 2512 shown in (B) of Figure 25A.
  • the electronic device 100 may only generate the custom card 2512 shown in (B) of Figure 25A without generating the text card 2413, the picture card 2421 and the video card 2431 shown in Figure 24; this application does not limit this.
  • the electronic device 100 may also generate multiple cards. These multiple cards are used to display the selected content and the content in the web page related to the selected content (referred to as associated content). Optionally, these multiple cards are used to display different types of web page content. See Figure 27B for specific examples.
  • FIG. 27B illustrates the above-mentioned selected content and the above-mentioned associated content as the content included in the card 2512 in the user interface 2620 shown in (C) of FIG. 26 as an example.
  • the above-mentioned selected content and the above-mentioned associated content are specifically: the text content 2512G in the card 2512 (including the text content 2512B ("ticket to Qin Shihuang's Mausoleum is 120 yuan") and the text content 2512C ("fare to Qin Shihuang's Mausoleum is 20 yuan")), the title 2340B, the picture 2340A and the description 2340C.
  • the electronic device 100 can generate the text card 2811 in the user interface 2810 shown in (A) of FIG. 27B.
  • the text card 2811 is used to display text content: text content 2512B, text content 2512C, title 2340B, and description 2340C.
  • the picture card 2821 is used to display picture content: picture 2340A.
  • The search method involved in this application is introduced below. This method can be applied to the search system 10 shown in Figure 1A, and can also be applied to the search system 10 shown in Figure 1B.
  • Figure 28 is a schematic flowchart of a search method provided by an embodiment of the present application.
  • the method may include but is not limited to the following steps:
  • S101 The electronic device obtains the first search word input by the user.
  • the electronic device may receive a first search term input by a user (which may also be referred to as a first keyword or a search keyword) and receive a user operation for triggering a search function for the first keyword, and the electronic device may obtain the first keyword input by the user based on the user operation.
  • the electronic device 100 may receive the keyword "Xi'an travel route" input into the search bar 311 in the user interface 310 shown in (A) of FIG. 3 and then receive a touch operation (for example, a click operation) for the search control 312 in the user interface 310; the electronic device can obtain the keyword "Xi'an travel route" input by the user based on the touch operation.
  • This application does not limit the form of the first keyword, such as but not limited to text, picture, audio, etc.
  • S102 The electronic device sends a first search request to the network device.
  • the electronic device may send a first search request to the network device, where the first search request is used to request to obtain search results related to the first keyword.
  • the first search request includes, but is not limited to, the above-mentioned first keyword.
  • S103 The network device obtains the first web page collection.
  • the network device may obtain at least one web page related to the first keyword, that is, the first set of web pages. For example, the network device may perform semantic analysis on the first keyword and obtain at least one web page related to the first keyword (ie, the first set of web pages) from the web page database.
  • S104 The network device sends the first set of web pages to the electronic device.
  • S105 The electronic device displays the first web page collection.
  • the first set of web pages described in S103-S105 includes the web pages indicated by the multiple web page cards in the user interface 320 shown in (B) of Figure 3: web page 1 (titled "Xi'an Travel Route Map") indicated by the web page card 322, web page 2 (titled "Xi'an Travel Route - Guide") indicated by the web page card 323, and web page 3 (titled "Xi'an Travel Guide") indicated by the web page card 324, wherein the first web page set is related to the first keyword, and the first keyword is "Xi'an travel route" displayed in the search bar 311 included in the user interface 310 shown in (A) of FIG. 3.
  • the first set of web pages displayed by the electronic device is sorted.
  • the web pages in the first set of web pages that are more relevant to the first keyword are given higher priority in display position.
  • the first keyword is "Xi'an travel route" displayed in the search bar 311 included in the user interface 310 shown in Figure 3 (A)
  • the user interface 320 shown in (B) of Figure 3 shows the summary information (which can be called web page cards) of the top three web pages in the first web page set. Since the correlation between these three web pages and the first keyword, from high to low, is: web page 1 (titled "Xi'an Travel Route Map"), web page 2 (titled "Xi'an Travel Route - Guide") and web page 3 (titled "Xi'an Travel Guide"), in the user interface 320, in order from top to bottom, there are: a web page card 322 indicating web page 1, a web page card 323 indicating web page 2, and a web page card 324 indicating web page 3.
  • S106 The electronic device receives the first user operation.
  • the form of the first user operation may be, but is not limited to, a touch operation (such as a click operation), voice input, a motion gesture (such as a gesture), brain waves, etc. For example, the first user operation is as shown in FIG. 3
  • S107 The electronic device obtains the first content set from the web page contents included in the first web page set.
  • the electronic device may respond to the first user operation and filter out webpage content that meets the user's search intention (ie, the first content collection) from the webpage content included in the first webpage collection.
  • the content included in the first content set is related to the first keyword.
  • the content included in the first content set is content that has a strong correlation with the first keyword among the web content included in the first web page set.
  • the electronic device can extract the contents of the top N web pages (which may be called the second content set) from the first set of web pages, where N is a positive integer.
  • the electronic device can silently load the N web pages in the background, obtain the hypertext markup language (HTML) code of the N web pages through JavaScript (JS) script injection, and then parse the HTML code of the N web pages to obtain various types of content such as text, pictures, videos, audios, documents, etc. in these N web pages. These obtained contents can constitute the second content set.
  • when the electronic device obtains non-text web page content such as pictures, videos, audios and documents, it can also obtain text related to these web page contents, such as names, titles, comments, etc.
  • when the electronic device cannot obtain from the web page the original files of web page content that needs to be downloaded, such as videos, audios, documents, etc., it can obtain the address information for viewing the original files (such as the hyperlink of the original file, the hyperlink of the web page to which the web page content belongs, etc.), and/or identification information used to indicate the web page content (such as title, name, cover image, thumbnail, etc.).
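  • As a minimal illustrative sketch of the parsing step above (not the implementation of this application), the following Python snippet parses the HTML of one of the top-N web pages into typed items for the second content set; the HTML is assumed to have already been obtained (for example, by silently loading the page and injecting a JS script that returns the page's HTML), and the bs4 package, function name and field names are assumptions for illustration only.

```python
# A sketch only: parse the HTML of one top-N web page into typed content items.
from bs4 import BeautifulSoup

def extract_typed_content(html: str, page_url: str) -> dict:
    soup = BeautifulSoup(html, "html.parser")

    # Text content: visible paragraph-level text.
    texts = [p.get_text(strip=True) for p in soup.find_all("p") if p.get_text(strip=True)]

    # Picture content: image address plus related text (alt/title) when available.
    pictures = [{"src": img.get("src"), "alt": img.get("alt", "")}
                for img in soup.find_all("img") if img.get("src")]

    # Video content: when the original file cannot be downloaded, keep address
    # information (hyperlink) and identification information (title) instead.
    videos = [{"src": v.get("src"), "title": v.get("title", ""), "page": page_url}
              for v in soup.find_all(["video", "source"]) if v.get("src")]

    return {"text": texts, "picture": pictures, "video": videos, "page": page_url}
```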
  • the second content collection may include a text collection, a picture collection, and a video collection, wherein the text collection includes q pieces of text content: text 1, text 2, ..., text q, and the picture collection includes m picture content: picture 1, picture 2, ..., picture m, the video collection includes n video contents: video 1, video 2, ..., video n, q, m and n are positive integers greater than 1.
  • the electronic device may calculate the similarity between each content in the second content set and the first keyword.
  • calculating the similarity may include two steps:
  • the electronic device can convert the first keyword into an M-dimensional first feature vector, and convert each content in the second content collection into an M-dimensional second feature vector, where M is a positive integer.
  • the electronic device can first convert the first keyword into text, and then convert the text into the first feature vector.
  • for the text content in the second content set, the electronic device can directly convert the text content into the second feature vector.
  • for the non-text content such as pictures, videos, audios and documents in the second content set, the electronic device may first convert the non-text content into text, and then convert the text into the second feature vector.
  • the text obtained by converting the non-text content includes, for example, but is not limited to: names, titles, comments and other texts related to the non-text content, as well as text obtained by parsing the non-text content.
  • the electronic device can implement vector conversion operations based on the vector model.
  • Figure 29 takes M = 128 as an example for illustration, and any one dimension of a vector is represented by g_i (i is a positive integer less than or equal to 128).
  • the first feature vector (also called the keyword vector) obtained by inputting the first keyword into the vector model can be characterized as (g_1, g_2, ..., g_128)_p.
  • similarly, the second feature vectors obtained by inputting text 1, text 2, ..., and text q in the text set into the vector model can be called text vectors; the second feature vectors obtained by inputting picture 1, picture 2, ..., and picture m in the picture set into the vector model can be called picture vectors; and the second feature vectors obtained by inputting video 1, video 2, ..., and video n in the video set into the vector model can be called video vectors, each being a 128-dimensional vector of the same form as the keyword vector.
  • the above text vectors, picture vectors and video vectors may constitute a second feature vector set.
  • the electronic device can calculate the similarity between each second feature vector and the first feature vector.
  • the electronic device can obtain the similarity between a second feature vector and the first feature vector (g_1, g_2, ..., g_128)_p through dot product calculation.
  • taking the text vector converted from text 1 as an example, the similarity between that text vector and the first feature vector is their dot product; the other similarities are calculated similarly.
  • the similarities corresponding to text 1, text 2, ..., and text q in the text set can be called text similarities; the similarities corresponding to picture 1, picture 2, ..., and picture m in the picture set can be called picture similarities; and the similarities corresponding to video 1, video 2, ..., and video n in the video set can be called video similarities.
  • the above text similarity, picture similarity and video similarity can constitute a similarity set.
  • the electronic device can filter out the webpage content corresponding to the second feature vector whose similarity to the first feature vector is greater than or equal to the preset threshold, and the filtered content can constitute the first content collection.
  • the electronic device can filter out similarities that are greater than or equal to a preset threshold from the similarity set, and web content corresponding to these similarities can constitute the first content set.
  • for example, assume that in the text similarity set, the text similarities corresponding to text 1, text 3 and text 4 are greater than or equal to the preset threshold; therefore, text 1, text 3 and text 4 can constitute a related text set.
  • in the picture similarity set, the picture similarities corresponding to picture 1 and picture 2 are greater than or equal to the preset threshold; therefore, picture 1 and picture 2 can constitute a related picture set.
  • in the video similarity set, the video similarities corresponding to video 1 and video 3 are greater than or equal to the preset threshold; therefore, video 1 and video 3 can constitute a related video set.
  • the above related text collection, related picture collection and related video collection may constitute the first content collection.
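  • The following is a minimal sketch of this filtering step under stated assumptions: embed() is only a placeholder standing in for the 128-dimensional vector model, non-text content is assumed to be represented by its related text (name, title, comment or parsed text), and all names are illustrative rather than part of this application.

```python
# A sketch only: score each content against the first keyword by dot product and
# keep content whose similarity reaches the preset threshold.
import numpy as np

def embed(text: str, dim: int = 128) -> np.ndarray:
    # Placeholder for the vector model: deterministic pseudo-random unit vector.
    rng = np.random.default_rng(sum(ord(c) for c in text) % (2**32))
    v = rng.standard_normal(dim)
    return v / np.linalg.norm(v)

def build_first_content_set(keyword: str, second_content_set: dict[str, list[str]],
                            threshold: float = 0.5) -> dict[str, list[tuple[str, float]]]:
    keyword_vec = embed(keyword)                  # the first feature vector (g_1, ..., g_128)_p
    first_content_set: dict[str, list[tuple[str, float]]] = {}
    for content_type, items in second_content_set.items():   # "text", "picture", "video", ...
        kept = []
        for item in items:
            similarity = float(np.dot(keyword_vec, embed(item)))   # dot-product similarity
            if similarity >= threshold:
                kept.append((item, similarity))
        first_content_set[content_type] = kept    # related text / picture / video set
    return first_content_set
```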
  • The following is a scenario example of the above-described method of obtaining the first content set. Assume that the first keyword is the keyword "Xi'an travel route" displayed in the search bar 311 in the user interface 310 shown in (A) of Figure 3.
  • the text set in the second content set can include text 1 ("Popular tourist routes are..."), text 2 ("Xi'an, called Chang'an in ancient times..."), text 3 ("Attraction route one: Shaanxi History Museum - Bell and Drum Tower...") and text 4 ("Travel route: Xicang - Muslim Street...").
  • the picture collection may include picture 1 (pictures of scenic spots in the travel route), picture 2 (pictures of scenic spots in the travel route), and picture 3 (pictures of special food).
  • the video collection may include Video 1 (travel guide video, titled “Xi'an Travel Guide Video"), Video 2 (tourist interview video, titled “Tourists like Xi'an”) and Video 3 (travel guide video, titled "Xi'an 3 Day trip guide”).
  • the first content set includes the text content 4121 (used to display text 1), the text content 4122 (used to display text 4) and the text content 4123 (used to display text 3) in the user interface 410 shown in (A) of Figure 4.
  • the method of obtaining the first content collection is not limited to the above example.
  • the electronic device may also calculate the similarity between each content in the second content set and the first keyword, then compare the similarities of web page contents of the same type belonging to the same web page, and determine the web page contents whose similarities rank in the top M (M is a positive integer). These determined web page contents can constitute the first content set. Optionally, different types correspond to different values of M. For example, assume that the second content set includes text 1, text 2, picture 1, picture 2 and picture 3 in web page 1; the similarity between text 1 and the first keyword is higher than the similarity between text 2 and the first keyword, and picture 1 to picture 3, in order of similarity to the first keyword from high to low, are: picture 3, picture 1, picture 2.
  • alternatively, the method may be to compare the similarities of web page contents of the same type belonging to the same web page, and determine the web page contents whose similarities are greater than or equal to a preset threshold; these determined web page contents may constitute the first content set. This application does not limit this.
  • the method further includes: the electronic device receives a user operation, and in response to the user operation, selects a second set of web pages from the first set of web pages.
  • the electronic device can select the web card 511 and the web card 513 from the web card 511 , the web card 512 and the web card 513 in the user interface 510 shown in (A) of FIG. 5A , Therefore, the second web page set is web page 1 indicated by web page card 511 (titled “Xi’an Travel Route Map”) and web page 3 indicated by web page card 513 (titled “Xi’an Travel Guide”).
  • the method of obtaining the first content set is similar to the method of obtaining the first content set described above.
  • the difference is that the top N web pages in the first web page set need to be replaced with the second web page set. That is, the web page content included in the second content set is the content of the web pages in the second web page set. That is to say, the first content set is obtained according to the second web page set.
  • S108 The electronic device generates a first card set according to the first content set.
  • S109 The electronic device displays the first card set.
  • the first card set may include at least one card.
  • any card included in the first card set may include one type of web page content; that is, different cards are used to display different types of web page content, for example, text, pictures, videos, audio, documents and other web page contents are displayed on different cards respectively.
  • the division types can be more detailed. For example, static pictures and dynamic pictures are displayed on different cards.
  • the division type can be more coarse, for example, files such as video, audio, and documents are displayed on the same card.
  • the first content set is as exemplified in S107: including text 1 ("Popular tourist routes include..."), text 3 ("Attraction route one: Shaanxi History Museum - Bell and Drum Tower..."), text 4 ("Travel route: Xicang - Muslim Street..."), picture 1 (a picture of scenic spots on the travel route), picture 2 (a picture of scenic spots on the travel route), video 1 (a travel guide video, titled "Xi'an Travel Guide Video") and video 3 (a travel guide video, titled "Xi'an 3-Day Travel Guide").
  • the first card set described in S108-S109 may include three cards, and the three cards include text content, picture content and video content respectively.
  • For the card including text content, see the text card 412 in the user interface 410 shown in (A) of FIG. 4.
  • For the card including picture content, see the picture card 421 in the user interface 420 shown in (B) of FIG. 4.
  • For the card including video content, see the video card 431 in the user interface 430 shown in (C) of FIG. 4.
  • any card included in the first card set may include the content of one web page, that is, the content of different web pages is displayed on different cards.
  • the first content set is as exemplified in S107: including text 1 (belonging to web page 1), text 3 (belonging to web page 2), text 4 (belonging to web page 3), picture 1 (belonging to web page 1), picture 2 (belonging to web page 2), video 1 (belonging to web page 2) and video 3 (belonging to web page 1).
  • the first card set described in S108-S109 may include three cards, and the three cards include the contents of webpage 1, webpage 2 and webpage 3 respectively.
  • the card including the content of web page 1 specifically includes text content 4121 (used to display text 1) in the user interface 410 shown in FIG. 4(A), and picture content in the user interface 420 shown in FIG. 4(B). 4211 (for displaying picture 1), and video content 4312 (for displaying video 3) in the user interface 430 shown in (C) of FIG. 4 .
  • the card including the content of web page 2 specifically includes text content 4123 in the user interface 410 (used to display text 3), pictures 4212 in the user interface 420 (used to display picture 2), and video content 4311 in the user interface 430 (used in To display video 1).
  • the card including the content of web page 3 specifically includes text content 4122 in user interface 410 (used to display text 4). This application does not limit the classification method of cards.
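  • A minimal sketch of the two card-classification rules described above (one card per content type, or one card per source web page) is given below; the "type" and "page" item fields are illustrative assumptions.

```python
# A sketch only: group the first content set into cards by content type or by web page.
from collections import defaultdict

def group_into_cards(first_content_set: list[dict], by: str = "type") -> dict[str, list[dict]]:
    key = "type" if by == "type" else "page"
    cards: dict[str, list[dict]] = defaultdict(list)
    for item in first_content_set:      # e.g. {"type": "text", "page": "web page 1", "value": ...}
        cards[item[key]].append(item)
    return dict(cards)
```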
  • when the electronic device displays non-text web page content such as pictures, videos, audios and documents in the first card set, it can also display text related to these web page contents, such as names, titles, comments, etc.
  • the electronic device can display identification information of these web content in the first card set to indicate the web content.
  • the identification information is such as but not limited to titles, names, cover images, thumbnails, etc.
  • the first card set described in S108-S109 includes the video card 431 in the user interface 430 shown in (C) of Figure 4, and the video content 4311 and the video content 4312 in the video card 431 are displayed in the form of a cover image.
  • the electronic device can determine the display order of the web page content based on the correlation between the web page content in the card and the first keyword (for example, characterized by the similarity in S107); for example, the stronger the correlation, the higher the priority of the display order of the web page content in the card.
  • the first card set described in S108-S109 includes the text card 412 in the user interface 410 shown in (A) of FIG. 4 .
  • the multiple text contents displayed in the text card 412 are arranged from top to bottom as follows: text content 4121 ("The popular tourist routes are..."), text content 4122 ("Travel route: Xicang - Muslim Street...") and text content 4123 ("Attraction route one: Shaanxi History Museum - Bell and Drum Tower...").
  • the electronic device can determine the display order of the web page content according to the display order of the web page to which the web page content in the card belongs. Optionally, the higher the display order of the web page to which the web page content belongs, the higher the priority of the display order of the web page content in the card.
  • the display order of a web page is, for example, determined based on the correlation between the web page and the first keyword: the stronger the correlation, the higher the priority of the web page's display position in the web page list.
  • the first card set described in S108-S109 includes the picture card 421 included in the user interface 420 shown in (B) of FIG. 4 .
  • the first card set described in S108-S109 includes the text card 521 in the user interface 520 shown in (B) of Figure 5A.
  • in the web page list included in the user interface 320 shown in (B) of FIG. 3, in order from top to bottom, there are a web page card 322 indicating web page 1, a web page card 323 indicating web page 2, and a web page card 324 indicating web page 3; therefore, in the text card 521, the text content 4121 (used to display text 1) is displayed above the text content 4122 (used to display text 4).
  • the electronic device may determine the display order of the multiple web page contents based on both the display order of the web page to which the web page content in the card belongs and the correlation between the web page content and the first keyword (for example, represented by the similarity in S107).
  • the higher the priority of the display order of the web page to which the content belongs, the higher the priority of the display order of the web page content in the card.
  • for web page contents belonging to the same web page, the stronger the correlation with the first keyword, the higher the priority of the display order of the web page content in the card.
  • the first content set includes: content 1 in web page 1 (similarity to the first keyword is 0.7), content 2 (similarity to the first keyword is 0.8), content 3 in web page 2 ( The similarity to the first keyword is 0.5), and the content 4 in web page 3 (the similarity to the first keyword is 0.9).
  • These contents are all displayed on card 1 in the semantic convergence card set.
  • assume that the display order of the web pages from top to bottom is: web page 1, web page 2 and web page 3; since the similarity corresponding to content 2 in web page 1 is greater than the similarity corresponding to content 1, the display order in card 1 from top to bottom is: content 2, content 1, content 3, content 4.
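  • A minimal sketch of this combined ordering rule is given below, reusing the worked example above; the field names are illustrative assumptions.

```python
# A sketch only: sort card contents by web page display order, then by similarity.
def sort_card_contents(contents: list[dict], page_rank: dict[str, int]) -> list[dict]:
    # page_rank maps a web page to its position in the web page list (0 = top).
    return sorted(contents, key=lambda c: (page_rank[c["page"]], -c["similarity"]))

contents = [
    {"name": "content 1", "page": "web page 1", "similarity": 0.7},
    {"name": "content 2", "page": "web page 1", "similarity": 0.8},
    {"name": "content 3", "page": "web page 2", "similarity": 0.5},
    {"name": "content 4", "page": "web page 3", "similarity": 0.9},
]
order = sort_card_contents(contents, {"web page 1": 0, "web page 2": 1, "web page 3": 2})
# -> content 2, content 1, content 3, content 4 (top to bottom in card 1)
```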
  • before S108, the method may also include: the electronic device receives a user operation and, in response to the user operation, determines the cards included in the first card set, such as but not limited to the number and type of cards and the amount of web page content included in each card. For example, in the embodiment shown in FIG. 5B, the electronic device determines that the first card set includes a text card, a picture card and a video card, where the text card includes at most 3 text contents, the picture card includes at most 2 picture contents, and the video card includes at most 2 video contents.
  • the method further includes: when the electronic device displays the first card set, it can receive a user operation for any content in the first card set and, in response to the user operation, display the detailed content of the web page to which the content belongs.
  • the electronic device can display webpage content related to the content.
  • the webpage content can be positioned and displayed in the webpage to which the webpage content belongs.
  • in the interface displaying the web page, the electronic device can display the web page content prominently (for example, highlighted).
  • the electronic device may display the user interface 600 shown in (B) of FIG. 6 in response to a touch operation on the text content 4121 included in the text card 412 in the user interface 410 shown in (A) of FIG. 6.
  • the electronic device 100 may display the user interface 2540 shown in FIG. 25C in response to a touch operation on the associated content 2512B in the user interface 2530 shown in FIG. 25B .
  • the electronic device can jump to the content page of the web page for display based on the recorded address information of the web page (such as a uniform resource locator (URL)).
  • the method further includes: when the electronic device displays the first card set, it may receive a user operation for any card in the first card set, and display other content in the card in response to the user operation.
  • the card 1321 in the user interface 1320 shown in (C) of FIG. 13 displays the picture content 4211 and the picture content 4212.
  • the electronic device may receive a sliding operation for the card 1321 in the user interface 1320 shown in (C) of FIG. 13 and, in response to the sliding operation, display other content in the card 1321, for example, display the user interface 1320 shown in (D) of Figure 13.
  • the card 1321 in the user interface 1320 shown in (D) of Figure 13 displays the text content 4121, the text content 4122 and the text content 4123.
  • S110 The electronic device receives the second user operation.
  • S111 The electronic device modifies the first card set.
  • the electronic device may modify the first card set in response to the second user operation and display the modified first card set.
  • the electronic device may delete any card in the first card set in response to a second user operation on the card in the first card set.
  • the electronic device may delete the text card 412 in response to a drag operation on the text card 412 in the user interface 410 shown in (A) of FIG. 7A, and at this time, the electronic device may display the user interface 700 shown in (B) of FIG. 7A.
  • the electronic device may delete the text card 412 corresponding to the delete control 4124 in response to a click operation on the delete control 4124 in the user interface 410 shown in (A) of FIG. 7B; in this case, the user interface 700 shown in (B) of FIG. 7B is displayed.
  • the electronic device may adjust a display position of any card in the first card set in response to a second user operation on any card in the first card set. For example, when the electronic device displays the user interface 410 shown in FIG. 13 (A), it may receive a drag operation for the text card 412 in the user interface 410. For example, the drag operation is dragging left and right or dragging up and down. And when the user's finger leaves the screen, the electronic device also displays the picture card 421 in the user interface 1310 shown in FIG. 13(B). In response to the drag operation, the electronic device switches the display positions of the text card 412 and the picture card 421.
  • the electronic device may, in response to a second user operation on any card in the first card set, merge the card and other cards into one card.
  • the electronic device can receive a drag operation on the text card 412 in the user interface 410 shown in (A) of FIG. 13, and when the user's finger leaves the screen while the electronic device is also displaying the picture card 421 in the user interface 1310 shown in (B) of FIG. 13, the electronic device responds to the drag operation and merges the text card 412 and the picture card 421.
  • the merged card can be seen as the card 1321 in the user interface 1320 shown in (C) and (D) of Figure 13.
  • the electronic device may display a user interface for editing any card in the first card set in response to a user operation on the card.
  • the electronic device may display the editing interface of the text card 412, namely the user interface 800 shown in (B) of FIG. 8, in response to a long press operation on the text card 412 in the user interface 410 shown in (A) of FIG. 8.
  • the electronic device may, in response to a second user operation on any content in any card in the first card set, display an editing interface for the content, and the user may edit the content (such as adding and deleting) based on the editing interface.
  • the electronic device can receive characters input by the user and add them to the text content 4121 .
  • the electronic device may, in response to a second user operation on any content in any card in the first card set, delete the content in the first card set. For example, in the embodiment shown in FIG. 10, the electronic device may, in response to a click operation on the delete control in the user interface 800 shown in (A) of FIG. 10, delete the text content 4121 corresponding to that delete control in the text card 810. At this time, the user interface 1000 shown in (B) of Figure 10 can be displayed.
  • the electronic device may, in response to a second user operation on any content in any card in the first card set, adjust the display position of the content in the card. For example, in the embodiment shown in FIG. 11, the electronic device may switch the display positions of the text content 4121 and the text content 4122 in response to a drag operation on the text content 4122 in the user interface 800 shown in (A) of FIG. 11.
  • the electronic device may, in response to a second user operation on any content in any card in the first card set, adjust the display card of the content (that is, the card used to display the web page content), which can also be understood as moving the web page content to another card selected by the user for display, or adjusting the display position of the content in the first card set.
  • the electronic device may receive a drag operation for the text content 4121 in the text card 412 included in the user interface 1210 shown in (A) of FIG. 12, and when the user's finger leaves the screen, the electronic device also displays the picture card 421 in the user interface 1220 shown in (B) of FIG. 12.
  • in this case, the electronic device moves the text content 4121 to the picture card 421 for display (that is, the display card of the text content 4121 is adjusted from the text card 412 to the picture card 421).
  • the electronic device may create a new card (which may be called a custom card) in the first card set in response to a second user operation on the first card set.
  • the electronic device may create a new card in response to a click operation on the new control 415 in the user interface 410 shown in FIG. 14(A) , as shown in FIG. 14(B) .
  • the electronic device may add the web page content included in the first card set to the customized card in response to the user operation.
  • Adding methods may include, but are not limited to, the following three situations:
  • the electronic device can add all content included in the card selected by the user to the customized card.
  • the text card corresponding to option 1513A and the picture card corresponding to option 1514A in the user interface 1510 shown in (A) of FIG. 15 have been selected by the user, and the electronic device can add the contents included in the text card 412 in the user interface 410 shown in (A) of Figure 4 and in the picture card 421 in the user interface 420 shown in (B) of Figure 4 to the custom card.
  • the custom card is the card 1410 in the user interface 1520 shown in (B) of Figure 15.
  • the electronic device may add content from the first set of cards selected by the user to the custom card.
  • the text content 4121 corresponding to option 1613A and the picture content 4211 corresponding to option 1614A in the user interface 1610 shown in (A) of Figure 16 have been selected by the user, and the electronic device can add the text content 4121 and the picture content 4211 to the custom card.
  • the custom card is the card 1410 in the user interface 1620 shown in (B) of Figure 16 .
  • the electronic device may move the content into the custom card in response to a second user operation on any content in any card in the first set of cards.
  • the electronic device may move the text content 4121 into the custom card in response to a drag operation on the text content 4121 in the text card 412 included in the user interface 1210 shown in (A) of FIG. 17A.
  • the custom card is the card 1410 in the user interface 1710 shown in (B) of FIG. 17A .
  • S112 The electronic device saves the information of the first card set.
  • the electronic device may save the information of the first card set in response to user operation.
  • the electronic device may save the information of the first card set in response to a touch operation (eg, a click operation) on the save control 414 in the user interface 410 shown in (A) of FIG. 4 .
  • the electronic device can save the information of the first card set into the memory of the electronic device, and subsequently obtain the information of the first card set from the memory for secondary loading of the first card set.
  • the electronic device can send the information of the first card set to the cloud server for storage, and subsequently obtain the information of the first card set from the cloud server for secondary loading of the first card set.
  • the information of the first card set stored by the electronic device may include, but is not limited to: basic information, resource information, location information, web page information and other information of the first card set.
  • the basic information is the first keyword searched by the user.
  • the resource information includes the number of cards in the first card set, and multiple webpage contents in the first card set (which can be understood as a collection of webpage content).
  • the resource information includes the original file of the webpage content.
  • the resource information includes information related to non-text web page content such as pictures, videos, audios and documents, for example but not limited to at least one of the following: name, title, comments, cover image, thumbnail, address information of the original file (such as a hyperlink), etc.
  • the position information includes the display position of each card in the first card set (for example, which card it is in the first card set) and the display position of each web page content in the first card set (for example, which content of which card it is). Optionally, the position information can be used to determine where to draw the cards and the web page content when the electronic device subsequently displays the first card set.
  • the web page information includes the address information (such as the URL) of the web page to which each web page content belongs.
  • the web page information can be used to implement the function of displaying the content page of the web page to which the web page content belongs. For example, the electronic device can, in response to a double-click on any content in the first card set, jump to the web page using the address information, recorded in the web page information, of the web page to which the content belongs; at this time, the specific content of the web page can be displayed.
  • the first card set includes three cards: the text card 412 in the user interface 410 shown in (A) of FIG. 4, the picture card 421 in the user interface 420 shown in (B) of FIG. 4, and the video card 431 in the user interface 430 shown in (C) of FIG. 4. Therefore, the basic information of the first card set saved by the electronic device is the keyword searched by the user, that is, "Xi'an Travel Route" displayed in the title 411 of the user interface 410.
  • the resource information of the first card set saved by the electronic device is a collection of web content in the first card set, specifically including: text content 4121, text content 4122 and text content 4123 in the text card 412, and picture content 4211 in the picture card 421.
  • the position information of the first card set saved by the electronic device includes the display position of the card in the first card set.
  • the text card 412 is the first card in the first card set.
  • the location information also includes the display location of each web page content in the resource information in the first card set.
  • the text content 4121 is the first content in the first card in the first card set (ie, the text card 412).
  • the web page information of the first card set saved by the electronic device includes the address information of the web page to which each web page content in the resource information belongs, for example, the URL of web page 1 to which the text content 4121 belongs, that is, the "URL 111" included in the source information 4121A corresponding to the text content 4121 in the user interface 410.
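  • A minimal sketch of one possible shape for this saved information is given below, populated with the concrete values from the example above; the field names are illustrative assumptions and not a defined storage format.

```python
# A sketch only: the saved information of the first card set (basic, resource,
# location and web page information).
first_card_set_info = {
    "basic_info": {"keyword": "Xi'an Travel Route"},          # the first keyword searched by the user
    "resource_info": {
        "card_count": 3,
        "contents": ["text content 4121", "text content 4122", "text content 4123",
                     "picture content 4211"],
    },
    "location_info": {
        # which card each content is drawn in, and at which position inside that card
        "text content 4121": {"card_index": 0, "position": 0},   # first content of the first card
    },
    "web_page_info": {
        # address information of the web page each content belongs to
        "text content 4121": "URL 111",
    },
}
```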
  • S113 The electronic device displays at least one entry control.
  • any entry control is used to trigger display of the first card set.
  • the electronic device displaying at least one entry control can also be understood as the electronic device setting at least one way to reload the first card set.
  • Any entry control can implement a way to reload the first card set.
  • the electronic device can determine the above-mentioned at least one entry control in response to a user operation, which can be understood as the user selecting the entry through which the first card set can be reloaded.
  • the entry control may be generated first.
  • a specific example is as follows:
  • the electronic device can generate/create a clickable component (that is, an entry control) in the favorites of the browser application, and the component can store the identification information of the first card set; the identification information can be used by the electronic device to determine the first card set to be displayed/loaded when a user operation (such as a click operation) on this component is received.
  • the electronic device can generate/create an entry control in the form of a widget on the desktop or a negative-screen application, and store the corresponding relationship between the entry control and the identification information of the first card set.
  • the identification information can be used by the electronic device to determine the first card set to be displayed/loaded when a user operation on the entry control is received.
  • the electronic device can generate a picture in the gallery, and the picture can include a QR code or other forms of content.
  • the QR code contains the identification information of the first card set.
  • the electronic device can obtain the identification information of the first card set by recognizing the QR code, thereby determining the first card set to be displayed/loaded.
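  • A minimal sketch of this entry-control bookkeeping is given below, assuming a simple mapping from an entry control to the identification information of a card set; the qrcode package (with Pillow) is used only to illustrate the gallery-picture entry, and all identifiers are assumptions.

```python
# A sketch only: map entry controls to card set identification information, and
# encode that identification into a QR-code picture for the gallery entry.
import qrcode

entry_control_to_card_set: dict[str, str] = {}   # e.g. "favorites_option_1922A" -> "card_set_xian_travel_route"

def register_entry_control(control_id: str, card_set_id: str) -> None:
    entry_control_to_card_set[control_id] = card_set_id

def resolve_card_set(control_id: str) -> str:
    # Called when a user operation (such as a click) on the entry control is received.
    return entry_control_to_card_set[control_id]

def make_gallery_entry_picture(card_set_id: str, path: str) -> None:
    # The gallery entry: a picture whose QR code contains the card set identification.
    qrcode.make(card_set_id).save(path)
```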
  • the user interface 1920 shown in (B) of FIG. 19A is the favorites interface of the browser application, and the option 1922A in the user interface 1920 is an entry control indicating the card set whose basic information is "Xi'an Travel Route".
  • the user interface 1930 is a favorites interface of a browser application
  • the list 1934 in the user interface 1930 includes an entry control (that is, option 1934A) indicating the card set whose basic information is "Xi'an Travel Route", and an entry control indicating the card set whose basic information is "Wuhan Travel Route".
  • the user interface 2010 shown in Figure 20 (A) is a desktop
  • the control 2011 in the user interface 2010 is an entry control indicating a card set whose basic information is "Xi'an Tour Route".
  • the user interface 2110 shown in (A) of Figure 21 is a user interface of a negative-screen application
  • the control 2111 in the user interface 2110 is an entry control indicating the card set whose basic information is "Xi'an Travel Route".
  • the user interface 2220 shown in Figure 22(B) is the user interface of the gallery application
  • the picture 2221 in the user interface 2220 is used to indicate a card set whose basic information is "Xi'an Travel Route"
  • the image recognition control 2222 in the user interface 2220 is the entry control of the card set.
  • S114 The electronic device receives a third user operation for the first entry control.
  • the first entry control is any one of the above-mentioned at least one entry control.
  • S115 The electronic device displays the first card set.
  • the electronic device may display the first set of cards in response to a third user operation.
  • the electronic device may open an application (for example, a browser) for implementing the display function of the first card set, and display the first card set in the user interface of the application.
  • the electronic device can call a display component in the application where the first entry control is located, and display the first set of cards through the display component.
  • the electronic device may receive a touch operation on option 1922A in the user interface 1920 shown in FIG. 19A(B) and display the user interface 410 shown in FIG. 19A(C) .
  • the electronic device may receive a touch operation on option 1934A in the user interface 1930 and display the user interface 410 shown in (C) of FIG. 19A .
  • the electronic device may receive a touch operation on the control 2011 in the user interface 2010 shown in FIG. 20(A) and display the user interface 410 shown in FIG. 20(C) .
  • the electronic device may receive a touch operation on the control 2111 in the user interface 2110 shown in FIG. 21(A) and display the user interface 410 shown in FIG. 21(C) .
  • the electronic device may receive a touch operation on the image recognition control 2222 in the user interface 2220 shown in FIG. 22(B) and display the user interface 410 shown in FIG. 22(C) .
  • the electronic device may also display the user interface 410 shown in (C) of FIG. 22 in response to a user operation for scanning the picture 2221 in the user interface 2220 shown in (B) of FIG. 22, for example, using the scan function in a chat application or a payment application.
  • the third user operation is also used to select card 1 in the first card set
  • S115 may include: the electronic device displays card 1 in the first card set in response to the third user operation.
  • the electronic device, in response to a user operation on the card control 2011C (used to display a text card) in the user interface 2010 shown in (A) of FIG. 20, displays the user interface 410 shown in (A) of FIG. 4.
  • the electronic device displays the user interface 420 shown in (B) of FIG. 4 in response to a user operation on the card control 2011C (used to display a picture card) in the desktop 2020 shown in (B) of FIG. 20.
  • the method further includes: the electronic device can switch the card set displayed in the first portal control in response to a user operation on the first portal control.
  • the electronic device may, in response to a sliding operation on the control 2011 (used to display the card set whose basic information is "Xi'an Travel Route") in the desktop 2020 shown in (B) of FIG. 20, display the user interface 2030 shown in (D) of Figure 20, and the control 2011 in the user interface 2030 is used to display the card set whose basic information is "Wuhan Travel Route".
  • S115 includes: the electronic device can obtain the saved information of the first card set according to the identification information of the first card set, and display the first card set according to the information.
  • the identification information of the first card set may be determined based on the first entry control.
  • the electronic device stores a corresponding relationship between the first entry control and the identification information of the first card set.
  • the identification information of the first card set corresponding to the first entry control is determined based on the corresponding relationship.
  • for example, when the first entry control displays the content of card set 1, what is determined based on the corresponding relationship is the identification information of card set 1; when the first entry control displays the content of card set 2, what is determined based on the corresponding relationship is the identification information of card set 2.
  • the electronic device can obtain the first keyword based on the saved basic information of the first card set; determine the number of cards included in the first card set and the web page content in each card based on the saved resource information of the first card set; determine the drawing position/order of the cards and of the web page content in the resource information based on the saved location information of the first card set; and implement, based on the saved web page information of the first card set, the function of viewing the content page of the web page to which the web page content in the first card set belongs.
  • the information of the first card set is stored in a memory of the electronic device, and the electronic device can obtain the information of the first card set from the memory.
  • in other examples, the information of the first card set is stored in the cloud server, and the electronic device can send a request message (for example, including the identification information of the first card set) to the cloud server and receive the information of the first card set sent by the cloud server based on the request message.
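  • A minimal sketch of this loading step is given below, assuming the saved information sits either in a local store or behind a hypothetical cloud endpoint; the function names, endpoint path and field names are illustrative assumptions.

```python
# A sketch only: fetch the saved card set information by its identification and
# walk the location information to draw cards and contents in order.
import json
import urllib.request

def load_card_set_info(card_set_id: str, local_store: dict, cloud_url: str = "") -> dict:
    if card_set_id in local_store:            # information saved in the electronic device's memory
        return local_store[card_set_id]
    if cloud_url:                             # information saved on the cloud server
        with urllib.request.urlopen(f"{cloud_url}/card_sets/{card_set_id}") as resp:
            return json.loads(resp.read())
    raise KeyError(card_set_id)

def display_card_set(card_set_id: str, local_store: dict) -> None:
    info = load_card_set_info(card_set_id, local_store)
    print("title:", info["basic_info"]["keyword"])        # the first keyword, shown as the title
    ordered = sorted(info["location_info"].items(),
                     key=lambda kv: (kv[1]["card_index"], kv[1]["position"]))
    for content, place in ordered:
        print(f"draw {content} in card {place['card_index']} at position {place['position']}")
```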
  • the user can still view and edit the first card set in S115, such as but not limited to: double-clicking content to view the content page of the web page to which it belongs, sliding to view more content in a card, dragging a card to delete the card, dragging a card to adjust the display position of the card, merging multiple cards into one card, long pressing to enter the card editing interface, clicking to enter the content editing interface, clicking to delete content in a card, dragging content to adjust its display position in the card, dragging content to adjust the card in which the content is displayed, creating a new card, selecting content through the add menu to add it to the new card, dragging content to move it to the new card, etc.
  • for details, refer to the descriptions in S109-S111, which are not repeated here.
  • any web page in the first web page collection may include content related to the first keyword, or may include content unrelated to the first keyword, which increases the difficulty for the user to find content that meets the search intention.
  • This application can filter out the content related to the first keyword from the web page content included in the first web page set and display the filtered content through the first card set, so that the user can quickly find, through the first card set, valuable content that meets the search intent without having to click into multiple web pages and browse entire web pages, reducing the time users spend sorting through search results and improving search efficiency.
  • the first card set can include various types of web content such as pictures, videos, audios, documents, etc., and is not limited to text types. The content is richer, more in line with user needs, and the user experience is better.
  • this application supports viewing, modifying, saving and secondary viewing/loading of the first card set to meet the personalized needs of users.
  • for example, the entrance for secondary loading of the card set can be displayed through the browser's favorites, the desktop, the negative screen, the gallery and other applications, which is highly flexible, increases the speed and accuracy with which the user obtains the first card set again, and makes it more convenient for users to use.
  • Figure 30 is a schematic flowchart of yet another search method provided by an embodiment of the present application.
  • the method may include but is not limited to the following steps:
  • S201 The electronic device receives a fourth user operation for the first web page in the first web page set.
  • S201 is an optional step.
  • the first set of web pages is the first set of web pages described in S103-S105 shown in Figure 28.
  • the electronic device may execute S101-S105 shown in Figure 28.
  • the first web page is any web page in the first web page set.
  • the web page list in the user interface 320 shown in (A) of FIG. 23 shows the summary information (which can be called web page cards) of three web pages in the first web page set: the web page card 322 indicating web page 1, the web page card 323 indicating web page 2, and the web page card 324 indicating web page 3.
  • the electronic device can receive a touch operation on the web page card 322, and the web page 1 indicated by the web page card 322 is the first web page.
  • S202 The electronic device requests the network device to obtain the display data of the first web page.
  • the electronic device may, in response to the fourth user operation, send a request message to the network device to request acquisition of the display data of the first web page.
  • S203 The network device sends the first web page data to the electronic device.
  • the network device may send the first web page data, that is, the display data of the first web page, to the electronic device based on the request message sent by the electronic device.
  • S204 The electronic device displays the first web page according to the first web page data.
  • the electronic device can display the specific content of the web page 1 indicated by the web page card 322 , that is, display (B) of FIG. 23 User interface 2300 is shown.
  • S205 The electronic device receives the fifth user operation.
  • the fifth user operation is used to trigger viewing of a second set of cards for the currently displayed first web page, the second set of cards including content in the first web page.
  • the fifth user operation is a click operation on the card control 2310 in the user interface 2300 shown in (B) of FIG. 23 .
  • S206 The electronic device obtains the third content collection from the content of the first web page.
  • the electronic device can filter out the third content collection that meets the user's search intention from the content of the first web page.
  • the content included in the third content set is related to the first keyword.
  • the content included in the third content set is the content of the first web page that has a strong correlation with the first keyword.
  • the way in which the electronic device obtains the third content set is similar to the way in which the first content set is obtained in S107 of Figure 28.
  • the way in which the electronic device obtains the third content set is exemplified below. It should be noted that the following mainly describes the differences from the method of obtaining the first content set; for other explanations, please refer to the description of the method of obtaining the first content set in S107 of Figure 28.
  • the electronic device can first calculate the similarity between each content in the first web page and the first keyword, and then filter out the web page content whose similarity is greater than or equal to a preset threshold. These filtered-out contents can constitute the third content set.
  • the electronic device may first calculate the similarity between each content in the first web page and the first keyword, Then compare the similarities of webpage contents belonging to the same type, and determine the webpage contents ranked in the top M positions (M is a positive integer). These determined webpage contents can form a third content collection.
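  • A minimal sketch of this "top-M per content type" variant is given below; keyword_similarity() stands in for the similarity calculation described for S107, and the per-type values of M are illustrative assumptions.

```python
# A sketch only: keep the top-M contents of each type from a single web page.
from collections import defaultdict

def top_m_per_type(page_contents: list[dict], keyword_similarity, m_per_type: dict[str, int]) -> list[dict]:
    by_type: dict[str, list[dict]] = defaultdict(list)
    for item in page_contents:                        # e.g. {"type": "text", "value": ...}
        item["similarity"] = keyword_similarity(item)    # similarity to the first keyword
        by_type[item["type"]].append(item)
    third_content_set: list[dict] = []
    for content_type, items in by_type.items():
        items.sort(key=lambda c: c["similarity"], reverse=True)
        third_content_set.extend(items[: m_per_type.get(content_type, 1)])
    return third_content_set
```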
  • S207 The electronic device generates a second card set based on the third content set.
  • the second card set may include at least one card.
  • any card included in the second card set may include one type of web content.
  • multiple web content such as text, pictures, videos, audios, or documents are displayed on different cards.
  • the division type can be more detailed, for example, static pictures and dynamic pictures are displayed on different cards.
  • the division type can also be coarser, for example, files such as videos, audios and documents are displayed on the same card.
  • the third content set described in S206-S207 includes: text 1 ("Popular travel routes are...") in the user interface 2300, text 2 ("Go to Xi'an and you must go to the Mausoleum of Qin Shi Huang... The ticket to Qin Shi Huang's Mausoleum is 120 yuan... The fare to Qin Shi Huang's Mausoleum is 20 yuan."), text 3 ("Popular Attractions in Xi'an") in web page 1 (not shown in the user interface 2300), picture 1 in the user interface 2300, and video 1 in web page 1 (not shown in the user interface 2300).
  • the second card set described in S207-S208 may include three cards, and the three cards include text content, picture content and video content respectively.
  • for the card including text content, see the text card 2413 (used to display text 1, text 2, and text 3) in the user interface 2410 shown in (A) of Figure 24;
  • for the card including picture content, see the picture card 2421 (used to display picture 1) in the user interface 2420 shown in (B) of Figure 24.
  • the electronic device can determine the display order of the web page content in a card based on the correlation (for example, characterized by similarity) between the web page content in the card and the first keyword; optionally, the stronger the correlation, the higher the display priority of the web page content in the card. For example, referring to the text card 2413 in the user interface 2410 shown in (A) of Figure 24, the multiple text contents displayed in the text card 2413 are arranged in order of correlation with the first keyword "Xi'an travel route" from high to low:
  • text content 2413A ("Popular travel routes are...")
  • text content 2413B ("You must go to Qin Shi Huang's Mausoleum when going to Xi'an... The ticket to Qin Shi Huang's Mausoleum is 120 yuan... The fare to Qin Shi Huang's Mausoleum is 20 yuan.")
  • text content 2413C (“Popular attractions in Xi'an.”).
  • the user can view and edit the second card set, for example but not limited to: double-clicking a content to view the content page of the webpage to which it belongs, sliding to view more content in a card, dragging a card to delete it, dragging a card to adjust its display position, merging multiple cards into one card, long-pressing to enter the card editing interface, clicking to enter the content editing interface, clicking to delete a content in a card, dragging a content to adjust its display position within a card, dragging a content to change the card on which it is displayed, creating a new card, selecting content for the new card through an add menu, dragging a content to move it to the new card, and so on (a data-model sketch of these operations is given below).
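  • The editing operations listed above can be modeled on a simple card-set structure; the following sketch assumes one possible data model (an ordered list of cards, each holding an ordered list of contents) and is not the only way to implement these operations.

```python
class CardSet:
    """A card set as an ordered list of cards; each card is an ordered list of contents."""

    def __init__(self, cards=None):
        self.cards = cards or []                     # e.g. [["text 1"], ["picture 1"]]

    def delete_card(self, index):                    # drag a card away to delete it
        del self.cards[index]

    def move_card(self, src, dst):                   # drag a card to adjust its position
        card = self.cards.pop(src)
        self.cards.insert(dst, card)

    def merge_cards(self, src, dst):                 # drag one card onto another card
        target = self.cards[dst]
        source = self.cards[src]
        del self.cards[src]
        target.extend(source)

    def new_card(self, contents=None):               # create a new (custom) card
        self.cards.append(list(contents or []))
        return len(self.cards) - 1

    def delete_content(self, card, i):               # delete one content in a card
        del self.cards[card][i]

    def move_content(self, src_card, i, dst_card):   # drag content to another card
        self.cards[dst_card].append(self.cards[src_card].pop(i))

    def reorder_content(self, card, i, j):           # drag content within a card
        self.cards[card].insert(j, self.cards[card].pop(i))
```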
  • the description and specific examples of viewing and editing the second card set are similar to those of the first card set in S109-S111 of Figure 28, as well as Figure 6, Figure 7A, Figure 7B, Figures 8-16, Figure 17A and Figure 17B, and will not be described again.
  • S209 The electronic device receives a sixth user operation for the first content in the first web page.
  • the first content is any web page content included in the first web page.
  • the first content is the text "Mausoleum of the First Emperor of Qin" in the user interface 2300 shown in (A) of FIG. 25A
  • the sixth user operation is for the function list 2330 included in the user interface 2300 Save touch operation of option 2330A.
  • the first content is the picture 2340A in the user interface 2300 shown in (A) of FIG. 26
  • the sixth user operation includes a sliding operation on the picture 2340A
  • the first content is the picture 2340A in the user interface 2300 shown in (A) of FIG. 27A, and the sixth user operation is a drag operation acting on the picture 2340A.
  • S210 The electronic device obtains at least one second content associated with the first content from the content of the first web page.
  • the electronic device can obtain at least one second content associated with the first content according to preset rules. For example, if the first content is the picture 2340A in the user interface 2300 shown in (A) of Figure 26, then two second contents can be obtained: the title 2340B of the picture 2340A shown in the user interface 2300 ("Xi'an popular tourist attractions") and the description 2340C of the picture 2340A ("The picture above mainly shows...").
  • the second content may be web content of the first web page that has a strong correlation with the first content (for example, represented by similarity).
  • the electronic device can first calculate the similarity between each content in the first web page and the first content, and then filter out the web page content whose similarity is greater than or equal to a preset threshold as the second content; for the calculation of the similarity, please refer to the description of the similarity in S107 of FIG. 28. It is not limited to this.
  • the electronic device can also filter out the web page contents ranked in the top M positions as the second content (M is a positive integer). For example, if the first content is the text "Xi'an Tourist Route One: ...", then the second content can be "Xi'an Tourist Route Two: ...".
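  • A hedged sketch of how the at least one second content could be selected, assuming a similarity function as described in S107 of Figure 28; the threshold variant and the top-M variant mirror the two strategies above.

```python
def get_associated_contents(first_content, page_contents, similarity,
                            threshold=None, m=None):
    """Return second content(s) associated with the selected first content.

    `similarity(a, b)` is an assumed scoring function; pass either `threshold`
    (keep everything at or above it) or `m` (keep the top-M ranked contents)."""
    candidates = [c for c in page_contents if c is not first_content]
    ranked = sorted(candidates, key=lambda c: similarity(first_content, c),
                    reverse=True)
    if threshold is not None:
        return [c for c in ranked if similarity(first_content, c) >= threshold]
    return ranked[:m] if m is not None else ranked
```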
  • the first content is the text "Qin Shi Huang's Mausoleum” included in the text 2320 in the user interface 2300 shown in (A) of Figure 25A
  • two second contents can be obtained: the text "The ticket to Qin Shi Huang's Mausoleum is 120 yuan" included in the text 2320,
  • and the text "The fare to Qin Shi Huang's Mausoleum is 20 yuan" included in the text 2320.
  • S211 The electronic device displays the second card set according to the first content and the above-mentioned at least one second content.
  • the electronic device can first create a new card in the second card set (which can be called a custom card), and display the first content and the above-mentioned at least one second content in the custom card.
  • the custom card is a card other than the cards included in the second card set described in S208.
  • a customized card may only include the web content of one web page.
  • the electronic device may create a new custom card 1 in the second card set.
  • the custom card 1 is used to display the content of the first web page.
  • the electronic device can create a new custom card 2 in the second card set.
  • the custom card 2 is used to display the content of the second webpage.
  • the electronic device can display the information of the first web page when displaying the custom card 1, to indicate that the custom card 1 is used to display the content of the first web page. For example, assume that the first web page is the web page 1 displayed in the user interface 2300 shown in (A) of Figure 25A.
  • the electronic device can display a card 2512 (ie, customized card 1) through the user interface 2510 shown in (B) of Figure 25A.
  • the card 2512 includes text content 2512A (used to display the first content), associated content 2512B and associated content 2512C (used to display the two second contents).
  • the user interface 2510 also includes title information 2511.
  • the title information 2511 displays the title of the web page 1 "Xi'an Travel Route Map", the website information "website 111" and the source information "source aaa".
  • the electronic device can display the first content and the second content in the custom card according to the order in which the first content and the second content are displayed in the first web page. Specific examples are as follows:
  • the picture 2340A has a title 2340B ("Xi'an popular tourist attractions") displayed above it and a description 2340C ("The above figure mainly shows...") displayed below it. Therefore, the custom card 2512 included in the user interface 2620 shown in (B) of Figure 26 displays, in order from top to bottom: picture 2340A, title 2340B, and description 2340C.
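  • A minimal sketch of this ordering rule, assuming a helper that returns each content's position (e.g. its index) in the first web page:

```python
def order_as_in_webpage(first_content, second_contents, position_of):
    """Display the selected content and its associated contents in the same
    order in which they appear in the first web page; `position_of` is an
    assumed function returning a content's position in the page."""
    return sorted([first_content, *second_contents], key=position_of)
```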
  • the electronic device may also determine the display order of the first content and the second content in the custom card based on the correlation (for example, represented by similarity) between the first content or the second content and the first keyword; optionally, the stronger the correlation, the higher the display priority of the web page content in the custom card.
  • the electronic device can also display the first content and the second content in the existing cards of the second card set, in a manner similar to the above-mentioned way of displaying the first content and the second content in the custom card.
  • the electronic device may determine, according to the types of the first content and/or the second content, the cards used to display the first content and the second content. For example, when the first content is text, the first content and the second content are displayed on the card including text content; or, when the first content is text and the second content is a picture, the first content is displayed on the card including text content, and the second content is displayed on the card including picture content.
  • the electronic device may also determine the cards in the second card set used to display the first content and the second content in response to the user operation. For example, in the embodiment shown in FIG. 27A , in the user interface 2700 shown in (B) of FIG. 27A , the selection control 2714A displayed in the option 2714 of the custom card is in a selected state, which can indicate that the custom card has been selected. The electronic device can save the first content and the second content into a custom card. At this time, the custom card is the card 2512 shown in the user interface 2620 shown in (C) of FIG. 26 .
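  • The card-selection logic described above (by content type, or by the card chosen through a user operation such as selecting the custom-card option) could be sketched as follows; the dictionary-based card lookup is an assumption for illustration.

```python
def place_contents(first_content, second_contents, cards_by_type,
                   user_selected_card=None):
    """Place the first content and its associated second contents on cards.

    If the user selected a target card, everything is saved there; otherwise
    each content goes to the card matching its type, creating that card when
    it does not exist yet. The type-to-card mapping is an assumption."""
    for content in [first_content, *second_contents]:
        if user_selected_card is not None:
            target = user_selected_card
        else:
            target = cards_by_type.setdefault(content["type"], [])
        target.append(content)
```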
  • the electronic device may not receive the fifth user operation (i.e., not perform S205), and may receive the sixth user operation after S204 (i.e., perform S209); in response to the sixth user operation, the electronic device may execute S206-S207 and S210, and then display the second card set (for details, please refer to the description of S208 and S211), where the order of S206-S207 and S210 is not limited.
  • the sixth user operation is a click operation on the save option 2330A in the user interface 2300 shown in (A) of FIG. 25A.
  • the electronic device may generate a second card set in response to the sixth user operation.
  • the second card set includes four cards: the text card 2413 in the user interface 2410 shown in (A) of Figure 24, the picture card 2421 in the user interface 2420 shown in (B) of Figure 24, the video card,
  • and the custom card 2512 in the user interface 2510 shown in (B) of Figure 25A, wherein:
  • the first three cards are used to display the third content set
  • the fourth card is used to display the first content and the above-mentioned at least one second content.
  • the electronic device can also perform S209-S211 first, and then perform S205-S208.
  • in this case, S211 is specifically: the electronic device generates the second card set according to the first content and the at least one second content, where the second card set includes at least one card, and the at least one card includes the first content and the at least one second content.
  • S207 is specifically: the electronic device generates at least one card in the second card set according to the third content set. For the description of the at least one card, please refer to the description of the cards included in the second card set in S207.
  • S211 specifically includes: the electronic device generates a custom card based on the first content and at least one second content.
  • the custom card includes the first content and at least one second content.
  • the custom card is a picture.
  • S211 specifically includes: the electronic device generates multiple cards based on the first content and at least one second content.
  • the multiple cards include the first content and at least one second content.
  • the multiple cards are used for displaying different types of content; the specific description is similar to the description in S109 of Figure 28 of different cards being used to display different types of web page content, and will not be described again.
  • the plurality of cards are, for example, the text card 2811 in the user interface 2810 shown in (A) of Figure 27B and the picture card 2821 in the user interface 2820 shown in (B) of Figure 27B.
  • the electronic device may only execute S209-S211 and not execute S205-S208.
  • the electronic device may only execute S205-S208 and not execute S209-S211.
  • S212 The electronic device saves the information of the second card set.
  • S212 is similar to S112 in Figure 28 .
  • the customized cards in the second card set only include the content of the first webpage, and the electronic device can save the information of the first webpage, such as but not limited to the title and address information (such as the URL) of the first webpage; the information of the first web page can be displayed together with the custom card when the custom card is displayed (for a specific example, see the description in S211 that the electronic device can display the information of the first web page when displaying the custom card 1).
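  • For illustration, the saved information of the second card set could be serialized as a structured record such as the following; the JSON layout and field names are assumptions chosen to cover the items mentioned in this application (the keyword, the number of cards, the card contents, and the title/URL of the first web page), not part of this application.

```python
import json

def save_card_set_info(cards, first_keyword, webpage_title, webpage_url, path):
    """Persist the second card set so it can be loaded and displayed again,
    together with the first web page's information for the custom card."""
    info = {
        "keyword": first_keyword,
        "card_count": len(cards),
        "cards": cards,                   # e.g. {"text": [...], "picture": [...]}
        "source_webpage": {"title": webpage_title, "url": webpage_url},
    }
    with open(path, "w", encoding="utf-8") as f:
        json.dump(info, f, ensure_ascii=False, indent=2)
```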
  • the electronic device can also display at least one entry control (used to trigger the display of the second card set).
  • the specific description is similar to S113 in Figure 28 and will not be described again.
  • the electronic device can display the second set of cards in response to a user operation on any entry control.
  • the specific description is similar to S114-S115 of Figure 28 and will not be described again.
  • the electronic device can, in response to a user operation, cancel adding the first content or any second content to the second card set. For example, in the implementation shown in Figure 25A, the card 2512 in the user interface 2510 shown in (B) of Figure 25A includes text content 2512A (used to display the first content), text content 2512B (used to display one second content) and text content 2512C (used to display another second content).
  • the delete option 2512D in the user interface 2510 is used to trigger the deletion of the text content 2512A
  • the delete option 2512E in the user interface 2510 is used to trigger the deletion of the text content 2512B
  • the delete option 2512F in the user interface 2510 is used to trigger the deletion of the text content 2512C.
  • the user can choose whether to retain the selected first content and any second content associated with the first content, which makes the use more flexible and the user experience better.
  • any web page in the first web page collection may include content related to the first keyword, or may include content unrelated to the first keyword, which increases the difficulty for the user to find content that meets the search intention.
  • this application can filter out the content related to the first keyword from the content of the first webpage after the user chooses to view the specific content of the first webpage, and display the filtered content through the second card set, so that the user can quickly obtain valuable content in the first webpage that meets the user's search intention through the second card set, without requiring the user to browse the entire webpage, reducing the time the user spends sorting out search results, improving search efficiency, and providing a better user experience.
  • the second card set, like the first card set, can include multiple types of content. It also supports viewing, modification, saving and secondary viewing/loading, so it has the same effects as the first card set; see the description of the first card set in Figure 28 for details, which will not be repeated here.
  • this application supports users in customizing the selection of content in the first webpage and adding it to the second card set. That is to say, the content automatically filtered out by the electronic device from the same webpage and the content selected by the user can be saved in the same functional window (that is, the second card set), which is highly flexible and greatly facilitates users' reading and secondary viewing/loading.
  • the electronic device can also filter out the second content related to the first content from the content of the first web page, and display the first content and the second content through the second card set, eliminating the need for the user to manually search for and add the second content, further reducing the time needed to obtain content that matches the user's search intent.
  • the electronic device may not filter the second content related to the first content, and the second card set may not display the second content, which is not limited in this application.
  • the electronic device can not only extract the webpage content related to the search keywords from the multiple webpages returned by the search function, but also extract the webpage content related to the search keywords from the webpages viewed by the user, and display these extracted contents to users in the form of a card set.
  • the electronic device can also display the content selected by the user in a web content page, together with the content associated with the selected content in that content page, in the form of a card set. Therefore, users can quickly obtain content in the search results that meets their search intent simply through the card set, which greatly reduces the time it takes for users to sort through search results and find what they need.
  • the card set can support reading, viewing the original webpage to which the content belongs, performing editing operations such as adding, deleting, modifying, adjusting order, saving and secondary loading, etc., to meet the various needs of users and make it more convenient to use.
  • the terms "first" and "second" are only used for descriptive purposes and cannot be understood as indicating or implying relative importance or implicitly specifying the number of indicated technical features. Therefore, a feature defined with "first" or "second" may explicitly or implicitly include one or more of the features. In the description of the embodiments of this application, unless otherwise specified, "plurality" means two or more.
  • the methods provided by the embodiments of this application can be implemented in whole or in part by software, hardware, firmware, or any combination thereof.
  • when implemented using software, the methods may be implemented in whole or in part in the form of a computer program product.
  • the computer program product includes one or more computer instructions.
  • when the computer program instructions are loaded and executed on a computer, the processes or functions described in the embodiments of the present application are generated in whole or in part.
  • the computer may be a general-purpose computer, a special-purpose computer, a computer network, a network device, a user equipment, or other programmable device.
  • the computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from a website, computer, server, or data center to another website, computer, server or data center through wired (such as coaxial cable, optical fiber, or digital subscriber line (DSL)) or wireless (such as infrared, radio, or microwave) means.
  • the computer-readable storage medium can be any available medium that can be accessed by a computer, or a data storage device such as a server or data center that integrates one or more available media.
  • the available media can be magnetic media (for example, floppy disks, hard disks, or tapes), optical media (for example, digital video discs (DVD)), or semiconductor media (for example, solid state disks (SSD)), etc.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Data Mining & Analysis (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

This application provides a search method and an electronic device. The method is applied to an electronic device and includes: obtaining a first keyword input by a user; sending, to a network device, a first search request including the first keyword; receiving a search result set sent by the network device based on the first search request; displaying a first interface, where the first interface includes the search result set, and the search result set includes a first search result related to a first web page and a second search result related to a second web page; receiving a first user operation; in response to the first user operation, generating a first card set, where the first card set includes a first card, and the first card includes first content in the first web page and second content in the second web page; and after the first user operation, displaying a second interface, where the second interface includes the first card. This application can integrate multiple contents from multiple web pages and provide them to the user in the form of cards, reducing the time for the user to obtain the required content and improving search efficiency.

Description

一种搜索方法及电子设备
本申请要求于2022年06月24日提交中国专利局、申请号为202210724383.1、申请名称为“一种搜索方法及电子设备”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本申请涉及计算机技术领域,尤其涉及一种搜索方法及电子设备。
背景技术
用户通过终端的搜索功能(例如搜索引擎提供的搜索功能)输入关键词并进行搜索时,终端返回的是包括多个网页的概要信息的网页列表,为了获取所需内容,用户需要逐个点击网页列表包括的网页以查看网页的详细内容,一个网页包括的内容通常很多,而终端(尤其是体积较小的手机等终端)一次展示的内容有限,因此,用户需要花费大量的时间去寻找所需内容,搜索效率很低,用户体验感差。
发明内容
本申请公开了一种搜索方法及电子设备,能够将至少一个网页中的多个内容整合到一起提供给用户,减少用户获取所需内容的时间,提升搜索效率。
第一方面,本申请提供了一种搜索方法,应用于电子设备,该方法包括:获取用户输入的第一关键词;向网络设备发送第一搜索请求,所述第一搜索请求包括所述第一关键词;接收所述网络设备基于所述第一搜索请求发送的搜索结果集合;显示第一界面,所述第一界面包括所述搜索结果集合,所述搜索结果集合包括第一搜索结果和第二搜索结果,所述第一搜索结果和第一网页相关,所述第二搜索结果和第二网页相关;接收第一用户操作;响应所述第一用户操作,生成第一卡片集,所述第一卡片集包括第一卡片,所述第一卡片包括第一内容和第二内容,所述第一网页包括所述第一内容,所述第二网页包括所述第二内容;在所述第一用户操作之后,显示第二界面,所述第二界面包括所述第一卡片。
在上述方法中,电子设备可以自动提取第一搜索结果中的第一内容,以及第二搜索结果中的第二内容,并通过第一卡片集中的第一卡片展示提取出的内容,让用户可以通过第一卡片集快速获取到搜索结果中符合用户搜索意图的内容,而无需点击多个网页卡片和浏览完整个网页,减少用户整理搜索结果时执行的用户操作和花费的时间,大大提高了搜索效率。
在一种可能的实现方式中,所述第一内容和所述第二内容均与所述第一关键词相关联。
例如,所述第一内容和所述第一关键词的相似度大于或等于预设阈值,所述第二内容和所述第一关键词的相似度大于或等于预设阈值。
在上述方法中,第一卡片集中的第一卡片包括的内容和用户输入的第一关键词相关,避免搜索结果中和第一关键词无关的内容影响用户整理搜索结果,进一步提升搜索效率。
在一种可能的实现方式中,上述方法还包括:接收作用于所述第一界面中的所述第一搜索结果的第二用户操作;显示所述第一网页;接收第三用户操作;响应所述第三用户操作,生成第二卡片集,所述第二卡片集包括第二卡片,所述第一卡片包括第三内容和第四内容, 所述第三内容和所述第四内容均为所述第一网页的内容;显示第三界面,所述第三界面包括所述第二卡片。
在一种可能的实现方式中,所述第三内容和所述第四内容与所述第一关键词相关联。例如,所述第三内容和所述第一关键词的相似度大于或等于预设阈值,所述第四内容和所述第一关键词的相似度大于或等于预设阈值。
在上述方法中,用户打开第一网页后,电子设备可以从第一网页中筛选出和第一关键词相关联的第三内容和第四内容,并通过第二卡片集中的第二卡片展示筛选出来的内容,让用户可以通过第二卡片集快速获取到第一网页中符合用户搜索意图的内容,无需浏览完整个第一网页,减少用户整理搜索结果时执行的用户操作和花费的时间,提升搜索效率。
在一种可能的实现方式中,所述方法还包括:接收用户作用于所述第一网页上的选择操作;根据所述选择操作获取第一信息;接收第四用户操作;响应所述第四用户操作,生成第三卡片,第三卡片为所述第二卡片集中的卡片,所述第三卡片包括第五内容和第六内容,所述第五内容和所述第六内容均为所述第一网页的内容,所述第五内容和所述第六内容均与所述第一信息相关联;显示第四界面,所述第四界面包括所述第三卡片。
在上述方法中,用户可以自定义选择第一网页中的第一信息,电子设备可以从第一网页中筛选出和第一信息相关联的第五内容和第六内容,并通过第二卡片集中的第三卡片展示筛选出来的内容,用户自定义选择网页内容提升了搜索的灵活性,并且,无需用户人工查找第五内容和第六内容以及将这些内容添加到记事本、便签等存储位置中,减少用户整理搜索结果时执行的用户操作和花费的时间,使用起来更加方便快捷。
在一种可能的实现方式中,所述第一卡片集还包括第四卡片,所述第一卡片包括第一类型的内容,所述第四卡片包括第二类型的内容,所述第一类型和所述第二类型不同。
例如,第一卡片包括文本类型的内容,第四卡片包括图片类型的内容。
在上述方法中,不同类型的内容可以通过不同的卡片展示,无需用户手动将同一类型的内容整理到一起,浏览起来更加方便,也提升了用户获取所需内容的效率,例如,用户可能只想获取文本类型的内容,因此,用户可以只查看包括文本类型的内容的卡片就可以快速获取到所需内容。
在一种可能的实现方式中,所述生成第一卡片集,包括:接收第五用户操作;响应所述第五用户操作,从所述搜索结果集合中选择所述第一搜索结果和所述第二搜索结果;根据所述第一搜索结果和所述第二搜索结果生成所述第一卡片集。
在上述方法中,用于生成第一卡片集的第一搜索结果和第二搜索结果可以是用户自定义选择的,可以满足用户的个性化需求,用户体验感更好。
在一种可能的实现方式中,当所述第一内容和所述第一关键词的相似度大于所述第二内容和所述第一关键词的相似度时,所述第一内容位于所述第二内容之前;或者,当所述第一搜索结果位于所述第二搜索结果之前时,所述第一内容位于所述第二内容之前。
在上述方法中,和第一关键词的相似度较大的内容可以显示在较前的位置,用户可以优先看到电子设备预测的更符合用户的搜索意图的内容,减少用户查找所需内容的时间,进一步提高搜索效率。或者,卡片中的内容的显示顺序可以和对应的搜索结果的显示顺序一致,统一搜索结果集合和卡片集的显示效果,用户浏览体验更好。
在一种可能的实现方式中,所述显示第二界面之后,所述方法还包括:显示第五界面,所述第五界面包括第一控件,所述第五界面为所述电子设备的桌面,负一屏或第一应用的收藏界面,所述第一控件与所述第一卡片集相关联;接收作用于所述第一控件的第六用户操作; 显示第六界面,所述第六界面包括所述第一卡片集中的第五卡片。
在一种可能的实现方式中,所述显示第二界面之后,所述方法还包括:显示图库的用户界面,所述图库的用户界面包括第一图片,所述第一图片和所述第一卡片集相关联;接收用于识别所述第一图片的用户操作;显示所述第六界面。
在上述方法中,用户可以通过桌面、负一屏、第一应用的收藏夹或者图库中的图片再次查看第一卡片集,获取方式多种多样,灵活性强,增加了用户再次查看第一卡片集的概率,功能可用性强。
在一种可能的实现方式中,所述方法还包括:接收作用于所述第二界面中的所述第一内容的第七用户操作;显示所述第一网页。
例如,所述第七用户操作为单击操作或双击操作。
在上述方法中,用户可以操作第一卡片中的任意一个内容,以查看该内容所属的网页,无需用户手动查找该内容对应的搜索结果和点击该搜索结果,用户使用起来更加方便。
在一种可能的实现方式中,所述方法还包括:接收作用于所述第二界面中的所述第一卡片的第八用户操作;响应所述第八用户操作,删除所述第一卡片;或者,接收作用于所述第二界面中的所述第一内容的第九用户操作;响应所述第九用户操作,删除所述第一内容。
例如,第八用户操作为向上或向下拖动的用户操作,或者,第八用户操作为作用于所述第一卡片对应的删除控件的点击操作。
例如,第九用户操作为作用于所述第一内容对应的删除控件的点击操作。
在上述方法中,用户可以删除第一卡片集中的任意一个卡片,用户也可以删除第一卡片中的任意一个内容,满足用户的个性化需求,提升用户体验感。
在一种可能的实现方式中,所述方法还包括:接收作用于所述第二界面中的所述第一内容的第十用户操作;响应所述第十用户操作,修改所述第一内容为第七内容;在所述第一卡片中显示所述第七内容。
在上述方法中,用户可以修改第一卡片中的任意一个内容,无需用户手动复制该内容到记事本等位置中再进行修改,减少用户操作,满足用户的个性化需求,提升用户体验感。
在一种可能的实现方式中,所述第一卡片中的所述第一内容位于所述第二内容之前;所述方法还包括:接收作用于所述第二界面中的所述第一内容的第十一用户操作;响应所述第十一用户操作,调整所述第一内容和所述第二内容在所述第一卡片中的显示位置;显示所述调整后的所述第一卡片,所述调整后的所述第一卡片中的所述第一内容位于所述第二内容之后。
例如,所述第十一用户操作为将第一内容拖动至所述第二内容所在位置的用户操作。
在上述方法中,用户可以调整第一卡片中的内容的显示顺序,满足用户的个性化需求,提升用户体验感。
在一种可能的实现方式中,所述第一卡片集还包括第六卡片;所述方法还包括:接收作用于所述第二界面中的所述第一内容的第十二用户操作;响应所述第十二用户操作,将所述第一内容从所述第一卡片中移动至所述第六卡片中;响应所述第十二用户操作,所述第一卡片不包括所述第一内容,所述第六卡片包括所述第一内容。
例如,所述第十二用户操作为向左或向右拖动的用户操作。
在上述方法中,用户可以将第一卡片中的第一内容移动至第一卡片集中的其他卡片中,满足用户的个性化需求,提升用户体验感。
在一种可能的实现方式中,所述第一卡片集还包括第七卡片;所述方法还包括:接收作 用于所述第二界面中的所述第一卡片的第十三用户操作;响应所述第十三用户操作,将所述第一卡片和所述第七卡片合并为第八卡片其中,所述第八卡片包括所述第一卡片中的内容和所述第七卡片中的内容。
例如,所述第十三用户操作为将第一卡片拖动至第七卡片所在位置的用户操作。
在上述方法中,用户可以合并第一卡片集中的任意两个卡片,满足用户的个性化需求,提升用户体验感。
在一种可能的实现方式中,所述方法还包括:接收第十四用户操作;显示第七界面,所述第七界面包括所述第一卡片集中的卡片的内容;接收作用于所述第七界面上的第十五用户操作;根据所述第十五用户操作获取所述第一卡片集中的第九卡片包括的第八内容;接收第十六用户操作;响应所述第十六用户操作,生成第十卡片,所述第十卡片为所述第一卡片集中的卡片,所述第十卡片包括所述第八内容;显示第八界面,所述第八界面包括所述第十卡片。
在上述方法中,用户可以在第一卡片集中选择第九卡片包括的第八内容,并基于选择的内容在第一卡片集中生成第十卡片,满足用户的个性化需求,并且无需用户手动在第十卡片中添加第八内容,减少用户操作,提升用户体验感。
在一种可能的实现方式中,所述方法还包括:保存所述第一卡片集的信息,所述第一卡片集的信息包括以下至少一项:所述第一关键词、所述第一卡片集包括的卡片的数量、所述第一卡片集中的卡片包括的内容、所述第一卡片在所述第一卡片集中的显示位置、所述第一内容在所述第一卡片中显示、所述第一内容所属的所述第一网页的信息。
第二方面,本申请提供了又一种搜索方法,应用于电子设备,该方法包括:显示第一网页;获取与所述第一网页相关的第一信息,其中,所述第一信息为搜索所述第一网页使用的第一关键词,或者所述第一信息为根据用户作用于所述第一网页上的选择操作获取的信息;接收第一用户操作;响应所述第一用户操作,生成第一卡片集,所述第一卡片集包括第一卡片,所述第一卡片包括第一内容和第二内容,所述第一内容和所述第二内容均为所述第一网页的内容,所述第一内容和所述第二内容均与所述第一信息相关联;在所述第一用户操作之后,显示第一界面,所述第一界面包括所述第一卡片。
在上述方法中,电子设备可以获取用户搜索时输入的第一关键词(第一信息)或者用户在第一网页中自定义选择的第一信息,并自动从第一网页中筛选出和第一信息相关联的第一内容和第二内容,并通过第一卡片集中的第一卡片展示筛选出来的内容,让用户可以通过第一卡片集快速获取到第一网页中符合用户搜索意图的内容,无需浏览完整个第一网页,减少用户整理搜索结果时执行的用户操作和花费的时间,提升搜索效率。
在一种可能的实现方式中,所述显示第一网页,包括:获取用户输入的所述第一关键词;向网络设备发送第一搜索请求,所述第一搜索请求包括所述第一关键词;接收所述网络设备基于所述第一搜索请求发送的搜索结果集合;显示第二界面,所述第二界面包括所述搜索结果集合,所述搜索结果集合包括第一搜索结果和第二搜索结果,所述第一搜索结果和所述第一网页相关,所述第二搜索结果和第二网页相关;接收作用于所述第一搜索结果的第二用户操作;响应所述第二用户操作,显示所述第一网页。
在一种可能的实现方式中,所述第一内容和所述第二内容均与所述第一信息相关联,包括:所述第一内容和所述第一关键词的相似度大于或等于预设阈值,所述第二内容和所述第一关键词的相似度大于或等于预设阈值。
在一种可能的实现方式中,当所述第一信息为根据用户作用于所述第一网页上的选择操作获取的信息时,所述第一信息包括所述第一网页中的文本、图片、音频和视频中的至少一项。
在上述方法中,用户可以自定义选择第一网页中的第一信息,提升了搜索的灵活性。并且,无需用户在第一网页中人工查找和第一信息关联的第一内容和第二内容,以及将这些内容添加到记事本、便签等存储位置中,减少用户整理搜索结果时执行的用户操作和花费的时间,使用起来更加方便快捷。
在一种可能的实现方式中,所述第一信息包括所述第一网页中的文本、图片、音频和视频中的至少一项时,所述第一卡片集还包括第二卡片,所述第一卡片包括第一类型的内容,所述第二卡片包括第二类型的内容,所述第一类型和所述第二类型不同。
例如,第一卡片包括音频类型的内容,第二卡片包括视频类型的内容。
在上述方法中,不同类型的内容可以通过不同的卡片展示,无需用户手动将同一类型的内容整理到一起,浏览起来更加方便,也提升了用户获取所需内容的效率,例如,用户可能只想获取视频类型的内容,因此,用户可以只查看包括视频类型的内容的卡片就可以快速获取到所需内容。
在一种可能的实现方式中,当所述第一内容和所述第一信息的相似度大于所述第二内容和所述第一信息的相似度时,所述第一卡片中所述第一内容位于所述第二内容之前;或者,当所述第一网页中所述第一内容位于所述第二内容之前时,所述第一卡片中所述第一内容位于所述第二内容之前。
在上述方法中,和第一信息的相似度较大的内容可以显示在较前的位置,用户可以优先看到电子设备预测的更符合用户的搜索意图的内容,减少用户查找所需内容的时间,进一步提高搜索效率。或者,卡片中的内容的显示顺序可以和在网页中的显示顺序一致,统一网页和卡片集的显示效果,用户浏览体验更好。
在一种可能的实现方式中,所述显示第一界面之后,所述方法还包括:显示第三界面,所述第三界面包括第一控件,所述第三界面为所述电子设备的桌面,负一屏或第一应用的收藏界面,所述第一控件与所述第一卡片集相关联;接收作用于所述第一控件的第三用户操作;显示第四界面,所述第四界面包括所述第一卡片集中的第三卡片。
在一种可能的实现方式中,所述显示第一界面之后,所述方法还包括:显示图库的用户界面,所述图库的用户界面包括第一图片,所述第一图片和所述第一卡片集相关联;接收用于识别所述第一图片的用户操作;显示所述第四界面。
在上述方法中,用户可以通过桌面、负一屏、第一应用的收藏夹或者图库中的图片再次查看第一卡片集,获取方式多种多样,灵活性强,增加了用户再次查看第一卡片集的概率,功能可用性强。
在一种可能的实现方式中,所述方法还包括:接收作用于所述第一界面中的所述第一内容的第四用户操作;显示所述第一网页。
例如,所述第四用户操作为单击操作或双击操作。
在上述方法中,用户可以操作第一卡片中的任意一个内容,以查看该内容所属的网页,无需用户手动查找该内容对应的搜索结果和点击该搜索结果,用户使用起来更加方便。
在一种可能的实现方式中,所述方法还包括:接收作用于所述第一界面中的所述第一卡片的第五用户操作;响应所述第五用户操作,删除所述第一卡片;或者,接收作用于所述第一界面中的所述第一内容的第六用户操作;响应所述第六用户操作,删除所述第一内容。
例如,第五用户操作为向上或向下拖动的用户操作,或者,第五用户操作为作用于所述第一卡片对应的删除控件的点击操作。
例如,第六用户操作为作用于所述第一内容对应的删除控件的点击操作。
在上述方法中,用户可以删除第一卡片集中的任意一个卡片,用户也可以删除第一卡片中的任意一个内容,满足用户的个性化需求,提升用户体验感。
在一种可能的实现方式中,所述方法还包括:接收作用于所述第一界面中的所述第一内容的第七用户操作;响应所述第七用户操作,修改所述第一内容为第三内容;在所述第一卡片中显示所述第三内容。
在上述方法中,用户可以修改第一卡片中的任意一个内容,无需用户手动复制该内容到记事本等位置中再进行修改,减少用户操作,满足用户的个性化需求,提升用户体验感。
在一种可能的实现方式中,所述第一卡片中的所述第一内容位于所述第二内容之前;所述方法还包括:接收作用于所述第一界面中的所述第一内容的第八用户操作;响应所述第八用户操作,调整所述第一内容和所述第二内容在所述第一卡片中的显示位置;显示所述调整后的所述第一卡片,所述调整后的所述第一卡片中的所述第一内容位于所述第二内容之后。
例如,所述第八用户操作为将第一内容拖动至所述第二内容所在位置的用户操作。
在上述方法中,用户可以调整第一卡片中的内容的显示顺序,满足用户的个性化需求,提升用户体验感。
在一种可能的实现方式中,所述第一卡片集还包括第四卡片;所述方法还包括:接收作用于所述第一界面中的所述第一内容的第九用户操作;响应所述第九用户操作,将所述第一内容从所述第一卡片中移动至所述第四卡片中;响应所述第九用户操作,所述第一卡片不包括所述第一内容,所述第四卡片包括所述第一内容。
例如,所述第九用户操作为向左或向右拖动的用户操作。
在上述方法中,用户可以将第一卡片中的第一内容移动至第一卡片集中的其他卡片中,满足用户的个性化需求,提升用户体验感。
在一种可能的实现方式中,所述第一卡片集还包括第五卡片;所述方法还包括:接收作用于所述第一界面中的所述第一卡片的第十用户操作;响应所述第十用户操作,将所述第一卡片和所述第五卡片合并为第六卡片其中,所述第六卡片包括所述第一卡片中的内容和所述第五卡片中的内容。
例如,所述第十用户操作为将第一卡片拖动至第五卡片所在位置的用户操作。
在上述方法中,用户可以合并第一卡片集中的任意两个卡片,满足用户的个性化需求,提升用户体验感。
在一种可能的实现方式中,所述方法还包括:接收第十一用户操作;显示第五界面,所述第五界面包括所述第一卡片集中的卡片的内容;接收作用于所述第五界面上的第十二用户操作;根据所述第十二用户操作获取所述第一卡片集中的第七卡片包括的第四内容;接收第十三用户操作;响应所述第十三用户操作,生成第八卡片,所述第八卡片为所述第一卡片集中的卡片,所述第八卡片包括所述第四内容;显示第六界面,所述第六界面包括所述第八卡片。
在上述方法中,用户可以在第一卡片集中选择第七卡片包括的第四内容,并基于选择的内容在第一卡片集中生成第八卡片,满足用户的个性化需求,并且无需用户手动在第八卡片中添加第四内容,减少用户操作,提升用户体验感。
在一种可能的实现方式中,所述方法还包括:保存所述第一卡片集的信息,所述第一卡 片集的信息包括以下至少一项:所述第一网页的信息、所述第一信息、所述第一卡片集包括的卡片的数量、所述第一卡片集中的卡片包括的内容、所述第一卡片在所述第一卡片集中的显示位置、所述第一内容在所述第一卡片中显示。
第三方面,本申请提供了一种电子设备,包括收发器、处理器和存储器,上述存储器用于存储计算机程序,上述处理器调用上述计算机程序,用于执行上述任一方面任意一种可能的实现方式中的搜索方法。
第四方面,本申请提供了一种电子设备,包括一个或多个处理器和一个或多个存储器。该一个或多个存储器与一个或多个处理器耦合,一个或多个存储器用于存储计算机程序代码,计算机程序代码包括计算机指令,当一个或多个处理器执行计算机指令时,使得电子设备执行上述任一方面任一项可能的实现方式中的搜索方法。
第五方面,本申请提供了一种计算机存储介质,该计算机存储介质存储有计算机程序,该计算机程序被处理器执行时,实现执行上述任一方面任一项可能的实现方式中的搜索方法。
第六方面,本申请提供了一种计算机程序产品,当该计算机程序产品在电子设备上运行时,使得该电子设备执行上述任一方面任一项可能的实现方式中的搜索方法。
第七方面,本申请提供一种电子设备,该电子设备包括执行本申请任一种实现方式所介绍的方法或装置。上述电子设备例如为芯片。
附图说明
以下对本申请实施例用到的附图进行介绍。
图1A是本申请实施例提供的一种搜索***的架构示意图;
图1B是本申请实施例提供的一种搜索***的交互示意图;
图2A是本申请实施例提供的一种电子设备的硬件结构示意图;
图2B是本申请实施例提供的一种电子设备的软件架构示意图;
图3是本申请实施例提供的一种实施方式的用户界面的示意图;
图4是本申请实施例提供的又一种实施方式的用户界面的示意图;
图5A是本申请实施例提供的又一种实施方式的用户界面的示意图;
图5B是本申请实施例提供的又一种实施方式的用户界面的示意图;
图6是本申请实施例提供的又一种实施方式的用户界面的示意图;
图7A是本申请实施例提供的又一种实施方式的用户界面的示意图;
图7B是本申请实施例提供的又一种实施方式的用户界面的示意图;
图8是本申请实施例提供的又一种实施方式的用户界面的示意图;
图9是本申请实施例提供的又一种实施方式的用户界面的示意图;
图10是本申请实施例提供的又一种实施方式的用户界面的示意图;
图11是本申请实施例提供的又一种实施方式的用户界面的示意图;
图12是本申请实施例提供的又一种实施方式的用户界面的示意图;
图13是本申请实施例提供的又一种实施方式的用户界面的示意图;
图14是本申请实施例提供的又一种实施方式的用户界面的示意图;
图15是本申请实施例提供的又一种实施方式的用户界面的示意图;
图16是本申请实施例提供的又一种实施方式的用户界面的示意图;
图17A是本申请实施例提供的又一种实施方式的用户界面的示意图;
图17B是本申请实施例提供的又一种实施方式的用户界面的示意图;
图18是本申请实施例提供的又一种实施方式的用户界面的示意图;
图19A是本申请实施例提供的又一种实施方式的用户界面的示意图;
图19B是本申请实施例提供的又一种实施方式的用户界面的示意图;
图20是本申请实施例提供的又一种实施方式的用户界面的示意图;
图21是本申请实施例提供的又一种实施方式的用户界面的示意图;
图22是本申请实施例提供的又一种实施方式的用户界面的示意图;
图23是本申请实施例提供的又一种实施方式的用户界面的示意图;
图24是本申请实施例提供的又一种实施方式的用户界面的示意图;
图25A是本申请实施例提供的又一种实施方式的用户界面的示意图;
图25B是本申请实施例提供的又一种实施方式的用户界面的示意图;
图25C是本申请实施例提供的又一种实施方式的用户界面的示意图;
图26是本申请实施例提供的又一种实施方式的用户界面的示意图;
图27A是本申请实施例提供的又一种实施方式的用户界面的示意图;
图27B是本申请实施例提供的又一种实施方式的用户界面的示意图;
图28是本申请实施例提供的一种搜索方法的流程示意图;
图29是本申请实施例提供的一种第一内容集合的获取过程的示意图;
图30是本申请实施例提供的又一种搜索方法的流程示意图。
具体实施方式
下面将结合附图对本申请实施例中的技术方案进行描述。其中,在本申请实施例的描述中,除非另有说明,“/”表示或的意思,例如,A/B可以表示A或B;文本中的“和/或”仅仅是一种描述关联对象的关联关系,表示可以存在三种关系,例如,A和/或B,可以表示:单独存在A,同时存在A和B,单独存在B这三种情况,另外,在本申请实施例的描述中,“多个”是指两个或多于两个。
以下,术语“第一”、“第二”仅用于描述目的,而不能理解为暗示或暗示相对重要性或者隐含指明所指示的技术特征的数量。由此,限定有“第一”、“第二”的特征可以明示或者隐含地包括一个或者更多个该特征,在本申请实施例的描述中,除非另有说明,“多个”的含义是两个或两个以上。
用户使用终端的搜索功能(例如搜索引擎提供的搜索功能)时,希望得到和搜索的关键词(简称搜索关键词)相关的内容,该内容可能包括一个网页中的一段文本、一张图片、一个视频或其他类型的内容。然而,搜索功能返回的是包括多个网页的概要信息(可称为网页卡片)的网页列表,其中,可以在网页卡片中输出网页关键内容的动态效果。为了获取符合搜索意图的网页内容,用户往往需要点击网页列表包括的多个网页卡片以查看对应网页的详细内容,一个网页包括的内容通常很多,而终端(尤其是体积较小的手机等终端)一次展示的内容有限,虽然网页内容中的关键词可以突出显示(如高亮显示),但用户仍然需要浏览完整个网页以查看是否存在符合搜索意图的文本、图片、视频等内容。也就是说,用户需要执行多次操作、花费大量时间去寻找符合搜索意图的内容,搜索效率很低,用户体验感差。
本申请提供了一种搜索方法及电子设备。电子设备接收到用户输入的关键词(可简称为搜索关键词)后,可以基于该关键词得到至少一个网页,并基于这至少一个网页生成至少一 张卡片,这至少一张卡片可以包括这至少一个网页中和搜索关键词相关的内容。这至少一张卡片可以提供给用户查看、修改和保存。可以理解为是,电子设备自动对这至少一个网页的内容进行比较和整理,将其中和搜索关键词相关的多个内容以至少一张卡片(简称卡片集)的形式整合到一起,让用户通过卡片集快速获取符合搜索意图的内容/所需内容,减少用户整理搜索结果的时间,提高搜索效率,提升用户体验感。
其中,该方法可以应用于网页搜索领域。网页搜索可以是基于用户输入的搜索关键词,使用特定的算法和策略从互联网搜索出和搜索关键词相关的搜索结果(和网页相关),然后对这些搜索结果进行排序和显示搜索结果列表(也可称为搜索结果集合,也可称为网页列表),搜索结果列表可以包括搜索结果的概要信息(可称为网页卡片)。用户可以操作任意一个网页卡片来查看对应网页的内容页(用于展示该网页的详细内容)。不限于此,该方法还可以应用于其他搜索领域,即上述搜索结果可以不和网页相关,例如和商品相关,本申请对此不作限定。为了方便描述,本申请以搜索结果和网页相关(后续可简称为搜索结果为网页)为例进行说明。
本申请中的触摸操作可以但不限于包括:单击、双击、长按、单指长按、多指长按、单指滑动、多指滑动、指关节滑动等多种形式。其中,滑动形式的触摸操作可以简称为滑动操作,滑动操作例如但不限于为左右滑动、上下滑动、往第一特定位置滑动等,本申请对滑动操作的轨迹不作限定。在一些实施方式中,该触摸操作可以作用于电子设备上的第二特定位置。上述特定位置可以位于电子设备的显示屏上,例如图标等控件所在的位置或显示屏的边缘等,或者,该特定位置也可以位于电子设备的侧边、背面等其他位置,例如音量键、电源键等按键的位置。其中,上述特定位置为电子设备预设的,或者,该特定位置为电子设备响应于用户操作确定的。
本申请中的拖动操作属于触摸操作,作用于某一控件的拖动操作可以是保持触摸该控件并进行滑动的操作,例如但不限于包括单指拖动、多指拖动、指关节拖动等多种形式。和滑动操作类似,拖动操作例如但不限于为左右拖动、上下拖动、往特定位置拖动等,本申请对拖动操作的轨迹不作限定。
本申请中的截图操作可以用于选中电子设备的显示屏上的任一区域中的显示内容,并将该显示内容以图片形式保存到电子设备中。该显示内容可以包括至少一种类型的内容,例如文本内容、图片内容和视频内容。截图操作可以但不限于为触摸操作、语音输入、运动姿态(例如手势)、脑电波等,例如,该截图操作为指关节滑动,或者,该截图操作为同时作用于电子设备的电源键和音量键的触摸操作。
本申请中的卡片集可以包括至少一个卡片。本申请中的卡片可以是以卡片的形式显示的控件,在具体实现中,还可以是以悬浮框等其他形式显示,本申请对卡片的具体显示形式不作限定。一方面,卡片可以用于显示网页内容,以提供给用户浏览,另一方面,电子设备可以接收针对该卡片的用户操作,以实现查看网页内容所属的原始网页、移动、编辑、删除和添加等多种操作。
本申请中的网页卡片可以是以卡片的形式显示搜索结果(网页)的概要信息的控件,在具体实现中,还可以是以文本框等其他形式显示,本申请对此不作限定。网页的概要信息例如但不限于为网页中显示在最上面的网页内容,网页中包括搜索关键词的内容等。
下面介绍本申请实施例涉及的一种搜索***10。
图1A示例性示出了一种搜索***10的架构示意图。
如图1A所示,搜索***10可以包括电子设备100和网络设备200。电子设备100可以 通过有线(例如,通用串行总线(universal serial bus,USB)、双绞线、同轴电缆和光纤等)和/或无线(例如,无线局域网(wireless local area networks,WLAN)、蓝牙和蜂窝通信网络等)的方式和网络设备200进行通信。
其中,电子设备100可以是手机、平板电脑、手持计算机、桌面型计算机、膝上型计算机、超级移动个人计算机(ultra-mobile personal computer,UMPC)、上网本、蜂窝电话、个人数字助理(personal digital assistant,PDA),以及智能电视、投影仪等智能家居设备,智能手环、智能手表、智能眼镜等可穿戴设备,增强现实(augmented reality,AR)、虚拟现实(virtual reality,VR)、混合现实(mixed reality,MR)等扩展现实(extended reality,XR)设备或车载设备,本申请实施例对电子设备的具体类型不作特殊限制。在一种实施方式中,电子设备100可以支持多种应用程序,例如但不限于包括:摄像、图像管理、图像处理、文字处理、电话、电子邮件、即时消息、网络通讯、媒体播放、定位和时间管理等应用程序。
其中,网络设备200可以包括至少一个服务器,在一种实施方式中,任意一个服务器可以为硬件服务器,在一种实施方式中,任意一个服务器可以为云服务器。在一种实施方式中,网络设备200可以是Linux服务器、Windows服务器或其他可以提供多设备同时接入的服务器设备。在一种实施方式中,网络设备200可以是多地域、多机房、多服务器所组成的服务器集群。在一种实施方式中,网络设备200可以支持消息存储及分发功能、多用户接入管理功能、大规模数据存储功能、大规模数据处理功能、数据冗余备份功能等。
在一种实施方式中,电子设备100可以与网络设备200基于浏览器/服务器(browser/server,B/S)架构通信,也可以基于客户端/服务器(client/server,C/S)架构通信。电子设备100可以接收用户输入的关键词,向网络设备200请求获取和该关键词相关的搜索结果。电子设备100可以显示从网络设备200处获取到的搜索结果,例如显示网页列表。
图1B示例性示出了一种搜索***10的交互示意图。
如图1B所示,搜索***10中的电子设备100可以包括输入模块101、处理模块102、通信模块103、输出模块104、存储模块105和快捷入口106。搜索***10中的网络设备200可以包括通信模块201和处理模块202,其中:
输入模块101用于接收用户输入的指令,例如,用户输入的搜索关键词,用户输入的卡片集获取请求,或者用户在网页内容页中选中的内容(简称选中内容)。处理模块102用于电子设备100进行判断、分析、运算等动作,并向其他模块发送指令,以协同各个模块有序执行相应程序,例如执行下图28和图30所示的方法。通信模块103用于电子设备100和网络设备200(通过通信模块201实现)之间进行信息传输。输出模块104用于向用户输出信息,例如通过显示屏向用户显示搜索结果或卡片集。
在一种实施方式中,输入模块101接收到用户输入的搜索关键词后,可以向处理模块102中的搜索模块发送该搜索关键词,搜索模块可以通过通信模块103向网络设备的通信模块201发送该搜索关键词,通信模块201可以向网络设备的处理模块202发送包括搜索关键词的搜索请求。处理模块202可以基于该搜索请求获取搜索结果(即和搜索关键词相关的多个网页),并通过通信模块201向通信模块103发送该搜索结果,通信模块103可以向处理模块102中的搜索模块发送该搜索结果,搜索模块可以向输出模块104发送该搜索结果。输出模块104可以输出该搜索结果,例如显示网页列表。
在一种实施方式中,输入模块101接收到用户输入的卡片集获取请求后,可以向处理模块102中的提取模块发送该卡片集获取请求。提取模块接收到该卡片集获取请求后,可以从 搜索模块处获取上述搜索结果,并从搜索结果中提取出至少一个网页的内容,例如,从搜索得到的多个网页中提取出排列在前N位的网页的内容(N为正整数),或者,提取出用户从搜索得到的多个网页中选择的网页的内容。然后,提取模块可以向处理模块102中的语义匹配模块发送这至少一个网页的内容,语义匹配模块可以从这至少一个网页的内容中筛选出和搜索关键词相关的内容。语义匹配模块可以向处理模块102中的卡片生成模块发送上述和搜索关键词相关的内容,卡片生成模块可以基于这些内容生成卡片集,例如,卡片集中的不同卡片包括不同类型的内容,具体存在:包括文本的卡片、包括静态图片的卡片、包括动态图片的卡片、包括视频的卡片、包括音频的卡片等。卡片生成模块可以将生成的卡片集发送给输出模块104,输出模块104可以输出该卡片集,例如显示卡片集。
在一种实施方式中,输入模块101接收到用户在网页内容页中选中的内容后,可以向处理模块102中的关联内容获取模块发送该选中内容。关联内容获取模块可以从该网页内容页中获取和该选中内容相关的内容(可称为选中内容对应的关联内容),并将选中内容和关联内容发送给卡片生成模块。卡片生成模块可以基于选中内容和关联内容生成至少一张卡片,例如,该卡片包括选中内容和对应的关联内容。这至少一张卡片可以属于上述卡片集,例如,输出模块104可以在卡片集中增加显示这至少一张卡片。
处理模块102生成的卡片集可以发送至存储模块105中存储。在一种实施方式中,存储模块105存储的卡片集信息可以包括但不限于以下至少一项:卡片集的标识信息,搜索关键词,卡片集包括的卡片数量,卡片在卡片集中的显示位置,卡片名称,卡片包括的文本、图片、视频等网页内容,网页内容在卡片中的显示位置,以及卡片集中的网页内容所属的网页的地址信息等。
快捷入口106可以用于显示二次加载卡片集的入口,例如,输出模块104输出卡片集之后,快捷入口106可以在搜索应用的收藏夹、电子设备100的桌面、负一屏应用的用户界面、图库应用的用户界面等位置显示二次加载卡片集的入口。其中,搜索应用可以提供搜索功能(接收搜索关键词,并提供和搜索关键词相关的搜索结果),以及显示卡片集的功能。在一种实施方式中,快捷入口106可以从存储模块105处获取卡片集包括的部分或全部信息,并基于获取的信息显示二次加载卡片集的入口,例如,在二次加载卡片集的入口上显示搜索关键词。在一种实施方式中,快捷入口106接收到用于加载入口1对应的卡片集1的指令后,可以向存储模块105中发送卡片集1的标识信息,存储模块105接收到卡片集1的标识信息后,可以获取该标识信息对应的卡片集1的全部信息,并发送给输出模块104,输出模块104可以根据接收到的卡片集1的信息输出卡片集1,例如显示卡片集1。
在一种实施方式中,上述输入模块101、处理模块102、输出模块104和存储模块105中至少一个模块可以属于电子设备100的搜索应用,其中,搜索应用可以提供搜索功能,例如,用户可以打开电子设备100中的搜索应用,并基于搜索应用的用户界面输入搜索关键词,电子设备100可以通过搜索应用的用户界面显示和搜索关键词相关的搜索结果。
在一种实施方式中,存储模块105可以是电子设备100的存储器,例如下图2A所示的内部存储器121。
在一种实施方式中,快捷入口106可以是应用程序提供的微件等控件,例如负一屏应用提供的卡片形式的微件。
在一种实施方式中,输出模块104可以是应用程序提供的显示组件,例如,提供输出模块104的应用程序和提供快捷入口106的应用程序为同一个应用程序,快捷入口106接收到用于加载卡片集的指令后,可以通过属于同一个应用程序的输出模块104输出该卡片集。
接下来介绍本申请实施例提供的示例性的电子设备100。
图2A示例性示出了一种电子设备100的硬件结构示意图。
应理解的是,图2A所示电子设备100仅是一个范例,并且电子设备100可以具有比图2A中所示的更多的或者更少的部件,可以组合两个或多个的部件,或者可以具有不同的部件配置。图2A中所示出的各种部件可以在包括一个或多个信号处理和/或专用集成电路在内的硬件、软件、或硬件和软件的组合中实现。
如图2A所示,电子设备100可以包括处理器110,外部存储器接口120,内部存储器121,通用串行总线(universal serial bus,USB)接口130,充电管理模块140,电源管理模块141,电池142,天线1,天线2,移动通信模块150,无线通信模块160,音频模块170,扬声器170A,受话器170B,麦克风170C,耳机接口170D,传感器模块180,按键190,马达191,指示器192,摄像头193,显示屏194,以及用户标识模块(subscriber identification module,SIM)卡接口195等。其中传感器模块180可以包括压力传感器180A,陀螺仪传感器180B,气压传感器180C,磁传感器180D,加速度传感器180E,距离传感器180F,接近光传感器180G,指纹传感器180H,温度传感器180J,触摸传感器180K,环境光传感器180L,骨传导传感器180M等。
可以理解的是,本发明实施例示意的结构并不构成对电子设备100的具体限定。在本申请另一些实施例中,电子设备100可以包括比图示更多或更少的部件,或者组合某些部件,或者拆分某些部件,或者不同的部件布置。图示的部件可以以硬件,软件或软件和硬件的组合实现。
处理器110可以包括一个或多个处理单元,例如:处理器110可以包括应用处理器(application processor,AP),调制解调处理器,图形处理器(graphics processing unit,GPU),图像信号处理器(image signal processor,ISP),控制器,视频编解码器,数字信号处理器(digital signal processor,DSP),基带处理器,和/或神经网络处理器(neural-network processing unit,NPU)等。其中,不同的处理单元可以是独立的器件,也可以集成在一个或多个处理器中。
控制器可以根据指令操作码和时序信号,产生操作控制信号,完成取指令和执行指令的控制。
处理器110中还可以设置存储器,用于存储指令和数据。在一种实施方式中,处理器110中的存储器为高速缓冲存储器。该存储器可以保存处理器110刚用过或循环使用的指令或数据。如果处理器110需要再次使用该指令或数据,可从所述存储器中直接调用。避免了重复存取,减少了处理器110的等待时间,因而提高了***的效率。
在一种实施方式中,处理器110可以包括一个或多个接口。接口可以包括集成电路(inter-integrated circuit,I2C)接口,集成电路内置音频(inter-integrated circuit sound,I2S)接口,脉冲编码调制(pulse code modulation,PCM)接口,通用异步收发传输器(universal asynchronous receiver/transmitter,UART)接口,移动产业处理器接口(mobile industry processor interface,MIPI),通用输入输出(general-purpose input/output,GPIO)接口,用户标识模块(subscriber identity module,SIM)接口,和/或通用串行总线(universal serial bus,USB)接口等。
I2C接口是一种双向同步串行总线,包括一根串行数据线(serial data line,SDA)和一根串行时钟线(derail clock line,SCL)。在一种实施方式中,处理器110可以包含多组I2C总线。处理器110可以通过不同的I2C总线接口分别耦合触摸传感器180K,充电器,闪光灯,摄像头193等。例如:处理器110可以通过I2C接口耦合触摸传感器180K,使处理器110与触摸 传感器180K通过I2C总线接口通信,实现电子设备100的触摸功能。
I2S接口可以用于音频通信。在一种实施方式中,处理器110可以包含多组I2S总线。处理器110可以通过I2S总线与音频模块170耦合,实现处理器110与音频模块170之间的通信。在一种实施方式中,音频模块170可以通过I2S接口向无线通信模块160传递音频信号,实现通过蓝牙耳机接听电话的功能。
PCM接口也可以用于音频通信,将模拟信号抽样,量化和编码。在一种实施方式中,音频模块170与无线通信模块160可以通过PCM总线接口耦合。在一种实施方式中,音频模块170也可以通过PCM接口向无线通信模块160传递音频信号,实现通过蓝牙耳机接听电话的功能。所述I2S接口和所述PCM接口都可以用于音频通信。
UART接口是一种通用串行数据总线,用于异步通信。该总线可以为双向通信总线。它将要传输的数据在串行通信与并行通信之间转换。在一种实施方式中,UART接口通常被用于连接处理器110与无线通信模块160。例如:处理器110通过UART接口与无线通信模块160中的蓝牙模块通信,实现蓝牙功能。在一种实施方式中,音频模块170可以通过UART接口向无线通信模块160传递音频信号,实现通过蓝牙耳机播放音乐的功能。
MIPI接口可以被用于连接处理器110与显示屏194,摄像头193等***器件。MIPI接口包括摄像头串行接口(camera serial interface,CSI),显示屏串行接口(display serial interface,DSI)等。在一种实施方式中,处理器110和摄像头193通过CSI接口通信,实现电子设备100的拍摄功能。处理器110和显示屏194通过DSI接口通信,实现电子设备100的显示功能。
GPIO接口可以通过软件配置。GPIO接口可以被配置为控制信号,也可被配置为数据信号。在一种实施方式中,GPIO接口可以用于连接处理器110与摄像头193,显示屏194,无线通信模块160,音频模块170,传感器模块180等。GPIO接口还可以被配置为I2C接口,I2S接口,UART接口,MIPI接口等。
USB接口130是符合USB标准规范的接口,具体可以是Mini USB接口,Micro USB接口,USB Type C接口等。USB接口130可以用于连接充电器为电子设备100充电,也可以用于电子设备100与***设备之间传输数据。也可以用于连接耳机,通过耳机播放音频。该接口还可以用于连接其他电子设备,例如AR设备等。
可以理解的是,本发明实施例示意的各模块间的接口连接关系,只是示意性说明,并不构成对电子设备100的结构限定。在本申请另一些实施例中,电子设备100也可以采用上述实施例中不同的接口连接方式,或多种接口连接方式的组合。
充电管理模块140用于从充电器接收充电输入。其中,充电器可以是无线充电器,也可以是有线充电器。在一些有线充电的实施例中,充电管理模块140可以通过USB接口130接收有线充电器的充电输入。在一些无线充电的实施例中,充电管理模块140可以通过电子设备100的无线充电线圈接收无线充电输入。充电管理模块140为电池142充电的同时,还可以通过电源管理模块141为电子设备供电。
电源管理模块141用于连接电池142,充电管理模块140与处理器110。电源管理模块141接收电池142和/或充电管理模块140的输入,为处理器110,内部存储器121,显示屏194,摄像头193,和无线通信模块160等供电。电源管理模块141还可以用于监测电池容量,电池循环次数,电池健康状态(漏电,阻抗)等参数。在其他一些实施例中,电源管理模块141也可以设置于处理器110中。在另一种实施方式中,电源管理模块141和充电管理模块140也可以设置于同一个器件中。
电子设备100的无线通信功能可以通过天线1,天线2,移动通信模块150,无线通信模 块160,调制解调处理器以及基带处理器等实现。在一种实施方式中,电子设备100可以通过无线通信功能和网络设备200进行通信,例如向网络设备200发送搜索请求。
天线1和天线2用于发射和接收电磁波信号。电子设备100中的每个天线可用于覆盖单个或多个通信频带。不同的天线还可以复用,以提高天线的利用率。例如:可以将天线1复用为无线局域网的分集天线。在另外一些实施例中,天线可以和调谐开关结合使用。
移动通信模块150可以提供应用在电子设备100上的包括2G/3G/4G/5G等无线通信的解决方案。移动通信模块150可以包括至少一个滤波器,开关,功率放大器,低噪声放大器(low noise amplifier,LNA)等。移动通信模块150可以由天线1接收电磁波,并对接收的电磁波进行滤波,放大等处理,传送至调制解调处理器进行解调。移动通信模块150还可以对经调制解调处理器调制后的信号放大,经天线1转为电磁波辐射出去。在一种实施方式中,移动通信模块150的至少部分功能模块可以被设置于处理器110中。在一种实施方式中,移动通信模块150的至少部分功能模块可以与处理器110的至少部分模块被设置在同一个器件中。
调制解调处理器可以包括调制器和解调器。其中,调制器用于将待发送的低频基带信号调制成中高频信号。解调器用于将接收的电磁波信号解调为低频基带信号。随后解调器将解调得到的低频基带信号传送至基带处理器处理。低频基带信号经基带处理器处理后,被传递给应用处理器。应用处理器通过音频设备(不限于扬声器170A,受话器170B等)输出声音信号,或通过显示屏194显示图像或视频。在一种实施方式中,调制解调处理器可以是独立的器件。在另一种实施方式中,调制解调处理器可以独立于处理器110,与移动通信模块150或其他功能模块设置在同一个器件中。
无线通信模块160可以提供应用在电子设备100上的包括无线局域网(wireless local area networks,WLAN)(如无线保真(wireless fidelity,Wi-Fi)网络),蓝牙(bluetooth,BT),全球导航卫星***(global navigation satellite system,GNSS),调频(frequency modulation,FM),近距离无线通信技术(near field communication,NFC),红外技术(infrared,IR)等无线通信的解决方案。无线通信模块160可以是集成至少一个通信处理模块的一个或多个器件。无线通信模块160经由天线2接收电磁波,将电磁波信号调频以及滤波处理,将处理后的信号发送到处理器110。无线通信模块160还可以从处理器110接收待发送的信号,对其进行调频,放大,经天线2转为电磁波辐射出去。
在一种实施方式中,电子设备100的天线1和移动通信模块150耦合,天线2和无线通信模块160耦合,使得电子设备100可以通过无线通信技术与网络以及其他设备通信。所述无线通信技术可以包括全球移动通讯***(global system for mobile communications,GSM),通用分组无线服务(general packet radio service,GPRS),码分多址接入(code division multiple access,CDMA),宽带码分多址(wideband code division multiple access,WCDMA),时分码分多址(time-division code division multiple access,TD-SCDMA),长期演进(long term evolution,LTE),BT,GNSS,WLAN,NFC,FM,和/或IR技术等。所述GNSS可以包括全球卫星定位***(global positioning system,GPS),全球导航卫星***(global navigation satellite system,GLONASS),北斗卫星导航***(beidou navigation satellite system,BDS),准天顶卫星***(quasi-zenith satellite system,QZSS)和/或星基增强***(satellite based augmentation systems,SBAS)。
电子设备100通过GPU,显示屏194,以及应用处理器等实现显示功能。在一种实施方式中,电子设备100可以通过显示功能显示语义汇聚卡片集(包括符合搜索意图的网页内容),以及二次加载语义汇聚卡片集的入口控件。
GPU为图像处理的微处理器,连接显示屏194和应用处理器。GPU用于执行数学和几何计算,用于图形渲染。处理器110可包括一个或多个GPU,其执行程序指令以生成或改变显示信息。显示屏194用于显示图像,视频等。显示屏194包括显示面板。显示面板可以采用液晶显示屏(liquid crystal display,LCD),有机发光二极管(organic light-emitting diode,OLED),有源矩阵有机发光二极体或主动矩阵有机发光二极体(active-matrix organic light emitting diode的,AMOLED),柔性发光二极管(flex light-emitting diode,FLED),Miniled,MicroLed,Micro-oLed,量子点发光二极管(quantum dot light emitting diodes,QLED)等。在一种实施方式中,电子设备100可以包括1个或N个显示屏194,N为大于1的正整数。
电子设备100可以通过ISP,摄像头193,视频编解码器,GPU,显示屏194以及应用处理器等实现拍摄功能。
ISP用于处理摄像头193反馈的数据。例如,拍照时,打开快门,光线通过镜头被传递到摄像头感光元件上,光信号转换为电信号,摄像头感光元件将所述电信号传递给ISP处理,转化为肉眼可见的图像。ISP还可以对图像的噪点,亮度等进行算法优化。ISP还可以对拍摄场景的曝光,色温等参数优化。在一种实施方式中,ISP可以设置在摄像头193中。
摄像头193用于捕获静态图像或视频。物体通过镜头生成光学图像投射到感光元件。感光元件可以是电荷耦合器件(charge coupled device,CCD)或互补金属氧化物半导体(complementary metal-oxide-semiconductor,CMOS)光电晶体管。感光元件把光信号转换成电信号,之后将电信号传递给ISP转换成数字图像信号。ISP将数字图像信号输出到DSP加工处理。DSP将数字图像信号转换成标准的RGB,YUV等格式的图像信号。在一种实施方式中,电子设备100可以包括1个或N个摄像头193,N为大于1的正整数。
数字信号处理器用于处理数字信号,除了可以处理数字图像信号,还可以处理其他数字信号。例如,当电子设备100在频点选择时,数字信号处理器用于对频点能量进行傅里叶变换等。
视频编解码器用于对数字视频压缩或解压缩。电子设备100可以支持一种或多种视频编解码器。这样,电子设备100可以播放或录制多种编码格式的视频,例如:动态图像专家组(moving picture experts group,MPEG)1,MPEG2,MPEG3,MPEG4等。
NPU为神经网络(neural-network,NN)计算处理器,通过借鉴生物神经网络结构,例如借鉴人脑神经元之间传递模式,对输入信息快速处理,还可以不断的自学习。通过NPU可以实现电子设备100的智能认知等应用,例如:图像识别,人脸识别,语音识别,文本理解等。
外部存储器接口120可以用于连接外部存储卡,例如Micro SD卡,实现扩展电子设备100的存储能力。外部存储卡通过外部存储器接口120与处理器110通信,实现数据存储功能。例如将音乐,视频等文件保存在外部存储卡中。
内部存储器121可以用于存储计算机可执行程序代码,所述可执行程序代码包括指令。内部存储器121可以包括存储程序区和存储数据区。其中,存储程序区可存储操作***,至少一个功能所需的应用程序(比如声音播放功能,图像播放功能等)等。存储数据区可存储电子设备100使用过程中所创建的数据(比如音频数据,电话本等)等。此外,内部存储器121可以包括高速随机存取存储器,还可以包括非易失性存储器,例如至少一个磁盘存储器件,闪存器件,通用闪存存储器(universal flash storage,UFS)等。处理器110通过运行存储在内部存储器121的指令,和/或存储在设置于处理器中的存储器的指令,执行电子设备100的各种功能应用以及数据处理。
电子设备100可以通过音频模块170,扬声器170A,受话器170B,麦克风170C,耳机 接口170D,以及应用处理器等实现音频功能,例如播放语义汇聚卡片集包括的音频内容等。
音频模块170用于将数字音频信息转换成模拟音频信号输出,也用于将模拟音频输入转换为数字音频信号。音频模块170还可以用于对音频信号编码和解码。在一种实施方式中,音频模块170可以设置于处理器110中,或将音频模块170的部分功能模块设置于处理器110中。
扬声器170A,也称“喇叭”,用于将音频电信号转换为声音信号。电子设备100可以通过扬声器170A收听音乐,或收听免提通话。
受话器170B,也称“听筒”,用于将音频电信号转换成声音信号。当电子设备100接听电话或语音信息时,可以通过将受话器170B靠近人耳接听语音。
麦克风170C,也称“话筒”,“传声器”,用于将声音信号转换为电信号。当拨打电话或发送语音信息时,用户可以通过人嘴靠近麦克风170C发声,将声音信号输入到麦克风170C。电子设备100可以设置至少一个麦克风170C。在另一种实施方式中,电子设备100可以设置两个麦克风170C,除了采集声音信号,还可以实现降噪功能。在另一种实施方式中,电子设备100还可以设置三个,四个或更多麦克风170C,实现采集声音信号,降噪,还可以识别声音来源,实现定向录音功能等。
耳机接口170D用于连接有线耳机。耳机接口170D可以是USB接口130,也可以是3.5mm的开放移动电子设备平台(open mobile terminal platform,OMTP)标准接口,美国蜂窝电信工业协会(cellular telecommunications industry association of the USA,CTIA)标准接口。
压力传感器180A用于感受压力信号,可以将压力信号转换成电信号。在一种实施方式中,压力传感器180A可以设置于显示屏194。压力传感器180A的种类很多,如电阻式压力传感器,电感式压力传感器,电容式压力传感器等。电容式压力传感器可以是包括至少两个具有导电材料的平行板。当有力作用于压力传感器180A,电极之间的电容改变。电子设备100根据电容的变化确定压力的强度。当有触摸操作作用于显示屏194,电子设备100根据压力传感器180A检测所述触摸操作强度。电子设备100也可以根据压力传感器180A的检测信号计算触摸的位置。在一种实施方式中,作用于相同触摸位置,但不同触摸操作强度的触摸操作,可以对应不同的操作指令。例如:当有触摸操作强度小于第一压力阈值的触摸操作作用于短消息应用图标时,执行查看短消息的指令。当有触摸操作强度大于或等于第一压力阈值的触摸操作作用于短消息应用图标时,执行新建短消息的指令。
陀螺仪传感器180B可以用于确定电子设备100的运动姿态。在一种实施方式中,可以通过陀螺仪传感器180B确定电子设备100围绕三个轴(即,x,y和z轴)的角速度。陀螺仪传感器180B可以用于拍摄防抖。示例性的,当按下快门,陀螺仪传感器180B检测电子设备100抖动的角度,根据角度计算出镜头模组需要补偿的距离,让镜头通过反向运动抵消电子设备100的抖动,实现防抖。陀螺仪传感器180B还可以用于导航,体感游戏场景。
气压传感器180C用于测量气压。在一种实施方式中,电子设备100通过气压传感器180C测得的气压值计算海拔高度,辅助定位和导航。
磁传感器180D包括霍尔传感器。电子设备100可以利用磁传感器180D检测翻盖皮套的开合。在一种实施方式中,当电子设备100是翻盖机时,电子设备100可以根据磁传感器180D检测翻盖的开合。进而根据检测到的皮套的开合状态或翻盖的开合状态,设置翻盖自动解锁等特性。
加速度传感器180E可检测电子设备100在各个方向上(一般为三轴)加速度的大小。当电子设备100静止时可检测出重力的大小及方向。还可以用于识别电子设备姿态,应用于横竖 屏切换,计步器等应用。
距离传感器180F,用于测量距离。电子设备100可以通过红外或激光测量距离。在一种实施方式中,拍摄场景,电子设备100可以利用距离传感器180F测距以实现快速对焦。
接近光传感器180G可以包括例如发光二极管(LED)和光检测器,例如光电二极管。发光二极管可以是红外发光二极管。电子设备100通过发光二极管向外发射红外光。电子设备100使用光电二极管检测来自附近物体的红外反射光。当检测到充分的反射光时,可以确定电子设备100附近有物体。当检测到不充分的反射光时,电子设备100可以确定电子设备100附近没有物体。电子设备100可以利用接近光传感器180G检测用户手持电子设备100贴近耳朵通话,以便自动熄灭屏幕达到省电的目的。接近光传感器180G也可用于皮套模式,口袋模式自动解锁与锁屏。
环境光传感器180L用于感知环境光亮度。电子设备100可以根据感知的环境光亮度自适应调节显示屏194亮度。环境光传感器180L也可用于拍照时自动调节白平衡。环境光传感器180L还可以与接近光传感器180G配合,检测电子设备100是否在口袋里,以防误触。
指纹传感器180H用于采集指纹。电子设备100可以利用采集的指纹特性实现指纹解锁,访问应用锁,指纹拍照,指纹接听来电等。
温度传感器180J用于检测温度。在一种实施方式中,电子设备100利用温度传感器180J检测的温度,执行温度处理策略。例如,当温度传感器180J上报的温度超过阈值,电子设备100执行降低位于温度传感器180J附近的处理器的性能,以便降低功耗实施热保护。在另一种实施方式中,当温度低于另一阈值时,电子设备100对电池142加热,以避免低温导致电子设备100异常关机。在其他一些实施例中,当温度低于又一阈值时,电子设备100对电池142的输出电压执行升压,以避免低温导致的异常关机。
触摸传感器180K,也称“触控器件”。触摸传感器180K可以设置于显示屏194,由触摸传感器180K与显示屏194组成触摸屏,也称“触控屏”。触摸传感器180K用于检测作用于其上或附近的触摸操作。触摸传感器可以将检测到的触摸操作传递给应用处理器,以确定触摸事件类型。可以通过显示屏194提供与触摸操作相关的视觉输出。在另一种实施方式中,触摸传感器180K也可以设置于电子设备100的表面,与显示屏194所处的位置不同。
骨传导传感器180M可以获取振动信号。在一种实施方式中,骨传导传感器180M可以获取人体声部振动骨块的振动信号。骨传导传感器180M也可以接触人体脉搏,接收血压跳动信号。在一种实施方式中,骨传导传感器180M也可以设置于耳机中,结合成骨传导耳机。音频模块170可以基于所述骨传导传感器180M获取的声部振动骨块的振动信号,解析出语音信号,实现语音功能。应用处理器可以基于所述骨传导传感器180M获取的血压跳动信号解析心率信息,实现心率检测功能。
按键190包括开机键,音量键等。按键190可以是机械按键。也可以是触摸式按键。电子设备100可以接收按键输入,产生与电子设备100的用户设置以及功能控制有关的键信号输入。
马达191可以产生振动提示。马达191可以用于来电振动提示,也可以用于触摸振动反馈。例如,作用于不同应用(例如拍照,音频播放等)的触摸操作,可以对应不同的振动反馈效果。作用于显示屏194不同区域的触摸操作,马达191也可对应不同的振动反馈效果。不同的应用场景(例如:时间提醒,接收信息,闹钟,游戏等)也可以对应不同的振动反馈效果。触摸振动反馈效果还可以支持自定义。
指示器192可以是指示灯,可以用于指示充电状态,电量变化,也可以用于指示消息, 未接来电,通知等。
SIM卡接口195用于连接SIM卡。SIM卡可以通过***SIM卡接口195,或从SIM卡接口195拔出,实现和电子设备100的接触和分离。电子设备100可以支持1个或N个SIM卡接口,N为大于1的正整数。SIM卡接口195可以支持Nano SIM卡,Micro SIM卡,SIM卡等。同一个SIM卡接口195可以同时***多张卡。所述多张卡的类型可以相同,也可以不同。SIM卡接口195也可以兼容不同类型的SIM卡。SIM卡接口195也可以兼容外部存储卡。电子设备100通过SIM卡和网络交互,实现通话以及数据通信等功能。在一种实施方式中,电子设备100采用eSIM,即:嵌入式SIM卡。eSIM卡可以嵌在电子设备100中,不能和电子设备100分离。
电子设备100的软件***可以采用分层架构,事件驱动架构,微核架构,微服务架构,或云架构。例如,分层架构的软件***可以是安卓(Android)***,也可以是鸿蒙(harmony)操作***(operating system,OS),或其它软件***。本申请实施例以分层架构的Android***为例,示例性说明电子设备100的软件结构。
图2B示例性示出一种电子设备100的软件架构示意图。
分层架构将软件分成若干个层,每一层都有清晰的角色和分工。层与层之间通过软件接口通信。在一种实施方式中,将Android***分为四层,从上至下分别为应用程序层,应用程序框架层,安卓运行时(Android runtime)和***库,以及内核层。
应用程序层可以包括一系列应用程序包。
如图2B所示,应用程序包可以包括相机,日历,音乐,导航,短信息,搜索,图库,负一屏和浏览器等应用程序。其中,搜索应用可以提供搜索功能。搜索可以为独立的应用程序,也可以是浏览器等其他应用程序封装的功能组件,本申请对此不作限定。本申请中,应用程序包也可以替换为小程序等其他形式的软件。以下实施例以浏览器应用集成了搜索的功能组件为例进行说明。
应用程序框架层为应用程序层的应用程序提供应用编程接口(application programming interface,API)和编程框架。应用程序框架层包括一些预先定义的函数。
如图2B所示,应用程序框架层可以包括窗口管理器,内容提供器,视图***,电话管理器,资源管理器,通知管理器等。
窗口管理器用于管理窗口程序。窗口管理器可以获取显示屏大小,判断是否有状态栏,锁定屏幕,截取屏幕等。
内容提供器用来存放和获取数据,并使这些数据可以被应用程序访问。所述数据可以包括视频,图像,音频,拨打和接听的电话,浏览历史和书签,电话簿等。
视图***包括可视控件,例如显示文字的控件,显示图片的控件等。视图***可用于构建应用程序。显示界面可以由一个或多个视图组成的。例如,包括短信通知图标的显示界面,可以包括显示文字的视图以及显示图片的视图。
电话管理器用于提供电子设备100的通信功能。例如通话状态的管理(包括接通,挂断等)。
资源管理器为应用程序提供各种资源,比如本地化字符串,图标,图片,布局文件,视频文件等等。
通知管理器使应用程序可以在状态栏中显示通知信息,可以用于传达告知类型的消息,可以短暂停留后自动消失,无需用户交互。比如通知管理器被用于告知下载完成,消息提醒等。通知管理器还可以是以图表或者滚动条文本形式出现在***顶部状态栏的通知,例如后 台运行的应用程序的通知,还可以是以对话窗口形式出现在屏幕上的通知。例如在状态栏提示文本信息,发出提示音,电子设备振动,指示灯闪烁等。
Android Runtime包括核心库和虚拟机。Android runtime负责安卓***的调度和管理。
核心库包含两部分:一部分是java语言需要调用的功能函数,另一部分是安卓的核心库。
应用程序层和应用程序框架层运行在虚拟机中。虚拟机将应用程序层和应用程序框架层的java文件执行为二进制文件。虚拟机用于执行对象生命周期的管理,堆栈管理,线程管理,安全和异常的管理,以及垃圾回收等功能。
***库可以包括多个功能模块。例如:表面管理器(surface manager),媒体库(Media Libraries),三维图形处理库(例如:OpenGL ES),2D图形引擎(例如:SGL)等。
表面管理器用于对显示子***进行管理,并且为多个应用程序提供了2D和3D图层的融合。
媒体库支持多种常用的音频,视频格式回放和录制,以及静态图像文件等。媒体库可以支持多种音视频编码格式,例如:MPEG4,H.264,MP3,AAC,AMR,JPG,PNG等。
三维图形处理库用于实现三维图形绘图,图像渲染,合成,和图层处理等。
2D图形引擎是2D绘图的绘图引擎。
内核层是硬件和软件之间的层。内核层至少包含显示驱动,摄像头驱动,音频驱动,传感器驱动。
下面结合网页搜索场景,示例性说明电子设备100软件以及硬件的工作流程。
当触摸传感器180K接收到触摸操作,相应的硬件中断被发给内核层。内核层将触摸操作加工成原始输入事件(包括触摸坐标,触摸操作的时间戳等信息)。原始输入事件被存储在内核层。应用程序框架层从内核层获取原始输入事件,识别该输入事件所对应的控件。以该触摸操作是触摸单击操作,该单击操作所对应的控件为浏览器应用的搜索控件为例,浏览器调用应用框架层的接口,基于用户在浏览器的搜索框控件中输入的关键词进行搜索,进而通过调用内核层启动显示驱动,通过显示屏194显示搜索得到的网页列表。用户可以操作网页列表中的任意一个网页卡片来查看对应的网页的详细内容,以获取符合搜索意图的内容。
下面介绍本申请实施例涉及的应用场景以及该场景下的用户界面示意图。
图3示例性示出一种网页搜索过程的用户界面示意图。
如图3的(A)所示,电子设备100可以显示浏览器的用户界面310。用户界面310可以包括搜索栏311和搜索控件312,其中,搜索栏311用于接收用户输入的关键词,搜索控件312用于触发针对搜索栏311中的关键词的搜索操作。电子设备100可以接收用户在搜索栏311中输入的关键词“西安旅游路线”,然后接收针对搜索控件312的触摸操作(例如点击操作),响应于该触摸操作,搜索得到和上述关键词相关的至少一个网页,并对这至少一个网页进行排序。电子设备100可以显示排序后的网页,具体可参见图3的(B)所示的用户界面320。
如图3的(B)所示,用户界面320可以包括搜索栏321和网页列表。搜索栏321中显示有当前搜索的关键词“西安旅游路线”。用户界面320示出了网页列表中排列在前3位的网页的概要信息(可称为网页卡片),按照从上往下的顺序依次为网页卡片322、网页卡片323和网页卡片324,其中:网页卡片322用于指示标题为“西安旅游路线图”、网址为“网址111”和来源为“来源aaa”的网页1。网页卡片323用于指示标题为“西安旅游路线-攻略”、网址为“网址222”和来源为“来源bbb”的网页2。网页卡片324用于指示标题为“西安旅游全 攻略”、网址为“网址3333”和来源为“来源ccc”的网页3。用户界面320中位于底部的导航栏包括卡片控件325,卡片控件325用于触发显示语义汇聚卡片集,该语义汇聚卡片集例如包括上述网页1、网页2和网页3中和搜索关键词“西安旅游路线”相关的内容,不包括上述网页1、网页2和网页3中和搜索关键词“西安旅游路线”无关的内容。不限于通过图3的(B)所示的用户界面320中的卡片控件325触发显示语义汇聚卡片集,在另一些示例中,还可以通过用户输入的手势或语音来触发显示语义汇聚卡片集,本申请对触发显示语义汇聚卡片集的方式不作限定。
以下示例的触摸操作例如但不限于为:单击、双击、长按、单指长按、多指长按、单指滑动(包括单指拖动)、多指滑动(包括多指拖动)或指关节滑动等。
在一种实施方式中,在上图3所示的操作之后,电子设备100可以接收针对图3的(B)所示的用户界面320中的卡片控件325的触摸操作(例如点击操作),响应于该触摸操作,显示语义汇聚卡片集,具体示例可参见图4所示的用户界面。
可以理解地,一个网页可以包括多种类型的内容,例如文本、图片、视频、音频等类型的内容。在一种实施方式中,语义汇聚卡片集中的任意一张卡片可以仅包括一种类型的内容,例如一张卡片包括静态图片,另一张卡片包括动态图片。在另一种实施方式中,语义汇聚卡片集中的任意一张卡片可以包括多种不同类型的内容,例如,一张卡片包括文本和图片。
图4以语义汇聚卡片集包括三张卡片为例进行说明,这三张卡片分别包括文本、图片和视频类型的网页内容(可分别简称为文本卡片、图片卡片和视频卡片)。在具体实现中,语义汇聚卡片集可以包括更多或更少的卡片,例如,语义汇聚卡片集仅包括文本卡片,或者,语义汇聚卡片集仅包括图片卡片和视频卡片,本申请对此不作限定。
在一种实施方式中,语义汇聚卡片集包括的卡片是用户设置的,例如下图5B所示示例。在另一种实施方式中,语义汇聚卡片集包括的卡片是电子设备根据网页内容生成的,例如,上述网页1、网页2和网页3中不存在和搜索关键词“西安旅游路线”相关的文本内容,因此,语义汇聚卡片集仅包括图片卡片和视频卡片。
如图4的(A)所示,电子设备100可以显示浏览器的用户界面410。用户界面410用于显示语义汇聚卡片集中的文本卡片(用于显示文本类型的网页内容)。用户界面410可以包括标题411、文本卡片412、页面选项413、保存控件414和新增控件415。标题411用于显示当前搜索的关键词“西安旅游路线”。文本卡片412包括标题(“文本卡片”)和多个文本内容。按照和关键词“西安旅游路线”的相似度的从大到小,文本卡片412中显示的多个文本内容从上往下排列依次为:文本内容4121(“热门的旅游路线有…”)、文本内容4122(“旅游路线:西仓—回民街…”)和文本内容4123(“景点路线一:陕西历史博物馆—钟鼓楼...”),文本卡片412中的来源信息4121A用于指示文本内容4121是网址为“网址111”和来源为“来源aaa”的网页1包括的文本,文本卡片412中的来源信息4122A用于指示文本内容4122是网址为“网址333”和来源为“来源ccc”的网页3包括的文本,文本卡片412中的来源信息4123A用于指示文本内容4123是网址为“网址222”和来源为“来源bbb”的网页2包括的文本。页面选项413包括三个选项(表现形式为三个圆形),其中第一个选项为选中状态(表现形式为黑色的圆形),另外两个选项为未选中状态(表现形式为白色的圆形),可以表征语义汇聚卡片集包括三张卡片,以及文本卡片412是语义汇聚卡片集中的第一张卡片。保存控件414用于触发保存语义汇聚卡片集。新增控件415用于触发在语义汇聚卡片集中新建卡片。
电子设备100可以接收针对文本卡片412的滑动操作,例如,该滑动操作可以为上下滑动或者左右滑动等,图4以该滑动操作为从右向左滑动为例进行示意。电子设备100响应于 该滑动操作,显示图4的(B)所示的用户界面420。用户界面420和图4的(A)所示的用户界面410类似,区别在于,用户界面420用于显示语义汇聚卡片集中的图片卡片(用于显示图片类型的网页内容)。用户界面420中的图片卡片421包括标题(“图片卡片”)和多个图片内容,多个图片内容例如包括图片内容4211和图片内容4212,用户界面420中的来源信息4211A用于指示图片内容4211是网址为“网址111”和来源为“来源aaa”的网页1包括的图片,用户界面420中的来源信息4212A用于指示图片内容4212是网址为“网址222”和来源为“来源bbb”的网页2包括的图片。用户界面420中的页面选项413包括的第二个选项为选中状态,其他两个选项为未选中状态,可以表征图片卡片421是语义汇聚卡片集中的第二张卡片。
电子设备100可以接收针对图片卡片421的滑动操作,例如,该滑动操作可以为上下滑动或者左右滑动等,图4以该滑动操作为从右向左滑动为例进行示意。电子设备100响应于该滑动操作,显示图4的(C)所示的用户界面430。用户界面430和图4的(A)所示的用户界面410类似,区别在于,用户界面430用于显示语义汇聚卡片集中的视频卡片(用于显示视频类型的网页内容)。用户界面430中的视频卡片431包括标题(“视频卡片”)和多个视频内容,多个视频内容例如包括视频内容4311和视频内容4312,用户界面430中的来源信息4311A用于指示视频内容4311是网址为“网址222”和来源为“来源bbb”的网页2包括的视频,用户界面430中的来源信息4312A用于指示视频内容4312是网址为“网址111”和来源为“来源aaa”的网页1包括的视频。用户界面430中的页面选项432包括的第三个选项为选中状态,其他两个选项为未选中状态,可以表征视频卡片431是语义汇聚卡片集中的第三张卡片。
在一种实施方式中,语义汇聚卡片集可以包括搜索结果的全部内容或部分内容,例如,语义汇聚卡片集中的图片卡片包括搜索结果中的全部图片或者部分图片。例如,假设搜索结果为图3所述的网页1、网页2和网页3,图4的(A)所示的用户界面410中的文本卡片412可以包括这三个网页中的文本内容,图4的(B)所示的用户界面420中的图片卡片421仅包括网页1和网页2中的图片内容,不包括网页3中的图片内容,图4的(C)所示的用户界面430中的视频卡片431仅包括网页1和网页2中的视频内容,不包括网页3中的视频内容。在一些示例中,网页3中的图片内容和视频内容和搜索关键词的相关性较低,因此,图片卡片421不包括网页3中的图片内容,视频卡片431不包括网页3中的视频内容。
不限于上图4所示的语义汇聚卡片集,在另一些示例中,也可以不显示语义汇聚卡片集中的网页内容所属的网页的信息,例如,图4的(A)所示的用户界面410中的文本卡片412不包括:指示文本内容4121所属的网页1的来源信息4121A,本申请对语义汇聚卡片集的具体显示方式不作限定。
不限于上述示例的情况,在另一些示例中,语义汇聚卡片集包括的内容可以是从更多或更少的网页中提取出来的内容。电子设备100可以从搜索得到至少一个网页中提取出排列在前N位的网页,并将这N个网页中和搜索关键词相关的内容作为语义汇聚卡片集中的内容,其中,N为正整数。
不限于上述示例的情况,在另一种实施方式中,语义汇聚卡片集包括的内容可以是从用户选择的至少一个网页中提取出来的、和搜索关键词相关的内容。例如,在上图3所示的操作之后,电子设备100可以接收针对图3的(B)所示的用户界面320中的卡片控件325的触摸操作(例如点击操作),响应于该触摸操作,显示用于选择网页的用户界面,具体示例可参见图5A的(A)所示的用户界面510。
如图5A的(A)所示,用户界面510和图3的(B)所示的用户界面320类似,用户界面510中的多个网页卡片例如包括指示网页1的网页卡片511、指示网页2的网页卡片512和指示网页3的网页卡片513。任意一个网页卡片中还显示有选择控件,任意一个选择控件用于选择对应的网页(即所在的网页卡片指示的网页)或取消该选择。在用户界面510中,选择控件511A和选择控件513A为选中状态,选择控件512A为未选中状态,可以表征选择控件511A对应的网页1和选择控件513A对应的网页3已被选中。用户界面510中位于底部的功能栏包括全选控件514、卡片控件515和退出控件516。全选控件514用于选择用户界面510显示的全部网页卡片指示的网页。卡片控件515用于触发显示语义汇聚卡片集,其中,该语义汇聚卡片集包括的内容是从上述用户选择的网页1和网页3中提取出来的,而用户界面320中的卡片控件325触发显示的语义汇聚卡片集包括的内容是从搜索得到的至少一个网页(上述以排列在前三位的网页1、网页2和网页3为例)中提取出来的。退出控件516用于退出使用选择网页的功能。
电子设备100可以接收针对卡片控件515的触摸操作(例如点击操作),响应于该触摸操作,从已选择的网页1和网页3中提取出和搜索关键词相关的内容,并作为语义汇聚卡片集的内容。电子设备100可以显示语义汇聚卡片集,例如显示图5A的(B)所示的用户界面520。用户界面520和图4的(A)所示的用户界面410类似,区别在于,文本卡片包括的文本内容不同。用户界面520中的文本卡片521包括用户界面410中的文本内容4121、来源信息4121A(指示文本内容4121是网页1包括的文本)、文本内容4122和了来源信息4122A(指示文本内容4122是网页3包括的文本),不包括用户未选择的网页2包括的文本(如用户界面410中的文本内容4123)。
在一种实施方式中,语义汇聚卡片集包括的卡片类型、卡片数量和/或卡片包括的内容的数量可以是电子设备预设的。在另一种实施方式中,语义汇聚卡片集包括的卡片类型、卡片数量和/或卡片包括的内容的数量可以是电子设备响应于用户操作确定的。在一些示例中,在上图3所示的操作之后,电子设备100可以接收针对图3的(B)所示的用户界面320中的卡片控件325的触摸操作(例如点击操作),响应于该触摸操作,显示图5B所示的用户界面530。在另一些示例中,用户也可以通过卡片设置进入图5B所示的用户界面530。
如图5B所示,用户界面530包括设置界面531,设置界面531包括标题(“卡片个性化设置”)、设置说明5311(“可设置所需卡片的类型和卡片包括的内容数量”)和多个卡片选项。多个卡片选项例如包括文本卡片的选项5312、图片卡片的选项5313、视频卡片的选项5314、音频卡片的选项5315和文档卡片的选项5316等,其中,音频卡片为包括音频类型的网页内容的卡片,文档卡片为包括文档类型的网页内容(例如word、excel和ppt格式的文档)的卡片。任意一个选项中显示有选择控件,任意一个选择控件用于选择将对应的卡片添加到语义汇聚卡片集中或者取消该选择,以选项5312为例进行说明,其他选项类似。选项5312中显示的选择控件5312A用于设置在语义汇聚卡片集中添加对应的文本卡片或者取消该设置。当选择控件5312A为选中状态时,选项5312中可以还显示内容选项5312B,内容选项5312B包括字符“最多包括…个文本内容”,其中“…”处显示有设置控件5312C,设置控件5312C中位于中间的框用于显示当前设置的文本卡片最多包括的文本内容的数量,设置控件5312C中位于左侧的框用于增加文本卡片最多包括的文本内容的数量,设置控件5312C中位于右侧的框用于减少文本卡片最多包括的文本内容的数量。在用户界面530中,选项5312、选项5313和选项5314中显示的选择控件均为选中状态,选项5315和选项5316中显示的选择控件均为未选中状态,可以表征当前设置在语义汇聚卡片集中添加选项5312对应的文本卡片、选项 5313对应的图片卡片和选项5314对应的视频卡片。并且,选项5312中的设置控件5312C显示有数字3,可以表征文本卡片包括的内容数量的最大值设置为3,选项5313中的设置控件显示有数字2,可以表征图片卡片包括的内容数量的最大值设置为2,选项5314中的设置控件显示有数字2,可以表征视频卡片包括的内容数量的最大值设置为2。用户界面530还包括确定控件5317和取消控件5318,取消控件5318用于取消显示语义汇聚卡片集。电子设备100可以接收针对确定控件5317的触摸操作(例如点击操作),响应于该触摸操作,显示上述设置的语义汇聚卡片集,具体可参见图4所示的语义汇聚卡片集。
不限于上述示例的情况,在另一些示例中,还可以在电子设备100的***设置界面、浏览器应用的设置界面等设置界面显示图5B所示的用户界面530。
不限于上述示例的情况,在另一些示例中,还可以使用图3所述的语音、手势等其他触发显示语义汇聚卡片集的方式来触发显示图4、图5A和图5B所示的用户界面。
在一种实施方式中,电子设备100显示语义汇聚卡片集(例如显示图4的(A)所示的用户界面410)时,可以接收针对语义汇聚卡片集中的任一内容的触摸操作(例如双击操作),响应于该触摸操作,显示该内容所属的网页的详细内容,具体示例可参见图6。
如图6的(A)所示,电子设备100可以显示图4的(A)所示的用户界面410,用户界面410中的来源信息4121A用于指示文本内容4121是网址为“网址111”和来源为“来源aaa”的网页1包括的文本。电子设备100可以接收针对文本内容4121的触摸操作(例如双击操作),响应于该触摸操作,显示文本内容4121所属的网页1的详细内容,具体示例可参见图6的(B)所示的用户界面600。用户界面600可以包括搜索栏601,搜索栏601中显示有当前展示的网页1的网址和来源“网址111(来源aaa)”。用户界面600还包括网页1的标题602(“西安旅游路线图”)和文本、图片等多种类型的内容,上述触摸操作针对的文本内容4121在用户界面600中突出显示(例如高亮显示)。用户可以通过用户界面600获取到和上述文本内容4121相关的信息,例如在网页1中的显示位置和上下文。
在一种实施方式中,电子设备100响应于上述针对语义汇聚卡片集中的任一内容的触摸操作,显示与该内容相关的网页内容,可选地,该网页内容可以定位到该网页内容所属的网页中的位置显示,例如,电子设备100可以响应于作用于图25B所示的用户界面2530中的关联内容2512B的触摸操作,显示图25C所示的用户界面2540,关联内容2512B位于用户界面2540所示的关联内容2512B所属的网页1的上面。可选地,该网页内容可以在该网页内容所属的网页中突出显示。例如,电子设备100可以响应于针对图4的(A)所示的用户界面410中的文本内容4121的触摸操作,显示图6的(B)所示的用户界面600,文本内容4121在用户界面600所示的文本内容4121所属的网页1中突出显示。例如,电子设备100可以响应于作用于图25B所示的用户界面2530中的关联内容2512B的触摸操作,显示图25C所示的用户界面2540,关联内容2512B在用户界面2540所示的关联内容2512B所属的网页1中突出显示。
在一种实施方式中,电子设备100显示语义汇聚卡片集(例如显示图4的(A)所示的用户界面410)时,可以接收针对语义汇聚卡片集中的任一卡片的触摸操作(例如向上或向下的拖动操作,或者点击操作),响应于该触摸操作,在语义汇聚卡片集中删除该卡片,具体示例可参见图7A和图7B。
在如图7A的(A)所示的示例中,电子设备100可以显示图4的(A)所示的用户界面410,用户界面410用于显示语义汇聚卡片集中的文本卡片412。电子设备100可以接收针对文本卡片412的拖动操作,例如,该拖动操作可以为上下拖动或者左右滑动等,图7A以该 拖动操作为从下往上拖动为例进行示意。电子设备100响应于该拖动操作,在语义汇聚卡片集中删除文本卡片412,此时可以显示语义汇聚卡片集中的其他卡片,例如显示图7A的(B)所示的用户界面700,用户界面700用于显示语义汇聚卡片集中的图片卡片421。和用户界面410中的页面选项413包括三个选项不同,用户界面700中的页面选项701仅包括两个选项,且第一个选项为选中状态,可以表征语义汇聚卡片集包括两张卡片,以及图片卡片421是这两张卡片中的第一张卡片。
如图7B的(A)所示,电子设备100可以显示图4的(A)所示的用户界面410,用户界面410用于显示语义汇聚卡片集中的文本卡片412,当卡片为可编辑状态时,例如用户选择编辑卡片的控件,或通过其它方式使得卡片为可编辑状态,文本卡片412中显示有删除控件4124。电子设备100可以接收针对删除控件4124的触摸操作(例如点击操作),响应于该触摸操作,在语义汇聚卡片集中删除文本卡片412,此时可以显示语义汇聚卡片集中的其他卡片,例如显示图7B的(B)所示的用户界面700,具体说明和图7A的(B)所示的用户界面700类似,图7B的(B)所示的用户界面700显示的图片卡片421中也可以显示有删除控件,用于触发删除图片卡片421。
在一种实施方式中,电子设备100显示语义汇聚卡片集(例如显示图4的(A)所示的用户界面410)时,可以接收针对语义汇聚卡片集中的任一卡片的触摸操作(例如长按操作),响应于该触摸操作,显示用于编辑该卡片的用户界面(简称编辑界面),具体示例可参见图8。
如图8的(A)所示,电子设备100可以显示图4的(A)所示的用户界面410。电子设备100可以接收针对用户界面410中的文本卡片412的长按操作,例如,该长按操作为单指长按或者双指长按等,响应于该长按操作,显示图8的(B)所示的用户界面800。用户界面800用于实现文本卡片412的编辑功能,用户界面800所示的文本卡片412包括的任意一个文本内容中显示有删除控件,以文本内容4121为例进行说明,其他文本内容类似,文本内容4121中显示有删除控件810,用于在文本卡片412中删除文本内容4121。用户界面800还包括确定控件820和取消控件830,其中,确定控件820用于保存当前的编辑操作(即保存用户界面800中显示的文本卡片412),取消控件830用于取消当前的编辑操作(即保存未编辑之前的文本卡片412),上述编辑操作的示例可参见下图9-图12所示的编辑操作。
在一种实施方式中,在上图8所示的操作之后,电子设备100可以接收针对图8的(B)所示的用户界面800中的文本内容4121的触摸操作(例如点击操作),响应于该触摸操作,显示文本内容4121的编辑界面,具体示例可参见图9的(A)所示的用户界面910。
如图9的(A)所示,用户界面910用于实现文本卡片412中的文本内容4121的编辑功能。文本内容4121包括的字符“热门的旅游路线有…”前显示有光标911,光标911用于指示文本编辑的***点。电子设备100可以接收用户输入的字符,并在光标911前显示该字符,例如,电子设备100可以接收用户输入的字符“方案一:”,并显示图9的(B)所示的用户界面920。用户界面920所示的文本卡片412中的文本内容4121包括字符“方案一:热门的旅游路线有…”,其中,字符“方案一:”和字符“热门的旅游路线有…”之间显示有光标911。
不限于上述示例的情况,在另一些示例中,电子设备100显示图9的(A)所示的用户界面910时,还可以响应于用户操作(例如语音输入的“删除”),删除文本内容4121包括的字符,本申请对卡片集中的任一内容的具体编辑方式不作限定。
在一种实施方式中,在上图8所示的操作之后,电子设备100可以接收针对图10的(A)所示的用户界面800(即图8的(B)所示的用户界面800)中的删除控件810的触摸操作(例如点击操作),响应于该触摸操作,删除用户界面800中的文本内容4121。此时,电子设备 100可以显示图10的(B)所示的用户界面1000,用户界面1000中的文本卡片412包括文本内容4122和文本内容4123,不包括文本内容4121。
在一种实施方式中,在上图8所示的操作之后,电子设备100可以接收针对图8的(B)所示的用户界面800中的任一文本内容的触摸操作(例如向下或向上的拖动操作),响应于该触摸操作,调整该文本内容在文本卡片中的显示位置(也可理解为是调整文本卡片中的文本内容的排列顺序),具体示例可参见图11。
如图11的(A)所示,电子设备100可以显示图8的(B)所示的用户界面800。电子设备100可以接收作用于用户界面800所示的文本卡片412中的文本内容4122的拖动操作,例如该拖动操作为上下拖动或左右拖动等,图11以该拖动操作为将用户界面800所示的文本卡片412中的文本内容4122向上拖动至文本内容4121所在位置的用户操作为例进行示意。电子设备100响应于该拖动操作,交换文本内容4121和文本内容4122的显示位置,此时可以显示图11的(B)所示的用户界面1100。和图11的(A)所示的用户界面800中的文本内容的排列顺序(文本内容4121显示在文本内容4122上面)不同,用户界面1100所示的文本卡片412中,文本内容4122显示在文本内容4121上面。不限于上述示例的情况,在另一些示例中,电子设备100可以响应于将用户界面800中的文本内容4121向下拖动至文本内容4122所在位置的用户操作,显示用户界面1100。
在一种实施方式中,在上图8所示的操作之后,电子设备100可以接收针对图8的(B)所示的用户界面800中的任一文本内容的触摸操作(例如向左或向右的拖动操作),响应于该触摸操作,调整该文本内容在语义汇聚卡片集中的显示位置,具体示例可参见图12。
如图12的(A)所示,电子设备100可以显示用户界面1210,用户界面1210可以包括图8的(B)所示的用户界面800中的文本卡片412和预览卡片1211,在一种实施方式中,用户界面1210可以是电子设备100响应于针对图8的(B)所示的用户界面800中的文本内容4121的拖动操作显示的用户界面,例如,该拖动操作用于将用户界面800中的文本内容4121向右拖动至用户界面1210中的文本内容4121所在的位置。预览卡片1211用于指示语义汇聚卡片集中排列在文本卡片412之后的图片卡片。
电子设备100可以接收作用于文本内容4121的拖动操作,例如,该拖动操作为上下拖动或左右拖动等,图12以该拖动操作为向屏幕右侧边缘拖动的用户操作(也可以理解为是将文本内容4121向预览卡片1211所在位置拖动的用户操作)为例进行示意。电子设备100响应于该拖动操作,将文本内容4121移动至预览卡片1211指示的图片卡片上显示,具体示例可参见图12的(B)所示的用户界面1220。和图4的(B)所示的用户界面420中的图片卡片421不同,用户界面1220中的图片卡片421不仅包括图片内容4211和图片内容4212(未示出),还包括文本内容4121。
在一种实施方式中,电子设备100显示语义汇聚卡片集(例如显示图4的(A)所示的用户界面410)时,可以接收针对语义汇聚卡片集中的任一卡片的触摸操作(例如向左或向右的拖动操作),响应于该触摸操作,将该卡片和其他卡片合并为一张卡片,具体示例可参见图13。
如图13的(A)所示,电子设备100可以显示图4的(A)所示的用户界面410。电子设备100可以接收作用于用户界面410中的文本卡片412的拖动操作,例如,该拖动操作为上下拖动或左右拖动等,图13以该拖动操作为将文本卡片412向屏幕右侧边缘拖动的用户操作为例进行示意。电子设备100响应于该用户操作,显示图13的(B)所示的用户界面1310。需要说明的是,电子设备100显示用户界面1310时,用户手指仍然触摸屏幕并位于文本卡片 412上,文本卡片412位于用户界面1310中的图片卡片421内部且显示在图片卡片421之上。电子设备100可以响应于用户的松手操作(即手指离开屏幕,可以理解为是上述拖动操作的一部分),合并文本卡片412和图片卡片421,例如将文本卡片412包括的内容移动到图片卡片421上显示,此时可以显示图13的(C)所示的用户界面1320。用户界面1320可以包括新卡片1321和页面选项1322,页面选项1322用于指示语义汇聚卡片集包括两张卡片,以及当前显示的新卡片1321为这两张卡片中的第一张卡片。新卡片1321为合并文本卡片412和图片卡片421得到的卡片,不仅包括图片卡片421中的图片内容4211和图片内容4212,而且包括文本卡片412中的文本内容4121、文本内容4122和文本内容4123,具体可参见图13的(D)所示的用户界面1320。图13的(D)所示的用户界面1320可以是电子设备100响应于针对图13的(C)所示的用户界面1320中的新卡片1321的滑动操作显示的用户界面,例如该滑动操作为左右滑动或上下滑动,图13以该滑动操作为从下往上滑动为例进行示意。
不限于上述示例的情况,在另一些示例中,在上图8所示的操作之后,电子设备100可以接收针对语义汇聚卡片集中的任一卡片的触摸操作(例如向左或向右的拖动操作),响应于该触摸操作,调整该卡片在语义汇聚卡片集中的显示位置,可以理解为是,在电子设备显示卡片的编辑界面时,响应于针对该卡片的触摸操作,调整该卡片的显示位置。例如,电子设备100可以响应于将图8的(B)所示的用户界面800中的文本卡片412向右拖动的用户操作,切换文本卡片412和图片卡片421的显示位置,此时图片卡片421为第一张卡片,文本卡片412为第二张卡片。
不限于上述示例的情况,在另一些示例中,也可以是在电子设备显示卡片的编辑界面时,响应于针对该卡片的触摸操作(例如向左或向右的拖动操作),将该卡片和其他卡片合并为一张卡片;在电子设备显示语义汇聚卡片集时,响应于针对语义汇聚卡片集中的任一卡片的触摸操作(例如向左或向右的拖动操作),调整该卡片在语义汇聚卡片集中的显示位置,本申请对触发调整卡片的用户操作不作限定。
在一种实施方式中,电子设备100显示语义汇聚卡片集(例如显示图4的(A)所示的用户界面410)时,可以接收针对新增控件415的触摸操作(例如点击操作),响应于该触摸操作,在语义汇聚卡片集中新建卡片,具体示例可参见图14。
如图14的(A)所示,电子设备100可以显示图4的(A)所示的用户界面410。电子设备100可以接收针对用户界面410中的新增控件415的触摸操作(例如点击操作),响应于该触摸操作,新建一张不包括任何内容的卡片,此时可以显示图14的(B)所示的用户界面1400。用户界面1400可以包括卡片1410和页面选项1420,卡片1410可以包括标题(“自定义卡片1”)和添加控件1410A,添加控件1410A用于在卡片1410中增加内容。和用户界面410中的页面选项413(包括三个选项)不同,页面选项1420包括四个选项,且第四个选项为选中状态,可以表征语义汇聚卡片集包括四张卡片,以及当前显示的卡片1410为这四张卡片中的第四张卡片。
在一种实施方式中,在上图14所示的操作之后,电子设备100可以接收针对图14的(B)所示的用户界面1400中的添加控件1410A的触摸操作(例如点击操作),响应于该触摸操作,显示图15的(A)所示的用户界面1510。
如图15的(A)所示,用户界面1510包括标题1511(“自定义卡片1”),可以指示用户界面1510用于在标题为“自定义卡片1”的卡片(简称自定义卡片1)中新增内容,自定义卡片1即为图14的(B)所示的用户界面1400中的卡片1410。用户界面1510还包括确定控件1512和多个卡片选项。多个卡片选项例如包括文本卡片的选项1513、图片卡片的选项1514 和视频卡片的选项1515,任意一个选项中显示的选择控件用于选择将对应的卡片包括的内容添加到自定义卡片1中或者取消该选择。在用户界面1510中,选项1513中显示的选择控件1513A和选项1514中显示的选择控件1514A为选中状态,选项1515中显示的选择控件1515A为未选中状态,可以表征当前选择将文本卡片和图片卡片包括的内容添加到自定义卡片1中。确定控件1512用于触发将选择的卡片包括的内容添加到自定义卡片1中。
电子设备100可以接收针对确定控件1512的触摸操作(例如点击操作)，响应于该触摸操作，显示图15的(B)所示的用户界面1520。用户界面1520中的卡片1410包括：已选中的文本卡片(即图4的(A)所示的用户界面410中的文本卡片412)包括的文本内容1521A，以及已选中的图片卡片(即图4的(B)所示的用户界面420中的图片卡片421)包括的图片内容1521B，不包括未选中的视频卡片包括的视频内容。用户界面1520中的卡片1410还包括添加控件1410A，可以用于继续在卡片1410中添加内容。
不限于上述示例的情况,在另一些示例中,用户界面1520中的卡片1410也可以替换为图13的(C)和(D)所示的用户界面1320中的新卡片1321(此时标题需更改为“自定义卡片1”),类似地,图13的(C)和(D)所示的用户界面1320中的新卡片1321也可以替换为用户界面1520中的卡片1410(此时标题需更改为“新卡片”)。
在一种实施方式中,在上图14所示的操作之后,电子设备100可以接收针对图14的(B)所示的用户界面1400中的添加控件1410A的触摸操作(例如点击操作),响应于该触摸操作,显示图16的(A)所示的用户界面1610。
如图16的(A)所示,用户界面1610包括标题1611(“自定义卡片1”),用户界面1610和图15的(A)所示的用户界面1510类似,均用于在自定义卡片1中新增内容,区别在于,用户界面1610还可以将卡片包括的任意一个内容添加到自定义卡片1中。用户界面1610包括确定控件1612和多个卡片选项,多个卡片选项例如包括文本卡片的选项1613、图片卡片的选项1614和视频卡片的选项1615,任意一个选项中显示有对应的卡片包括的至少一个内容,以及每个内容对应的选择控件,任意一个选择控件用于将对应的内容添加到自定义卡片1中或者取消该选择。以选项1613为例进行说明,其他选项类似。选项1613中显示有文本卡片包括的文本内容4121、文本内容4122和文本内容4123,以及这三个文本内容分别对应的选择控件,例如文本内容4121对应的选择控件1613A。在用户界面1610中,文本内容4121对应的选择控件1613A、图片卡片421的选项1614中的图片内容4211对应的选择控件1614A为选中状态,其他选择控件为未选中状态,可以表征当前选择将文本内容4121和图片内容4211添加到自定义卡片1中。
电子设备100可以接收针对确定控件1612的触摸操作(例如点击操作)，响应于该触摸操作，显示图16的(B)所示的用户界面1620。用户界面1620中的卡片1410包括上述已选中的文本内容4121、上述已选中的图片内容4211，不包括未选中的其他内容。用户界面1620中的卡片1410还包括添加控件1410A，可以用于继续在卡片1410中添加内容。
不限于上述示例的情况,在另一些示例中,也可以按照图15或图16所示的操作,在语义汇聚卡片集中的文本卡片、图片卡片和视频卡片等卡片中添加内容。
不限于上述实施方式,在另一种实施方式中,电子设备100显示语义汇聚卡片集(例如显示图4的(A)所示的用户界面410)时,可以接收针对新增控件415的触摸操作(例如点击操作),响应于该触摸操作,显示图15的(A)或图16的(A)所示的内容选择界面。电子设备100可以根据用户基于内容选择界面选择的网页内容新生成一张卡片,例如图15的(B)所示的用户界面1520中的卡片1410,或者图16的(B)所示的用户界面1620中的卡片1410。
在一种实施方式中,在上图14所示的操作之后,电子设备100可以接收针对卡片中任一内容的触摸操作(例如向左或向右的拖动操作),响应于该触摸操作,将该内容移动至图14的(B)所示的用户界面1400中的卡片1410中显示,具体示例可参见图17A。
如图17A的(A)所示,电子设备100可以显示图12的(A)所示的用户界面1210,用户界面1210中的文本卡片412包括文本内容4121。电子设备100可以接收作用于文本内容4121的拖动操作,例如,该拖动操作为左右拖动或上下拖动,图17A以该拖动操作为将文本内容4121向屏幕右侧边缘拖动的用户操作为例进行示意。电子设备100响应于该拖动操作,将文本内容4121移动至图14的(B)所示的用户界面1400中的卡片1410中显示,具体示例可参见图17A的(B)所示的用户界面1710。用户界面1710中的卡片1410包括文本内容4121和添加控件1410A,添加控件1410A例如用于通过图15或图16所示方式在卡片1410中增加内容。
不限于图17A所示的示例,在另一种实施方式中,还可以将语义汇聚卡片集中同一卡片或不同卡片上的多个内容移动到图14的(B)所示的用户界面1400中的卡片1410中显示,具体示例可参见图17B。
如图17B的(A)所示,电子设备100可以显示用户界面1720,用户界面1720和图8的(B)所示的用户界面800类似,区别在于,用户界面1720所示的文本卡片412包括的任意一个文本内容中显示有选择控件,用户界面1720中,文本内容4121中显示的选择控件1721为选中状态,文本内容4122和文本内容4123中显示的选择控件为未选中状态,可以表征当前已选择文本内容4121。
电子设备100还可以显示图17B的(B)所示的用户界面1730。用户界面1730和用户界面1720类似,区别在于,用户界面1730用于实现图片卡片421的编辑功能。用户界面1730中,图片内容4211中显示的选择控件1731为选中状态,图片内容4212中显示的选择控件为未选中状态,可以表征当前已选择图片内容4211。
电子设备100可以接收针对用户界面1730中的图片内容4211的触摸操作,例如,该触摸操作为左右拖动或者上下拖动,图17B以该触摸操作为将图片内容4211向屏幕右侧边缘拖动的用户操作为例进行示意。电子设备100响应于该触摸操作,将上述已选择的文本内容4121和图片内容4211移动到图14的(B)所示的用户界面1400中的卡片1410中显示,具体示例可参见图17B的(C)所示的用户界面1740。用户界面1740中的卡片1410包括上述已选择的文本内容4121和图片内容4211,还包括添加控件1410A,添加控件1410A例如用于通过图15或图16所示方式在卡片1410中增加内容。
在一种实施方式中,电子设备100显示语义汇聚卡片集(例如显示图4的(A)所示的用户界面410)时,可以接收针对保存控件414的触摸操作(例如点击操作),响应于该触摸操作,保存语义汇聚卡片集,具体示例可参见下图18。
如图18的(A)所示,电子设备100可以显示图4的(A)所示的用户界面410。电子设备100可以接收针对用户界面410中的保存控件414的触摸操作(例如点击操作),响应于该触摸操作,显示图18的(B)所示的用户界面1800。用户界面1800用于选择二次加载语义汇聚卡片集的入口的显示位置,用户界面1800中的提示框1810包括标题(“显示位置”)和多个显示位置的选项,多个显示位置的选项例如包括浏览器中收藏夹的选项1811、桌面的选项1812、负一屏的选项1813和图库的选项1814,任意一个选项中显示的选择控件用于选择将语义汇聚卡片集保存至该选项对应的显示位置或者取消该选择。在用户界面1800中,选项1811中显示的选择控件1811A、选项1812中显示的选择控件1812A、选项1813中显示的 选择控件1813A和选项1814中显示的选择控件1814A均为选中状态,可以表征当前选择将语义汇聚卡片集保存至浏览器应用的收藏夹、桌面、负一屏应用和图库应用中。可以理解地,更多或更少的选择控件可以被选中。用户界面1800还包括确定控件1815和取消控件1816,取消控件1816用于取消保存语义汇聚卡片集。电子设备100可以接收针对确定控件1815的触摸操作(例如点击操作),响应于该触摸操作,将语义汇聚卡片集保存至上述选中的显示位置。
在一种实施方式中,在上图18所示的操作之后,用户可以通过浏览器应用的收藏夹再次查看语义汇聚卡片集,具体示例可参见图19A。
如图19A的(A)所示,电子设备100可以显示浏览器的用户界面1910,用户界面1910中位于底部的导航栏可以包括控件1911,控件1911用于触发打开收藏夹。电子设备100可以接收针对控件1911的触摸操作(例如点击操作),响应于该触摸操作,显示图19A的(B)所示的用户界面1920。用户界面1920包括标题1921(“收藏夹”)和收藏列表1922,收藏列表1922例如包括收藏的卡片集的选项1922A和多个收藏的网页的选项,选项1922A包括的字符为收藏的卡片集的标题“西安旅游路线”。电子设备100可以接收针对选项1922A的触摸操作(例如点击操作),响应于该触摸操作,显示收藏的语义汇聚卡片集的具体内容,例如显示图19A的(C)所示的用户界面410(即图4的(A)所示的用户界面410)。
不限于图19A示例的情况,在另一种实施方式中,电子设备100可以响应于针对图19A的(A)所示的用户界面1910中的控件1911的触摸操作(例如点击操作),响应于该触摸操作,显示图19B所示的用户界面1930。用户界面1930包括多个页面选项,例如页面选项1931、页面选项1932和页面选项1933,其中,页面选项1931用于触发显示收藏的网页的列表,页面选项1932用于触发显示收藏的卡片集的列表,页面选项1933用于触发显示收藏的其他内容的列表。用户界面1930中的页面选项1932为选中状态,表征当前显示的是收藏的卡片集的列表1934,列表1934例如包括指示标题为“西安旅游路线”的卡片集的选项1934A和指示标题为“武汉旅游路线”的卡片集的选项。电子设备100可以响应于针对选项1934A的触摸操作(例如点击操作),显示对应的卡片集的具体内容,例如显示图4的(A)所示的用户界面410。
在一种实施方式中,在上图18所示的操作之后,用户可以通过桌面再次查看语义汇聚卡片集,具体示例可参见图20。
如图20的(A)所示,电子设备100可以显示用户界面2010,例如,用户界面2010为桌面。用户界面2010中的控件2011用于指示收藏的卡片集,控件2011包括标题2011A、应用信息2011B、卡片控件2011C和翻页控件2011D。其中,标题2011A包括的字符为收藏的卡片集的标题“西安旅游路线”。应用信息2011B用于指示收藏的卡片集属于浏览器应用。卡片控件2011C用于显示收藏的卡片集中的文本卡片(即图4的(A)所示的用户界面410中的文本卡片412)。翻页控件2011D用于触发切换卡片控件2011C中显示的卡片类型。控件2011可以每隔预设时长切换卡片控件2011C中显示的卡片类型,例如,5秒后,电子设备100可以显示图20的(B)所示的用户界面2020,用户界面2020所示的控件2011中的卡片控件2011C用于显示收藏的卡片集中的图片卡片(即图4的(B)所示的用户界面420中的图片卡片421)。电子设备100可以接收针对用户界面2010或用户界面2020中的控件2011的触摸操作(例如点击操作),响应于该触摸操作,显示收藏的语义汇聚卡片集的具体内容,例如显示图20的(C)所示的用户界面410(即图4的(A)所示的用户界面410)。
电子设备100可以接收针对用户界面2010或用户界面2020中的控件2011的触摸操作, 例如,该触摸操作为左右滑动或者上下滑动,图20以该触摸操作为从右往左滑动为例进行示意。电子设备100响应于该触摸操作,显示收藏的其他语义汇聚卡片集,例如显示图20的(D)所示的用户界面2030。用户界面2030和用户界面2010类似,区别在于,控件2011指示的收藏的卡片集不同。在用户界面2030中,控件2011中的标题2031A包括的字符为当前显示的卡片集的标题“武汉旅游路线”,控件2011中的卡片控件2031B用于显示该卡片集中的文本卡片。
在一些示例中,电子设备100可以接收针对图20的(A)所示的用户界面2010中的卡片控件2011C(用于显示文本卡片)的触摸操作(例如点击操作),响应于该触摸操作,显示该文本卡片,例如显示图4的(A)所示的用户界面410。在另一些示例中,电子设备100可以接收针对图20的(B)所示的用户界面2020中的卡片控件2011C(用于显示图片卡片)的触摸操作(例如点击操作),响应于该触摸操作,显示该图片卡片,例如显示图4的(B)所示的用户界面420。也就是说,用户操作针对的卡片控件显示的卡片类型不同时,电子设备100响应于该用户操作显示的卡片可以不同。类似地,电子设备100还可以响应于针对用于显示视频卡片的卡片控件的触摸操作(例如点击操作),响应于该触摸操作,显示该视频卡片,例如显示图4的(C)所示的用户界面430。
在一种实施方式中,在上图18所示的操作之后,用户可以通过负一屏应用再次查看语义汇聚卡片集,具体示例可参见图21。
如图21的(A)所示,电子设备100可以显示负一屏应用的用户界面2110。用户界面2110可以包括多个功能控件(例如扫一扫、乘车码、付款码和健康码等功能的控件)、应用使用情况的卡片、停车情况的卡片和控件2111。控件2111和图20的(A)所示的用户界面2010中的控件2011类似,均用于指示收藏的卡片集,且当前用于显示该卡片集中的文本卡片。控件2111还包括翻页控件2111A,电子设备100可以接收针对翻页控件2111A的触摸操作(例如点击操作),响应于该触摸操作,显示图21的(B)所示的用户界面2120。用户界面2120中的控件2111和图20的(B)所示的桌面2020中的控件2011类似,均用于指示收藏的卡片集,且当前用于显示该卡片集中的图片卡片。电子设备100可以接收针对用户界面2110或用户界面2120中的控件2111的触摸操作(例如点击操作),响应于该触摸操作,显示收藏的语义汇聚卡片集的具体内容,例如显示图21的(C)所示的用户界面410(即图4的(A)所示的用户界面410)。
在一些示例中,电子设备100可以接收针对用户界面2110或用户界面2120中的控件2111的触摸操作,例如,该触摸操作为左右滑动或者上下滑动,响应于该触摸操作,显示收藏的其他语义汇聚卡片集,例如,在负一屏应用的用户界面中显示图20的(D)所示的用户界面2030中的控件2011。图21所示示例和图20所示示例类似,用户操作针对的卡片控件显示的卡片类型不同时,电子设备100响应于该用户操作显示的卡片可以不同,具体示例可参见图20所示示例,不再赘述。
图20以根据时间切换卡片控件显示的卡片为例进行说明,图21以根据针对翻页控件的触摸操作切换卡片控件显示的卡片为例进行说明,在另一些示例中,电子设备100还可以响应于针对卡片控件的滑动操作(例如左右滑动或者上下滑动),切换卡片控件显示的卡片,本申请对具体触发方式不作限定。
不限于上述示例的情况，在另一些示例中，电子设备100还可以在其他应用的用户界面(例如浏览器应用的首页或者收藏夹界面)中显示上图20所示的控件2011，和/或上图21所示的控件2111(例如可以理解为是指示收藏的卡片集的微件)。
在一种实施方式中,在上图18所示的操作之后,用户可以通过图库应用再次查看语义汇聚卡片集,具体示例可参见图22。
如图22的(A)所示,电子设备100可以显示图库应用的用户界面2210,用户界面2210可以包括多张图片的缩略图。电子设备100可以接收针对多个缩略图中的缩略图2211的触摸操作(例如点击操作),响应于该触摸操作,显示缩略图2211对应的原始图片,例如显示图22的(B)所示的用户界面2220。用户界面2220中的图片2221用于指示收藏的卡片集,图片2221包括标题2221A,图像2221B和二维码2221C,其中,标题2221A包括的字符为收藏的卡片集的标题“西安旅游路线”,图像2221B为收藏的卡片集的封面(以该卡片集中的图片卡片包括的第一张图片,即图4的(B)所示的用户界面420中的图片卡片421包括的图片内容4211为例),二维码2221C可以包括收藏的卡片集的标识信息,以用于指示该卡片集。用户界面2220中位于底部的功能栏包括识图控件2222,电子设备100可以接收针对识图控件2222的触摸操作(例如点击操作),响应于该触摸操作,识别图片2221,例如识别图片2221中的二维码2221C。电子设备100可以显示识别结果,即图片2221指示的语义汇聚卡片集的具体内容,例如显示图22的(C)所示的用户界面410(即图4的(A)所示的用户界面410)。
不限于上述示例的情况,在另一些示例中,用户还可以使用电子设备的扫一扫功能来扫描上述图片2221,例如识别其中的二维码2221C,并显示识别出来的语义汇聚卡片集的具体内容,例如显示图4的(A)所示的用户界面410,本申请对识别图片的具体方式不作限定。
不限于上述示例的二次加载语义汇聚卡片集的入口,在另一些示例中,还可以在记事本等其他应用中显示二次加载语义汇聚卡片集的入口,本申请对此不作限定。
在一种实施方式中,在上图3所示的操作之后,电子设备100可以接收针对图3的(B)所示的用户界面320中的任一网页卡片的触摸操作(例如点击操作),响应于该触摸操作,显示该网页卡片指示的网页的具体内容,具体示例可参见图23。
如图23的(A)所示,电子设备100可以显示图3的(B)所示的用户界面320,用户界面320中的网页列表例如包括网页卡片322(指示网页1)、网页卡片323(指示网页2)和网页卡片324(指示网页3)。电子设备100可以接收针对网页卡片322的触摸操作(例如点击操作),响应于该触摸操作,显示网页卡片322指示的网页1(标题为“西安旅游路线图”、网址为“网址111”和来源为“来源aaa”)的具体内容,具体可参见图23的(B)所示的用户界面2300。用户界面2300和图6的(B)所示的用户界面600类似,区别在于,用户界面2300中未突出显示用户界面600中的文本内容4121。用户界面320中的卡片控件325和用户界面2300中的卡片控件2310均用于触发显示语义汇聚卡片集,但触发显示的语义汇聚卡片集可以不同,其中,卡片控件325触发显示的语义汇聚卡片集包括的内容是从搜索得到的至少一个网页中提取出来的,卡片控件2310触发显示的语义汇聚卡片集包括的内容是从用户选择查看的网页1中提取出来的。
在一种实施方式中,在上图23所示的操作之后,电子设备100可以接收针对图23的(B)所示的用户界面2300中的卡片控件2310的触摸操作(例如点击操作),响应于该触摸操作,显示语义汇聚卡片集,具体可参见图24所示的用户界面。
图24和图4类似,均用于显示语义汇聚卡片集,区别在于,图4所示的语义汇聚卡片集包括的内容是从搜索得到的排列在前三位的网页1、网页2和网页3中提取出来的,图24所示的语义汇聚卡片集包括的内容是从用户选择查看的网页1中提取出来的。以下介绍图24时主要介绍图24和图4的不同之处,其他说明可参照图4的说明。
如图24的(A)所示，电子设备100可以显示浏览器的用户界面2410。用户界面2410可以包括用户选择查看的网页1的标题2411(“西安旅游路线图”)、网址信息和来源信息2412(“网址111来源aaa”)，用户界面2410用于显示语义汇聚卡片集中的文本卡片2413，文本卡片2413包括网页1中多个和搜索关键词相关的文本内容，例如文本内容2413A(“热门的旅游路线有”)、文本内容2413B(“去西安一定要去秦始皇陵...秦始皇陵的门票120元...前往秦始皇陵的车费20元...”)和文本内容2413C(“西安热门景点...”)，其中文本内容2413C未在图23的(B)所示的用户界面2300中示出。
电子设备100可以接收针对文本卡片2413的滑动操作,例如,该滑动操作为上下滑动或左右滑动,图24以该滑动操作为从右向左滑动为例进行示意。电子设备100响应于该用户操作,显示图24的(B)所示的用户界面2420。用户界面2420和图24的(A)所示的用户界面2410类似,区别在于,用户界面2420用于显示语义汇聚卡片集中的图片卡片2421,图片卡片2421包括网页1中和搜索关键词相关的图片内容。
电子设备100可以接收针对图片卡片2421的滑动操作,例如,该滑动操作为上下滑动或左右滑动,图24以该滑动操作为从右向左滑动为例进行示意。电子设备100响应于该用户操作,显示图24的(C)所示的用户界面2430。用户界面2430和图24的(A)所示的用户界面2410类似,区别在于,用户界面2430用于显示语义汇聚卡片集中的视频卡片2431,视频卡片2431包括网页1中和搜索关键词相关的视频内容。
在一种实施方式中,电子设备100显示网页1的具体内容(例如显示图23的(B)所示的用户界面2300)时,可以接收针对网页1中的任一内容的触摸操作(例如长按操作),响应于该触摸操作,选中该内容,具体示例可参见图25A的(A)所示的用户界面2300。
如图25A的(A)所示,电子设备100可以显示图23的(B)所示的用户界面2300。用户界面2300可以包括文本2320(“去西安一定要去秦始皇陵...秦始皇陵的门票120元...前往秦始皇陵的车费20元....”),其中文本“秦始皇陵”被选中,上述触摸操作可以理解为是用于选择网页1中的文本“秦始皇陵”的用户操作。用户界面2300中文本“秦始皇陵”附近可以显示有功能列表2330。功能列表2330可以包括针对选中文本“秦始皇陵”的多个功能的选项,例如包括复制功能的选项、搜索功能的选项、保存选项2330A和查看更多功能的选项。电子设备100可以接收针对保存选项2330A的触摸操作(例如点击操作),响应于该触摸操作,将上述选中的文本“秦始皇陵”和相关的文本添加到语义汇聚卡片集中的新建卡片内,此时可以显示图25A的(B)所示的用户界面2510。
如图25A的(B)所示,用户界面2510可以包括标题信息2511,标题信息2511包括用户当前查看的网页1的标题“西安旅游路线图”、网址信息“网址111”和来源信息“来源aaa”,可以指示用户界面2510显示的语义汇聚卡片集是根据网页1中的内容生成的。用户界面2510用于显示语义汇聚卡片集中的卡片2512。用户界面2510中的页面选项2513用于指示语义汇聚卡片集包括四张卡片,以及当前显示的卡片2512为这四张卡片中的第四张卡片,其中,这四张卡片中的前三张卡片例如为图24所示的文本卡片2413、图片卡片2421和视频卡片2431。卡片2512可以包括标题(“自定义卡片2”)和多个文本内容,多个文本内容例如包括上述选中的文本2512A(“秦始皇陵”)、关联内容2512B(“秦始皇陵的门票120元”)和关联内容2512C(“前往秦始皇陵的车费20元”),其中,关联内容2512B和关联内容2512C是网页1中和文本2512A语义相关的内容,这里以关联内容的类型为文本为例进行说明,在具体实现中,还可以是图片和视频等其他类型,本申请对关联内容的具体类型不作限定。卡片2512中还显示有分别对应多个文本内容的多个删除选项,例如,删除选项2512D用于删除对应的文本2512A, 删除选项2512E用于删除对应的关联内容2512B,删除选项2512F用于删除对应的关联内容2512C。用户界面2510还包括确定控件2514和取消控件2515,确定控件2514用于保存当前显示的卡片2512到语义汇聚卡片集中,取消控件2515用于取消保存当前显示的卡片2512。
电子设备100可以接收针对删除选项2512D的触摸操作(例如点击操作),响应于该触摸操作,在卡片2512中删除文本2512A,然后,电子设备100可以接收针对确定控件2514的触摸操作(例如点击操作),响应于该触摸操作,保存卡片2512,此时可以显示图25A的(C)所示的用户界面2520。用户界面2520中的卡片2512包括文本内容2512G,文本内容2512G包括字符“秦始皇陵的门票120元,前往秦始皇陵的车费20元”,即包括上述未被删除的关联内容2512B中显示的字符和关联内容2512C中显示的字符,不包括已被删除的文本内容2512A中显示的字符。
不限于上述示例的情况,在另一些示例中,图25A的(C)所示的用户界面2520也可以替换为图25B所示的用户界面2530,用户界面2530中的卡片2512包括上述未被删除的关联内容2512B和关联内容2512C,不包括已被删除的文本内容2512A。
在一种实施方式中,电子设备100可以响应于作用于图25B所示的用户界面2530中的关联内容2512B的触摸操作(例如点击操作),显示图25C所示的用户界面2540。如图25C所示,用户界面2540用于显示关联内容2512B所属的网页1(网址为“网址111”和来源为“来源aaa”),包括关联内容2512B(“秦始皇陵的门票120元”)的文本2320(“去西安一定要去秦始皇陵...秦始皇陵的门票120元...前往秦始皇陵的车费20元....”)在用户界面2540的上面显示,关联内容2512B在用户界面2540中突出显示(例如高亮显示)。
在一种实施方式中,在上图25A所示的操作之后,电子设备100可以接收针对网页1中的任一内容的触摸操作(例如滑动操作),响应于该触摸操作,选中该内容,具体示例可参见图26。
如图26的(A)所示,电子设备100可以显示图23的(B)所示的用户界面2300。用户界面2300可以包括图片2340A、图片2340A的标题2340B(“西安旅游热门景点”)和图片2340A的说明2340C(“上图主要展示了…”)。电子设备100可以接收作用于图片2340A的滑动操作,例如,该滑动操作为单指滑动、双指滑动或者指关节滑动,图26以该滑动操作为指关节围绕图片2340A画圈的滑动操作为例进行示意,其中,该滑动操作可以理解为是针对图片2340A的截图操作,该滑动操作也可以理解为是用于选择网页1中的图片2340A的用户操作。电子设备可以响应于该滑动操作,显示图26的(B)所示的用户界面2610。用户界面2610可以包括保存控件2611、编辑界面2612和位于底部的功能栏2613,其中,编辑界面2612用于展示用户操作针对的用户界面,即图26的(A)所示的用户界面2300。编辑界面2612中显示有编辑框2612A,编辑框2612A中显示有用户选中的图片2340A。用户可以操作编辑框2612A的形状、大小和位置,来改变选中的内容(即编辑框2612A中显示的内容)。功能栏2613例如包括分享功能的选项、图形选项2613A、矩形选项2613B、保存选项2613C和查看更多功能的选项,其中,图形选项2613A用于将编辑框2612A的形状设置为自由图形,设置该功能后,用户可操作编辑框2612A以使编辑框2612A变为任意规则或不规则的形状。矩形选项2613B用于将编辑框2612A的形状设置为矩形。
电子设备100可以接收针对保存控件2611或保存选项2613C的触摸操作(例如点击操作),响应于该触摸操作,将编辑框2612A中显示的图片2340A和相关的内容保存至图25A的(C)所示的用户界面2520中的卡片2512中,此时例如显示图26的(C)所示的用户界面2620。用户界面2620中的卡片2512包括:之前已保存的文本内容2512G,还包括待选择的内容列 表2512H。内容列表2512H包括多个内容和多个删除选项,其中,多个内容例如包括上述用户选中的图片2340A、网页1中图片2340A的标题2340B和网页1中图片2340A的说明2340C。每个删除选项对应一个内容,用于触发删除该内容。
不限于图26所示的截图操作,在另一些示例中,也可以是语音输入触发截图,本申请对截图操作的具体类型不作限定。
图25A以将用户选中的内容保存至语义汇聚卡片集中新建的自定义卡片为例进行说明,图26以将用户选中的内容保存至语义汇聚卡片集中已有的自定义卡片为例进行说明,在另一些示例中,还可以根据用户选中的内容的类型确定保存该选中内容的卡片,例如,当用户选中的内容为文本时,将该选中内容和关联内容保存至文本卡片。本申请对用于保存选中内容的卡片不作限定。
在另一种实施方式中,还可以由用户选择保存选中内容和关联内容的卡片。例如,在上图25A所示的操作之后,电子设备100可以接收针对网页1中的任一内容的触摸操作(例如拖动操作),响应于该触摸操作,显示该内容的显示位置的选择界面,具体示例可参见图27A。
如图27A的(A)所示,电子设备100可以显示图23的(B)所示的用户界面2300,用户界面2300包括卡片控件2310和图片2340A。电子设备100可以接收作用于图片2340A的拖动操作,例如,该拖动操作为往特定位置拖动的用户操作,图27A以该拖动操作为将图片2340A拖动至卡片控件2310所在位置的用户操作为例进行示意,该拖动操作可以理解为是用于选择网页1中的图片2340A的用户操作。电子设备100响应于该拖动操作,显示图27A的(B)所示的用户界面2700。用户界面2700用于选择保存选中内容和关联内容的卡片,用户界面2700中的提示框2710包括标题(“保存到卡片”)和多个卡片的选项,多个卡片的选项例如包括文本卡片的选项2711、图片卡片的选项2712、视频卡片的选项2713、标题为“自定义卡片2”的卡片(简称自定义卡片2)的选项2714和新建卡片的选项2715,自定义卡片2即为图25A的(C)所示的用户界面2520中的卡片2512。上述任意一个选项中显示的选择控件用于选择将选中内容保存至该选项对应的卡片或者取消该选择。在用户界面2700中,选项2711、选项2712、选项2713和选项2715中显示的选择控件均为未选中状态,选项2714中显示的选择控件2714A为选中状态,可以表征当前将选中内容保存至选项2714指示的自定义卡片2。可以理解地,更多或更少的选择控件可以被选中。提示框2710还包括确定控件2716和取消控件2717,确定控件2716用于将选中内容和关联内容保存至上述选择的卡片中,取消控件2717用于取消保存选中内容和关联内容至语义汇聚卡片集。电子设备100可以接收针对确定控件2716的触摸操作(例如点击操作),响应于该触摸操作,将图片2340A和相关的内容保存至选项2714指示的自定义卡片2,此时例如显示图26的(C)所示的用户界面2620。
不限于上述示例的情况,在另一些示例中,电子设备100也可以响应于将图27A的(A)所示的用户界面2300中的图片2340A拖动至卡片控件2310所在位置的用户操作,显示图26的(B)所示的用户界面2610,或直接显示图26的(C)所示的用户界面2620。在另一些示例中,电子设备100也可以响应于针对图25A的(A)所示的用户界面2300中的保存选项2330A的触摸操作(例如点击操作),显示图27A的(B)所示的用户界面2700。在另一些示例中,电子设备100也可以响应于指关节围绕图26的(A)所示的用户界面2300中的图片2340A画圈的用户操作,显示图27A的(B)所示的用户界面2700。在另一些示例中,电子设备100也可以响应于针对图26的(B)所示的用户界面2610中的保存控件2611或保存选项2613C的触摸操作(例如点击操作),显示图27A的(B)所示的用户界面2700。可以理 解地,上述用户操作仅为示例,例如,电子设备100也可以响应于针对图26的(A)所示的用户界面2300中的图片2340A的其他形式的截图操作(例如语音输入),显示图27A的(B)所示的用户界面2700,本申请对用户操作的形式不作限定。
图25A、图26和图27A示出了三种用于选择/选中网页中的内容的用户操作,在另一些示例中,还可以通过图26或图27A所示的用户操作选择网页1中的文本内容,在另一些示例中,还可以通过语音输入来选择网页中的内容,本申请对用于选择网页中的内容的具体用户操作不作限定。
图25A以电子设备100接收到用于选择网页中的内容的用户操作后生成四张卡片为例进行说明,其中,这四张卡片为图24所示的文本卡片2413、图片卡片2421、视频卡片2431以及图25A的(B)所示的自定义卡片2512。不限于此,在另一些示例中,电子设备100接收到用于选择网页中的内容的用户操作后也可以仅生成图25A的(B)所示的自定义卡片2512,不生成图24所示的文本卡片2413、图片卡片2421、视频卡片2431,本申请对此不作限定。在另一些示例中,电子设备100接收到用于选择网页中的内容(简称为选中内容)的用户操作后,也可以生成多张卡片,这多张卡片用于显示上述选中内容以及上述网页中和上述选中内容相关的内容(简称为关联内容)。可选地,这多张卡片分别用于显示不同类型的网页内容,具体示例可参见图27B。
图27B以上述选中内容和上述关联内容为图26的(C)所示的用户界面2620中的卡片2512包括的内容为例进行说明,上述选中内容和上述关联内容具体为:卡片2512中的文本内容2512G(包括文本内容2512B(“秦始皇陵的门票120元”)和文本内容2512C(“前往秦始皇陵的车费20元”))、标题2340B、图片2340A和说明2340C。电子设备100可以根据上述选中内容和上述关联内容生成图27B的(A)所示的用户界面2810中的文本卡片2811,以及图27B的(B)所示的用户界面2820中的图片卡片2821。如图27B的(A)所示,文本卡片2811用于显示文本内容:文本内容2512B、文本内容2512C、标题2340B、说明2340C。如图27B的(B)所示,图片卡片2821用于显示图片内容:图片2340A。
不限于上述示例的卡片,在另一些示例中,在卡片中显示视频内容时可以还显示该视频内容的标题、批注、说明等相关信息,本申请对在卡片中显示网页内容的具体方式不作限定。
基于以上实施例介绍本申请涉及的搜索方法。该方法可以应用于图1A所示的搜索系统10，也可以应用于图1B所示的搜索系统10。
请参见图28,图28是本申请实施例提供的一种搜索方法的流程示意图。该方法可以包括但不限于如下步骤:
S101:电子设备获取用户输入的第一搜索词。
在一种实施方式中,电子设备可以接收用户输入的第一搜索词(也可称为第一关键词或搜索关键词),以及接收用于触发针对第一关键词的搜索功能的用户操作,电子设备可以基于该用户操作获取用户输入的第一关键词。例如,电子设备100可以接收针对图3的(A)所示的用户界面310中的搜索栏311输入的关键词“西安旅游路线”,然后接收针对用户界面310中的搜索控件312的触摸操作(例如点击操作),电子设备可以基于该触摸操作获取用户输入的关键词“西安旅游路线”。本申请对第一关键词的形式不作限定,例如但不限于为文本、图片、音频等。
S102:电子设备向网络设备发送第一搜索请求。
在一种实施方式中，电子设备获取到用户输入的第一关键词后，可以向网络设备发送第一搜索请求。第一搜索请求用于请求获取和第一关键词相关的搜索结果，可选地，第一搜索请求例如但不限于包括上述第一关键词。
S103:网络设备获取第一网页集合。
在一种实施方式中,网络设备接收到第一搜索请求后,可以获取和第一关键词相关的至少一个网页,即第一网页集合。例如,网络设备可以对第一关键词进行语义分析,并从网页数据库中获取和第一关键词相关的至少一个网页(即第一网页集合)。
S104:网络设备向电子设备发送第一网页集合。
S105:电子设备显示第一网页集合。
在一些示例中,S103-S105所述的第一网页集合包括图3的(B)所示的用户界面320中的多个网页卡片指示的网页:网页卡片322指示的网页1(标题为“西安旅游路线图”)、网页卡片323指示的网页2(标题为“西安旅游路线-攻略”)和网页卡片324指示的网页3(标题为“西安旅游全攻略”),其中,第一网页集合和第一关键词相关,第一关键词为图3的(A)所示的用户界面310包括的搜索栏311中显示的“西安旅游路线”。
在一种实施方式中,电子设备显示的第一网页集合是经过排序的,可选地,第一网页集合中和第一关键词的相关性越高的网页的显示位置越优先。例如,第一关键词为图3的(A)所示的用户界面310包括的搜索栏311中显示的“西安旅游路线”,图3的(B)所示的用户界面320示出了第一网页集合中排列在前三位的网页的概要信息(可称为网页卡片),由于这三个网页和第一关键词的相关性从高到低依次为:网页1(标题为“西安旅游路线图”)、网页2(标题为“西安旅游路线-攻略”)和网页3(标题为“西安旅游全攻略”),因此,用户界面320中,按照从上往下的顺序依次为:指示网页1的网页卡片322、指示网页2的网页卡片323和指示网页3的网页卡片324。
S106:电子设备接收第一用户操作。
在一种实施方式中,第一用户操作的形式可以但不限于为触摸操作(例如点击操作)、语音输入、运动姿态(例如手势)、脑电波等,例如,第一用户操作为针对图3的(B)所示的用户界面320中的卡片控件325的触摸操作。本申请对用户操作的具体形式不作限定。
S107:电子设备从第一网页集合包括的网页内容中获取第一内容集合。
在一种实施方式中,电子设备可以响应于第一用户操作,从第一网页集合包括的网页内容中筛选出符合用户搜索意图的网页内容(即第一内容集合)。第一内容集合包括的内容和第一关键词相关,例如,第一内容集合包括的内容是第一网页集合包括的网页内容中和第一关键词的相关性较强的内容。
接下来示例性示出一种获取第一内容集合的方式。
首先，电子设备可以从第一网页集合中提取出排列在前N个的网页的内容(可称为第二内容集合)，N为正整数。例如，电子设备可以在后台静默加载这N个网页，并通过javascript(JS)脚本注入来获取这N个网页的超文本标记语言(hyper text markup language，html)代码，然后可以解析这N个网页的html代码来获取这N个网页中的文本、图片、视频、音频、文档等多种类型的内容，这些获取到的内容可以构成第二内容集合。在一种实施方式中，电子设备获取图片、视频、音频和文档等非文本的网页内容时，可以同时获取和这些网页内容相关的文本，例如名称、标题、批注等。在一种实施方式中，当电子设备无法从网页中获取到视频、音频和文档等需要下载的网页内容的原始文件时，可以获取用于查看原始文件的地址信息(例如原始文件的超链接、网页内容所属的网页的超链接等)，和/或，用于指示网页内容的标识信息(例如标题、名称、封面图片和缩略图等)。
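作为补充说明，以下给出一段示意性的Python代码(仅为一种可能实现的草图，requests、BeautifulSoup等库的选择及函数命名均为示例假设，并非本申请的限定实现)，用于示意加载网页并解析其html代码、按类型提取内容以构成第二内容集合的过程：

```python
# 示意性示例：加载单个网页并解析html，按类型提取文本/图片/视频等内容(假设性实现)
import requests
from bs4 import BeautifulSoup

def extract_web_contents(url: str) -> dict:
    """返回该网页中按类型提取出的内容，作为第二内容集合中属于该网页的部分。"""
    html = requests.get(url, timeout=5).text        # 实际实现中也可在后台静默加载并通过JS脚本注入获取html代码
    soup = BeautifulSoup(html, "html.parser")

    # 文本内容
    texts = [p.get_text(strip=True) for p in soup.find_all("p") if p.get_text(strip=True)]

    # 图片内容：同时记录与图片相关的文本(例如标题/批注)和原始文件的地址信息
    images = [{"地址信息": img.get("src"), "相关文本": img.get("alt", "")}
              for img in soup.find_all("img")]

    # 视频内容：无法下载原始文件时，保留地址信息和标识信息(例如标题、封面图片)
    videos = [{"地址信息": v.get("src") or url, "封面": v.get("poster", ""), "标题": v.get("title", "")}
              for v in soup.find_all("video")]

    return {"所属网页": url, "文本": texts, "图片": images, "视频": videos}

# 用法示例：对搜索得到的排列在前N个的网页依次提取内容，构成第二内容集合
second_content_set = [extract_web_contents(u) for u in ["https://example.com/page1"]]
```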
示例性地,如图29所示,第二内容集合可以包括文本集合、图片集合和视频集合,其中,文本集合包括q个文本内容:文本1、文本2、…、文本q,图片集合包括m个图片内容:图片1、图片2、…、图片m,视频集合包括n个视频内容:视频1、视频2、…、视频n,q、m和n为大于1的正整数。
然后,电子设备可以计算第二内容集合中的每个内容和第一关键词的相似度。在一种实施方式中,计算相似度可以包括两步:
第一,电子设备可以将第一关键词转换为M维的第一特征向量,并将第二内容集合中的每个内容转换为M维的第二特征向量,M为正整数。在一种实施方式中,若第一关键词属于图片、视频、音频和文档等非文本类型,电子设备可以先将第一关键词转换为文本,再将文本转换为第一特征向量。在一种实施方式中,对于第二内容集合中的文本内容,电子设备可以直接将文本内容转换为第二特征向量,对于第二内容集合中的图片、视频、音频和文档等非文本内容,电子设备可以先将非文本内容转换为文本,再将文本转换为第二特征向量。其中,转换非文本内容得到的文本例如但不限于包括:名称、标题、批注等和非文本内容相关的文本,以及对非文本内容进行解析后得到的文本。
示例性地，如图29所示，电子设备可以基于向量模型实现向量转换操作。图29以M为128为例进行说明，并将向量中的任意一维通过gi(i为小于或等于128的正整数)来表征。第一关键词输入向量模型得到的第一特征向量(也可称为关键词向量)可以表征为：(g1,g2,…,g128)p。对于第二内容集合，文本集合包括的文本1、文本2、…、文本q输入向量模型得到的第二特征向量(也可称为文本向量)可以分别表征为：(g1,g2,…,g128)文本1、(g1,g2,…,g128)文本2、…、(g1,g2,…,g128)文本q；图片集合包括的图片1、图片2、…、图片m输入向量模型得到的第二特征向量(也可称为图片向量)可以分别表征为：(g1,g2,…,g128)图片1、(g1,g2,…,g128)图片2、…、(g1,g2,…,g128)图片m；视频集合包括的视频1、视频2、…、视频n输入向量模型得到的第二特征向量(也可称为视频向量)可以分别表征为：(g1,g2,…,g128)视频1、(g1,g2,…,g128)视频2、…、(g1,g2,…,g128)视频n。上述文本向量、图片向量和视频向量可以构成第二特征向量集合。
第二,电子设备可以计算每个第二特征向量和第一特征向量之间的相似度。
示例性地，如图29所示，电子设备可以通过点积计算得到第二特征向量和第一特征向量(g1,g2,…,g128)p之间的相似度。以上述文本1转换而来的文本向量(g1,g2,…,g128)文本1为例说明相似度的计算方式，该文本向量和第一特征向量之间的相似度可以表征为：S文本1=g1(p)×g1(文本1)+g2(p)×g2(文本1)+…+g128(p)×g128(文本1)，即两个向量在对应维度上的乘积之和，其他相似度的计算方式类似。因此，文本集合包括的文本1、文本2、…、文本q对应的相似度(可称为文本相似度)可以表征为S文本1、S文本2、…、S文本q；图片集合包括的图片1、图片2、…、图片m对应的相似度(可称为图片相似度)可以表征为S图片1、S图片2、…、S图片m；视频集合包括的视频1、视频2、…、视频n对应的相似度(可称为视频相似度)可以表征为S视频1、S视频2、…、S视频n。上述文本相似度、图片相似度和视频相似度可以构成相似度集合。
最后,电子设备可以将和第一特征向量的相似度大于或等于预设阈值的第二特征向量对应的网页内容筛选出来,这些筛选出来的内容可以构成第一内容集合。
示例性地，如图29所示，电子设备可以从相似度集合中筛选出大于或等于预设阈值的相似度，这些相似度对应的网页内容可以构成第一内容集合。假设在文本相似度集合中，文本1、文本3和文本4对应的文本相似度大于或等于预设阈值，因此，文本1、文本3和文本4可以构成相关文本集合。假设在图片相似度集合中，图片1和图片2对应的图片相似度大于或等于预设阈值，因此，图片1和图片2可以构成相关图片集合。假设在视频相似度集合中，视频1和视频3对应的视频相似度大于或等于预设阈值，因此，视频1和视频3可以构成相关视频集合。上述相关文本集合、相关图片集合和相关视频集合可以构成第一内容集合。
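作为补充说明，以下给出一段示意性的Python代码(仅为一种可能实现的草图，其中的向量模型以占位函数代替，阈值、函数命名等均为示例假设，并非本申请的限定实现)，用于示意将第一关键词和网页内容转换为128维特征向量、通过点积计算相似度并按预设阈值筛选出第一内容集合的过程：

```python
# 示意性示例：向量转换、点积相似度计算与阈值筛选(假设性实现，向量模型以占位函数代替)
import numpy as np

def embed(content: str, dim: int = 128) -> np.ndarray:
    """向量模型的占位实现：实际可替换为任意将文本映射为M维特征向量的模型；
    对于图片、视频等非文本内容，可先转换为相关文本(例如标题、批注)再输入该函数。"""
    rng = np.random.default_rng(abs(hash(content)) % (2 ** 32))
    vec = rng.standard_normal(dim)
    return vec / np.linalg.norm(vec)

def filter_first_content_set(keyword: str, contents: list, threshold: float = 0.3) -> list:
    """计算每个内容与第一关键词的点积相似度，保留大于或等于预设阈值的内容，构成第一内容集合。"""
    keyword_vec = embed(keyword)                                  # 第一特征向量(g1,g2,…,g128)p
    selected = []
    for content in contents:
        similarity = float(np.dot(keyword_vec, embed(content)))  # 对应维度乘积之和
        if similarity >= threshold:
            selected.append(content)
    return selected
```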
基于上述示出的获取第一内容集合的方式进行场景示例:假设第一关键词为图3的(A)所示的用户界面310中的搜索栏311中显示的关键词“西安旅游路线”。假设q为4,m和n为3,第二内容集合中的文本集合可以包括文本1(“热门的旅游路线有…”)、文本2(“西安,古称长安…”)、文本3(“景点路线一:陕西历史博物馆—钟鼓楼...”)和文本4(“旅游路线:西仓—回民街...”)。图片集合可以包括图片1(旅游路线中的景点的图片)、图片2(旅游路线中的景点的图片)和图片3(特色食物图片)。视频集合可以包括视频1(旅游攻略视频,标题为“西安旅游攻略视频”)、视频2(游客采访视频,标题为“游客点赞西安”)和视频3(旅游攻略视频,标题为“西安3日游攻略”)。由于文本1、文本3、文本4、图片1、图片2、视频1和视频3和第一关键词的相似度大于或等于预设阈值,因此,第一内容集合包括图4的(A)所示的用户界面410中的文本内容4121(用于显示文本1)、文本内容4122(用于显示文本4)、文本内容4123(用于显示文本3),图4的(B)所示的用户界面420中的图片内容4211(用于显示图片1)、图片内容4212(用于显示图片2),图4的(C)所示的用户界面430中的视频内容4311(用于显示视频1)、视频内容4312(用于显示视频3)。
不限于上述示例的获取第一内容集合的方式，在另一种实施方式中，电子设备也可以在计算得到第二内容集合中每个内容和第一关键词的相似度之后，比较属于同一网页的同一个类型的网页内容的相似度，并确定出其中相似度排列在前M位的网页内容(M为正整数)，这些确定出来的网页内容可以构成第一内容集合，可选地，不同类型对应的M不同。例如，假设第二内容集合包括网页1中的文本1、文本2、图片1、图片2、图片3，文本1和第一关键词的相似度高于文本2和第一关键词的相似度，图片1-图片3按照和第一关键词的相似度从高到低依次为：图片3、图片1、图片2，假设文本对应的M为1，图片对应的M为2，则第一内容集合包括文本1、图片1和图片3。不限于此，也可以是比较属于同一网页的同一个类型的网页内容的相似度，并确定出其中相似度大于或等于预设阈值的网页内容，这些确定出来的网页内容可以构成第一内容集合，本申请对此不作限定。
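作为补充说明，以下给出一段示意性的Python代码(仅为假设性草图，函数命名和数据组织方式均为示例假设)，用于示意上述“比较属于同一网页的同一个类型的网页内容的相似度，并保留相似度排列在前M位的网页内容”的筛选方式：

```python
# 示意性示例：比较属于同一网页的同一类型的内容的相似度，保留前M位(假设性实现)
from collections import defaultdict

def top_m_per_page_and_type(items: list, m_per_type: dict) -> list:
    """items中的每一项为(网页id, 内容类型, 内容, 与第一关键词的相似度)；m_per_type为各类型对应的M。"""
    groups = defaultdict(list)
    for page_id, content_type, content, similarity in items:
        groups[(page_id, content_type)].append((similarity, content))

    selected = []
    for (page_id, content_type), group in groups.items():
        group.sort(key=lambda pair: pair[0], reverse=True)   # 按相似度从高到低排列
        keep = m_per_type.get(content_type, 1)               # 不同类型对应的M可以不同
        selected += [(page_id, content_type, c) for _, c in group[:keep]]
    return selected
```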
在一种实施方式中,S107之前,该方法还包括:电子设备接收用户操作,响应于该用户操作,从第一网页集合中选择出第二网页集合。例如图5A所示的实施方式中,电子设备可以从图5A的(A)所示的用户界面510中的网页卡片511、网页卡片512和网页卡片513中选择出网页卡片511和网页卡片513,因此,第二网页集合为网页卡片511指示的网页1(标题为“西安旅游路线图”)和网页卡片513指示的网页3(标题为“西安旅游全攻略”)。在这种情况下,获取第一内容集合的方式和上述说明的获取第一内容集合的方式类似,区别在于,上述第一网页集合中排列在前N个的网页需替换为第二网页集合,即第二内容集合包括的网页内容为第二网页集合中的网页的内容,也就是说,第一内容集合是根据第二网页集合获取的。
S108:根据第一内容集合生成第一卡片集。
S109:电子设备显示第一卡片集。
在一种实施方式中,第一卡片集可以包括至少一张卡片。
在一种实施方式中,第一卡片集包括的任意一张卡片可以包括一种类型的网页内容,也就是说,不同卡片用于显示不同类型的网页内容,例如,文本、图片、视频、音频或文档等多种网页内容分别在不同的卡片上显示,不限于此,在另一些示例中,划分类型可以更细致,例如,静态图片和动态图片在不同的卡片上显示,在另一些示例中,划分类型可以更粗糙,例如,视频、音频和文档等文件在同一张卡片上显示。
示例性地,假设第一内容集合为S107中示例的:包括文本1(“热门的旅游路线有…”)、文本3(“景点路线一:陕西历史博物馆—钟鼓楼...”)、文本4(“旅游路线:西仓—回民街...”)、图片1(旅游路线中的景点的图片)、图片2(旅游路线中的景点的图片)、视频1(旅游攻略视频,标题为“西安旅游攻略视频”)和视频3(旅游攻略视频,标题为“西安3日游攻略”)。则S108-S109所述的第一卡片集可以包括三张卡片,这三张卡片分别包括文本内容、图片内容和视频内容。包括文本内容的卡片可参见图4的(A)所示的用户界面410中的文本卡片412,包括图片内容的卡片可参见图4的(B)所示的用户界面420中的图片卡片421,包括视频内容的卡片可参见图4的(C)所示的用户界面430中的视频卡片431。
不限于此,在另一种实施方式中,第一卡片集包括的任意一张卡片可以包括一个网页中的内容,也就是说,不同网页的内容在不同的卡片上显示。例如,假设第一内容集合为S107中示例的:包括文本1(属于网页1)、文本3(属于网页2)、文本4(属于网页3)、图片1(属于网页1)、图片2(属于网页2)、视频1(属于网页2)和视频3(属于网页1)。则S108-S109所述的第一卡片集可以包括三张卡片,这三张卡片分别包括网页1、网页2和网页3的内容。包括网页1的内容的卡片具体包括图4的(A)所示的用户界面410中的文本内容4121(用于显示文本1)、图4的(B)所示的用户界面420中的图片内容4211(用于显示图片1)、图4的(C)所示的用户界面430中的视频内容4312(用于显示视频3)。包括网页2的内容的卡片具体包括用户界面410中的文本内容4123(用于显示文本3)、用户界面420中的图片4212(用于显示图片2)、用户界面430中的视频内容4311(用于显示视频1)。包括网页3的内容的卡片具体包括用户界面410中的文本内容4122(用于显示文本4)。本申请对卡片的分类方式不作限定。
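作为补充说明，以下给出一段示意性的Python代码(仅为假设性草图，函数命名为示例假设，并非本申请的限定实现)，用于示意按内容类型或按所属网页将第一内容集合划分为第一卡片集中的多张卡片的两种分类方式：

```python
# 示意性示例：按内容类型或按所属网页，将第一内容集合划分为第一卡片集中的多张卡片(假设性实现)
from collections import defaultdict

def build_cards(first_content_set: list, group_by: str = "type") -> list:
    """first_content_set中的每一项为{"网页": 网页id, "类型": 内容类型, "内容": 网页内容}；
    group_by取"type"时不同类型的内容在不同卡片上显示，取"page"时不同网页的内容在不同卡片上显示。"""
    cards = defaultdict(list)
    for item in first_content_set:
        key = item["类型"] if group_by == "type" else item["网页"]
        cards[key].append(item["内容"])
    return [{"卡片标题": key, "内容列表": contents} for key, contents in cards.items()]
```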
在一种实施方式中,电子设备在第一卡片集中显示图片、视频、音频和文档等非文本的网页内容时,可以同时显示这些网页内容相关的文本,例如名称、标题、批注等。
在一种实施方式中,对于视频、音频和文档等无法获取到原始文件的网页内容,电子设备可以在第一卡片集中显示这些网页内容的标识信息,以此指示这些网页内容,标识信息例如但不限于包括标题、名称、封面图片和缩略图等。例如,S108-S109所述的第一卡片集包括图4的(C)所示的用户界面430中的视频卡片431,视频卡片431中的视频内容4311和视频内容4312的显示形式为封面图片。
在一种实施方式中,对于第一卡片集包括的任意一张卡片,电子设备可以根据该卡片中的网页内容和第一关键词的相关性(例如通过S107中的相似度来表征)来确定网页内容的显示顺序,可选地,相关性越强,网页内容在该卡片中的显示顺序越优先。例如,S108-S109所述的第一卡片集包括图4的(A)所示的用户界面410中的文本卡片412。按照和第一关键词“西安旅游路线”的相关性从高到低,文本卡片412中显示的多个文本内容从上往下排列依次为:文本内容4121(“热门的旅游路线有…”)、文本内容4122(“旅游路线:西仓—回民街...”)和文本内容4123(“景点路线一:陕西历史博物馆—钟鼓楼...”)。
不限于此,在另一种实施方式中,对于第一卡片集包括的任意一张卡片,电子设备可以根据该卡片中的网页内容所属的网页的显示顺序来确定网页内容的显示顺序,可选地,网页内容所属的网页的显示顺序越优先,网页内容在该卡片中的显示顺序越优先,其中,网页的显示顺序例如是根据网页和第一关键词的相关性确定的,相关性越强,网页在网页列表中的显示顺序越优先。例如,S108-S109所述的第一卡片集包括图4的(B)所示的用户界面420包括的图片卡片421。图3的(B)所示的用户界面320包括的网页列表中,按照从上往下的顺序依次为指示网页1的网页卡片322、指示网页2的网页卡片323和指示网页3的网页卡 片324。因此,在图片卡片421中,网页1中的图片内容4211显示在网页2中的图片内容4212上面。
例如,假设第二网页集合为S107中示例的用户选择的网页1和网页3,并假设根据第二网页集合获取的第一内容集合包括:网页1中的文本1(“热门的旅游路线有…”)和网页3中的文本4(“旅游路线:西仓—回民街...”),则S108-S109所述的第一卡片集包括图5A的(B)所示的用户界面520中的文本卡片521。图3的(B)所示的用户界面320包括的网页列表中,按照从上往下的顺序依次为指示网页1的网页卡片322、指示网页2的网页卡片323和指示网页3的网页卡片324,因此,在文本卡片521中,文本内容4121(用于显示文本1)显示在文本内容4122(用于显示文本4)上面。
在另一种实施方式中,对于第一卡片集包括的任意一张卡片,电子设备可以根据该卡片中的网页内容所属的网页的显示顺序,以及网页内容和第一关键词的相关性(例如通过S107中的相似度来表征)一起确定这多个网页内容的显示顺序。可选地,对于属于不同网页的网页内容,所属的网页的显示顺序越优先,该网页内容在该卡片中的显示顺序越优先;对于属于同一个网页的网页内容,和第一关键词的相关性越强,该网页内容在该卡片中的显示顺序越优先。
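作为补充说明，以下给出一段示意性的Python代码(仅为假设性草图，函数命名为示例假设)，用于示意上述“所属网页的显示顺序优先、同一网页内再按与第一关键词的相关性排序”的排序规则，其排序结果与下文的具体示例一致：

```python
# 示意性示例：先按所属网页的显示顺序、同一网页内再按与第一关键词的相似度，确定卡片内内容的显示顺序(假设性实现)
def sort_card_contents(contents: list, page_order: list) -> list:
    """contents中的每一项为{"网页": 网页id, "相似度": 与第一关键词的相似度, "内容": 网页内容}；
    page_order为网页id按显示顺序排列的列表。"""
    return sorted(contents, key=lambda c: (page_order.index(c["网页"]), -c["相似度"]))

# 用法示例：网页显示顺序为网页1、网页2、网页3时，网页1中相似度0.8的内容排在相似度0.7的内容之前
```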
例如，假设第一内容集合包括：网页1中的内容1(和第一关键词的相似度为0.7)、内容2(和第一关键词的相似度为0.8)，网页2中的内容3(和第一关键词的相似度为0.5)，网页3中的内容4(和第一关键词的相似度为0.9)，这些内容均在语义汇聚卡片集中的卡片1上显示。假设网页的显示顺序从上往下依次为：网页1、网页2和网页3，并且由于网页1中的内容2对应的相似度大于内容1对应的相似度，因此，在卡片1中，按照从上往下的显示顺序依次为：内容2、内容1、内容3、内容4。其中，在一种情况下，内容1和内容2属于同一类型，例如均为文本，在另一种情况下，内容1和内容2属于不同类型，例如内容1为文本，内容2为图片。
在一种实施方式中，S108之前，该方法还包括：电子设备接收用户操作，响应于该用户操作，确定第一卡片集包括的卡片，例如但不限于卡片的数量、类型、卡片包括的网页内容的数量。例如图5B所示的实施方式中，电子设备确定第一卡片集包括文本卡片、图片卡片和视频卡片，并且文本卡片最多包括3个文本内容，图片卡片最多包括2个图片内容，视频卡片最多包括2个视频内容。
在一种实施方式中，该方法还包括：电子设备显示第一卡片集时，可以接收针对第一卡片集中的任一内容的用户操作，响应于该用户操作，显示该内容所属的网页的详细内容，可选地，电子设备可以显示与该内容相关的网页内容，可选地，该网页内容可以定位到该网页内容所属的网页中的位置显示，可选地，电子设备显示的界面中该内容可以突出显示(例如高亮显示)。例如图6所示的实施方式中，电子设备可以响应于作用于图6的(A)所示的用户界面410中的文本卡片412包括的文本内容4121的触摸操作，显示图6的(B)所示的用户界面600。例如图25B和图25C所示的实施方式中，电子设备100可以响应于作用于图25B所示的用户界面2530中的关联内容2512B的触摸操作，显示图25C所示的用户界面2540。可选地，电子设备可以根据记录的该网页的地址信息(例如统一资源定位符(uniform resource locator，url))跳转到该网页的内容页显示。
在一种实施方式中,该方法还包括:电子设备显示第一卡片集时,可以接收针对第一卡片集中的任一卡片的用户操作,响应于该用户操作,显示该卡片中的其他内容。例如,图13的(C)所示的用户界面1320中的卡片1321显示有图片内容4211和图片内容4212,电子设备可以接收针对图13的(C)所示的用户界面1320中的卡片1321的滑动操作,响应于该滑 动操作,显示卡片1321中的其他内容,例如显示图13的(D)所示的用户界面1320,图13的(D)所示的用户界面1320中的图片卡片421中显示有文本内容4121、文本内容4122和文本内容4123。
S110:电子设备接收第二用户操作。
S111:电子设备修改第一卡片集。
在一种实施方式中,电子设备可以响应于第二用户操作,修改第一卡片集,并显示修改后的第一卡片集。
在一种实施方式中，电子设备可以响应于针对第一卡片集中的任一卡片的第二用户操作，在第一卡片集中删除该卡片。例如图7A所示的实施方式中，电子设备可以响应于针对图7A所示的用户界面410中的文本卡片412的拖动操作，删除文本卡片412，此时可以显示图7A的(B)所示的用户界面700。例如图7B所示的实施方式中，电子设备可以响应于针对图7B的(A)所示的用户界面410中的删除控件4124的点击操作，删除删除控件4124对应的文本卡片412，此时可以显示图7B的(B)所示的用户界面700。
在一种实施方式中,电子设备可以响应于针对第一卡片集中的任一卡片的第二用户操作,调整该卡片在第一卡片集中的显示位置。例如,电子设备显示图13的(A)所示的用户界面410时,可以接收针对用户界面410中的文本卡片412的拖动操作,例如,该拖动操作为左右拖动或上下拖动,并且用户手指离开屏幕时电子设备还显示了图13的(B)所示的用户界面1310中的图片卡片421,电子设备响应于该拖动操作,切换文本卡片412和图片卡片421的显示位置。
在一种实施方式中,电子设备可以响应于针对第一卡片集中的任一卡片的第二用户操作,将该卡片和其他卡片合并为一张卡片。例如图13所示的实施方式中,电子设备可以接收针对图13的(A)所示的用户界面410中的文本卡片412的拖动操作,并且用户手指离开屏幕时电子设备还显示了图13的(B)所示的用户界面1310中的图片卡片421,电子设备响应于该拖动操作,合并文本卡片412和图片卡片421,合并后的卡片可参见图13的(C)和(D)所示的用户界面1320中的卡片1321。
在一种实施方式中,电子设备可以响应于针对第一卡片集中的任一卡片的用户操作,显示用于编辑该卡片的用户界面。例如图8所示的实施方式中,电子设备可以响应于针对图8的(A)所示的用户界面410中的文本卡片412的长按操作,显示文本卡片412的编辑界面:图8的(B)所示的用户界面800。
在一种实施方式中,电子设备可以响应于针对第一卡片集中的任一卡片中的任一内容的第二用户操作,显示该内容的编辑界面,用户可以基于该编辑界面对该内容进行编辑(如增加和删除)。例如图9所示的实施方式中,电子设备可以接收用户输入的字符并添加到文本内容4121中。
在一种实施方式中，电子设备可以响应于针对第一卡片集中的任一卡片中的任一内容的第二用户操作，在第一卡片集中删除该内容。例如图10所示的实施方式中，电子设备可以响应于针对图10的(A)所示的用户界面800中的删除控件810的点击操作，在文本卡片412中删除删除控件810对应的文本内容4121，此时可以显示图10的(B)所示的用户界面1000。
在一种实施方式中,电子设备可以响应于针对第一卡片集中的任一卡片中的任一内容的第二用户操作,调整该内容在该卡片中的显示位置。例如图11所示的实施方式中,电子设备可以响应于针对图11的(A)所示的用户界面800中的文本内容4122的拖动操作,切换文本内容4121和文本内容4122的显示位置。
在一种实施方式中,电子设备可以响应于针对第一卡片集中的任一卡片中的任一内容的第二用户操作,调整该内容的显示卡片(即用于显示该网页内容的卡片),也可以理解为是将该网页内容移动至用户选择的其他卡片上显示,也可理解为是调整该内容在第一卡片集中的显示位置。例如图12所示的实施方式中,电子设备可以接收针对图12的(A)所示的用户界面1210包括的文本卡片412中的文本内容4121的拖动操作,并且用户手指离开屏幕时电子设备还显示了图12的(B)所示的用户界面1220中的图片卡片421,电子设备响应于该拖动操作,将文本内容4121移动至图片卡片421上显示(即将文本内容4121的显示卡片从文本卡片412调整为图片卡片421)。
在一种实施方式中,电子设备可以响应于针对第一卡片集的第二用户操作,在第一卡片集中新建卡片(可称为自定义卡片)。例如图14所示的实施方式中,电子设备可以响应于针对图14的(A)所示的用户界面410中的新增控件415的点击操作,新建卡片,即图14的(B)所示的用户界面1400中的卡片1410。
在一种实施方式中,电子设备可以响应于用户操作,在自定义卡片中添加第一卡片集包括的网页内容。添加方式可以但不限于包括以下三种情况:
在一种情况下,电子设备可以将用户选择的卡片包括的全部内容添加到自定义卡片中。例如图15所示的实施方式中,图15的(A)所示的用户界面1510中的选项1513A对应的文本卡片和选项1514A对应的图片卡片已被用户选择,电子设备可以将图4的(A)所示的用户界面410中的文本卡片412和图4的(B)所示的用户界面420中的图片卡片421包括的内容添加到自定义卡片中,此时,自定义卡片为图15的(B)所示的用户界面1520中的卡片1410。
在另一种情况下,电子设备可以将用户选择的第一卡片集中的内容添加到自定义卡片中。例如图16所示的实施方式中,图16的(A)所示的用户界面1610中的选项1613A对应的文本内容4121和选项1614A对应的图片内容4211已被用户选择,电子设备可以将文本内容4121和图片内容4211添加到自定义卡片中,此时,自定义卡片为图16的(B)所示的用户界面1620中的卡片1410。
在另一种情况下,电子设备可以响应于针对第一卡片集中的任一卡片中的任一内容的第二用户操作,将该内容移动至自定义卡片中。例如图17A所示的实施方式中,电子设备可以响应于针对图17A的(A)所示的用户界面1210包括的文本卡片412中的文本内容4121的拖动操作,将文本内容4121移动至自定义卡片上显示,此时,自定义卡片为图17A的(B)所示的用户界面1710中的卡片1410。例如图17B所示的实施方式中,图17B的(A)所示的用户界面1720包括的文本卡片412中的文本内容4121,以及图17B的(B)所示的用户界面1730包括的图片卡片421中的图片内容4211已被选择,电子设备可以响应于针对图片内容4211的拖动操作,将上述已选择的文本内容4121和图片内容4211移动至自定义卡片上显示,此时,自定义卡片为图17B的(C)所示的用户界面1740中的卡片1410。
S112:电子设备保存第一卡片集的信息。
在一种实施方式中,电子设备可以响应于用户操作,保存第一卡片集的信息。例如,电子设备可以响应于针对图4的(A)所示的用户界面410中的保存控件414的触摸操作(例如点击操作),保存第一卡片集的信息。
在一种实施方式中,电子设备可以将第一卡片集的信息保存到电子设备的存储器中,后续可以从存储器中获取第一卡片集的信息以用于二次加载第一卡片集。在另一种实施方式中,电子设备可以将第一卡片集的信息发送至云端服务器保存,后续可以向云端服务器获取第一 卡片集的信息以用于二次加载第一卡片集。
在一种实施方式中,电子设备保存的第一卡片集的信息可以包括但不限于:第一卡片集的基础信息、资源信息、位置信息和网页信息等信息。其中,基础信息为用户搜索的第一关键词。资源信息包括第一卡片集中的卡片的数量,以及第一卡片集中的多个网页内容(可以理解为是网页内容的集合),可选地,资源信息包括网页内容的原始文件,可选地,资源信息包括图片、视频、音频和文档等非文本的网页内容的相关信息,例如但不限于包括以下至少一项:名称、标题、批注、封面图片、缩略图、原始文件的地址信息(如超链接)等。例如,对于视频、音频和文档等无法获取到原始文件的网页内容,可以将这些网页内容的相关信息作为资源信息保存起来。位置信息包括卡片在第一卡片集中的显示位置(例如是第一卡片集中的第几张卡片),以及每个网页内容在第一卡片集中的显示位置(例如是第一卡片集中的第几张卡片的第几个网页内容),可选地,位置信息可以用于电子设备后续显示第一卡片集时确定绘制卡片和网页内容的位置。网页信息包括每个网页内容所属的网页的地址信息(例如url),可选地,网页信息可以用于实现显示网页内容所属的网页的内容页的功能,例如,电子设备可以响应于针对第一卡片集中的任一内容的双击操作,使用网页信息中该内容所属的网页的地址信息跳转至该网页,此时可以显示该网页的具体内容。
例如,假设第一卡片集包括三张卡片:图4的(A)所示的用户界面410中的文本卡片412、图4的(B)所示的用户界面420中的图片卡片421和图4的(C)所示的用户界面430中的视频卡片431。因此,电子设备保存的第一卡片集的基础信息为用户搜索的关键词,即用户界面410中的标题411显示的“西安旅游路线”。电子设备保存的第一卡片集的资源信息为第一卡片集中的网页内容的集合,具体包括:文本卡片412中的文本内容4121、文本内容4122和文本内容4123,图片卡片421中的图片内容4211和图片内容4212,视频卡片431中的视频内容4311和视频内容4312。电子设备保存的第一卡片集的位置信息包括卡片在第一卡片集中的显示位置,例如文本卡片412是第一卡片集中的第一张卡片。位置信息还包括资源信息中的每个网页内容在第一卡片集中的显示位置,例如文本内容4121是第一卡片集中的第一张卡片(即文本卡片412)中的第一个内容。电子设备保存的第一卡片集的网页信息包括资源信息中的每个网页内容所属的网页的地址信息,例如文本内容4121所属的网页1的url,即用户界面410中文本内容4121对应的来源信息4121A包括的“网址111”。
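作为补充说明，以下给出一段示意性的Python代码(仅为假设性草图，字段命名为示例假设，并非本申请限定的保存格式)，用于示意上述第一卡片集的基础信息、资源信息、位置信息和网页信息的一种可能的组织方式：

```python
# 示意性示例：第一卡片集保存时的信息组织方式，包括基础信息、资源信息、位置信息和网页信息(字段命名为假设)
first_card_set_info = {
    "基础信息": {"第一关键词": "西安旅游路线"},
    "资源信息": {
        "卡片数量": 3,
        "网页内容": [
            {"标识": "文本内容4121", "类型": "文本", "数据": "热门的旅游路线有…"},
            {"标识": "图片内容4211", "类型": "图片", "封面/缩略图": "…", "原始文件地址信息": "…"},
            {"标识": "视频内容4311", "类型": "视频", "标题": "西安旅游攻略视频", "原始文件地址信息": "…"},
        ],
    },
    "位置信息": {
        "卡片位置": {"文本卡片412": 1, "图片卡片421": 2, "视频卡片431": 3},
        "内容位置": {"文本内容4121": (1, 1)},   # 第一张卡片中的第一个内容
    },
    "网页信息": {"文本内容4121": "网址111"},     # 每个网页内容所属网页的地址信息(例如url)
}
```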
S113:电子设备显示至少一个入口控件。
在一种实施方式中,任意一个入口控件用于触发显示第一卡片集。可选地,电子设备显示至少一个入口控件也可以理解为是电子设备设置至少一种二次加载第一卡片集的方式,任意一个入口控件可以实现一种二次加载第一卡片集的方式。
在一种实施方式中,电子设备可以响应于用户操作,确定上述至少一个入口控件,可以理解为是用户可以选择二次加载第一卡片集的入口。例如图18所示的实施方式中,图18的(B)所示的用户界面1800中,选项1811指示的浏览器的收藏夹、选项1812指示的桌面、选项1813指示的负一屏、选项1814指示的图库已被用户选择,可以表征电子设备会在上述已被选择的位置显示二次加载第一卡片集的入口。
在一种实施方式中,电子设备显示入口控件之前,可以先生成入口控件,具体示例如下所示:
例如,电子设备可以在浏览器应用的收藏夹中生成/创建可点击的组件(即入口控件),该组件中可以存储有第一卡片集的标识信息,该标识信息可以用于电子设备接收到针对该组件的用户操作(如点击操作)时,确定待显示/加载的第一卡片集。
例如,电子设备可以在桌面或负一屏应用中生成/创建微件形式的入口控件,并存储入口控件和第一卡片集的标识信息的对应关系,该标识信息可以用于电子设备接收到针对该入口控件的用户操作时,确定待显示/加载的第一卡片集。
例如,电子设备可以在图库中生成一张图片,该图片可以包括二维码或其他形式的内容,以二维码为例进行说明,该二维码中包含第一卡片集的标识信息。电子设备可以通过识别该二维码获取到第一卡片集的标识信息,从而确定待显示/加载的第一卡片集。
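作为补充说明，以下给出一段示意性的Python代码(仅为假设性草图，qrcode库的选择、标识信息的编码格式及函数命名均为示例假设)，用于示意在图库中生成包含卡片集标识信息的二维码图片作为二次加载入口的过程：

```python
# 示意性示例：生成包含卡片集标识信息的二维码图片，作为二次加载的入口(假设性实现)
import qrcode

def create_card_set_entry_image(card_set_id: str, title: str, save_path: str) -> None:
    """将卡片集的标识信息编码进二维码并保存为图片；后续识别该二维码即可确定待显示/加载的卡片集。"""
    payload = f"cardset://{card_set_id}?title={title}"   # 标识信息的编码格式仅为假设
    image = qrcode.make(payload)
    image.save(save_path)

# 用法示例
create_card_set_entry_image("card_set_001", "西安旅游路线", "西安旅游路线入口.png")
```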
接下来示出一些电子设备显示入口控件的示例:
例如图19A所示的实施方式中,图19A的(B)所示的用户界面1920为浏览器应用的收藏夹界面,用户界面1920中的选项1922A是指示基础信息为“西安旅游路线”的卡片集的入口控件。
例如图19B所示的实施方式中,用户界面1930为浏览器应用的收藏夹界面,用户界面1930中的列表1934包括指示基础信息为“西安旅游路线”的卡片集的入口控件(即选项1934A),以及指示基础信息为“武汉旅游路线”的卡片集的入口控件。
例如图20所示的实施方式中,图20的(A)所示的用户界面2010为桌面,用户界面2010中的控件2011是指示基础信息为“西安旅游路线”的卡片集的入口控件。
例如图21所示的实施方式中,图21的(A)所示的用户界面2110为负一屏应用的用户界面,用户界面2110中的控件2111是指示基础信息为“西安旅游路线”的卡片集的入口控件。
例如图22所示的实施方式中,图22的(B)所示的用户界面2220为图库应用的用户界面,用户界面2220中的图片2221用于指示基础信息为“西安旅游路线”的卡片集,用户界面2220中的识图控件2222为该卡片集的入口控件。
S114:电子设备接收针对第一入口控件的第三用户操作。
在一种实施方式中,第一入口控件为上述至少一个入口控件中的任意一个入口控件。
S115:电子设备显示第一卡片集。
在一种实施方式中,电子设备可以响应于第三用户操作,显示第一卡片集,在一些示例中,电子设备可以打开用于实现第一卡片集的显示功能的应用程序(例如浏览器),并在该应用程序的用户界面中显示第一卡片集,在另一些示例中,电子设备可以调用第一入口控件所在的应用程序中的显示组件,通过该显示组件显示第一卡片集。
S114-S115的场景示例如下所示:
例如图19A所示的实施方式中,电子设备可以接收针对图19A的(B)所示的用户界面1920中的选项1922A的触摸操作,显示图19A的(C)所示的用户界面410。
例如图19B所示的实施方式中,电子设备可以接收针对用户界面1930中的选项1934A的触摸操作,显示图19A的(C)所示的用户界面410。
例如图20所示的实施方式中,电子设备可以接收针对图20的(A)所示的用户界面2010中的控件2011的触摸操作,显示图20的(C)所示的用户界面410。
例如图21所示的实施方式中,电子设备可以接收针对图21的(A)所示的用户界面2110中的控件2111的触摸操作,显示图21的(C)所示的用户界面410。
例如图22所示的实施方式中,电子设备可以接收针对图22的(B)所示的用户界面2220中的识图控件2222的触摸操作,显示图22的(C)所示的用户界面410。不限于此,电子设备还可以响应于用于扫描图22的(B)所示的用户界面2220中的图片2221的用户操作,显示图22的(C)所示的用户界面410,电子设备例如使用聊天应用或支付应用中的扫一扫功 能。
在一种实施方式中，第三用户操作还用于选择第一卡片集中的卡片1，则S115可以包括：电子设备响应于第三用户操作，显示第一卡片集中的卡片1。例如图20所示的实施方式中，电子设备响应于针对图20的(A)所示的用户界面2010中的卡片控件2011C(用于显示文本卡片)的触摸操作，显示图4的(A)所示的用户界面410。电子设备响应于针对图20的(B)所示的桌面2020中的卡片控件2011C(用于显示图片卡片)的触摸操作，显示图4的(B)所示的用户界面420。
在一种实施方式中，该方法还包括：电子设备可以响应于针对第一入口控件的用户操作，切换第一入口控件中显示的卡片集。例如图20所示的实施方式中，电子设备可以响应于针对图20的(B)所示的桌面2020中的控件2011(用于显示基础信息为“西安旅游路线”的卡片集)的滑动操作，显示图20的(D)所示的用户界面2030，用户界面2030中的控件2011用于显示基础信息为“武汉旅游路线”的卡片集。
在一种实施方式中,S115包括:电子设备可以根据第一卡片集的标识信息获取保存的第一卡片集的信息,并根据该信息显示第一卡片集。可选地,第一卡片集的标识信息可以是根据第一入口控件确定的,例如,电子设备存储有第一入口控件和第一卡片集的标识信息的对应关系,当接收到针对第一入口控件的第三用户操作时,根据该对应关系确定出和第一入口控件对应的第一卡片集的标识信息,可选地,第一入口控件显示卡片集1的内容时,根据该对应关系确定出来的是卡片集1的标识信息,第一入口控件显示卡片集2的内容时,根据该对应关系确定出来的是卡片集2的标识信息。
示例性地,电子设备可以根据保存的第一卡片集的基础信息获取到第一关键词,根据保存的第一卡片集的资源信息确定第一卡片集包括的卡片的数量以及每张卡片中的网页内容,根据保存的第一卡片集的位置信息确定资源信息中的卡片、网页内容的绘制位置/顺序,根据保存的第一卡片集的网页信息实现查看第一卡片集中的网页内容所属的网页的内容页的功能。
在一种实施方式中,第一卡片集的信息保存在电子设备的存储器中,电子设备可以从存储器中获取第一卡片集的信息。在另一种实施方式中,第一卡片集的信息保存在云端服务器中,电子设备可以向云端服务器发送请求消息(例如包括第一卡片集的标识信息),并接收云端服务器基于该请求消息返回的第一卡片集的信息。
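作为补充说明，以下给出一段示意性的Python代码(仅为假设性草图，入口控件与标识信息的对应关系、存储路径及函数命名均为示例假设)，用于示意根据第一入口控件确定卡片集的标识信息、并从本机存储器或云端服务器获取已保存的第一卡片集信息的过程：

```python
# 示意性示例：根据入口控件确定卡片集的标识信息，并从本机存储器或云端服务器获取已保存的卡片集信息(假设性实现)
import json
import os

# 入口控件与卡片集标识信息的对应关系(示例假设)
ENTRY_TO_CARD_SET = {"收藏夹选项1922A": "card_set_001", "桌面控件2011": "card_set_001"}

def load_card_set_info(entry_id: str, local_dir: str = "./card_sets") -> dict:
    card_set_id = ENTRY_TO_CARD_SET[entry_id]            # 根据第一入口控件确定第一卡片集的标识信息
    local_path = os.path.join(local_dir, f"{card_set_id}.json")
    if os.path.exists(local_path):                       # 信息保存在本机存储器时直接读取
        with open(local_path, encoding="utf-8") as f:
            return json.load(f)
    return request_from_cloud(card_set_id)               # 否则向云端服务器发送请求消息获取(接口为假设)

def request_from_cloud(card_set_id: str) -> dict:
    raise NotImplementedError("示例中省略云端请求的具体实现")
```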
在一种实施方式中,用户仍然可以对S115中的第一卡片集进行查看和编辑,例如但不限于双击内容查看所属网页的内容页、滑动查看卡片中的更多内容、拖动卡片以删除卡片、拖动卡片以调整卡片的显示位置、将多张卡片合并为一张卡片、长按进入卡片的编辑界面、点击进入内容的编辑界面、点击删除卡片中的内容、拖动内容以调整内容在卡片中的显示位置、拖动内容以调整该内容的显示卡片、新建卡片、菜单选择内容添加到新建的卡片中、拖动内容移动到新建的卡片中等,具体可参见S109-S111的说明,不再赘述。
可以理解地,第一网页集合中的任意一个网页可以包括和第一关键词相关的内容,也可以包括和第一关键词无关的内容,增加了用户寻找符合搜索意图的内容的难度。而本申请可以从第一网页集合包括的网页内容中筛选出和第一关键词相关的内容,并通过第一卡片集显示这些筛选出来的内容,让用户可以通过第一卡片集快速获取搜索结果中符合用户搜索意图、有价值的内容,无需点击多个网页和浏览完整个网页,减少用户整理搜索结果的时间,提高搜索效率。其中,第一卡片集可以包括图片、视频、音频、文档等多种类型的网页内容,并不局限于文本类型,内容更加丰富,更加符合用户需求,用户体验感更好。
同时,本申请支持对第一卡片集进行查看、修改、保存和二次查看/加载,满足用户的个性化需求,无需用户复制网页内容到记事本等应用中再修改和保存,也无需用户收藏整个网 页并在下次查看时再次浏览完整个网页寻找所需内容(也无法修改),减少用户操作和二次查看/加载的时间。并且二次查看/加载的方式多种多样(例如通过浏览器的收藏夹、桌面、负一屏和图库等应用显示二次加载卡片集的入口),灵活性强,增加了用户再次获取第一卡片集的速度和准确性,用户使用更加方便。
请参见图30,图30是本申请实施例提供的又一种搜索方法的流程示意图。该方法可以包括但不限于如下步骤:
S201:电子设备接收针对第一网页集合中的第一网页的第四用户操作。
在一种实施方式中,S201为可选的步骤。
在一种实施方式中,第一网页集合为图28所示的S103-S105中描述的第一网页集合。
在一种实施方式中,S201之前,电子设备可以执行图28所示的S101-S105。
在一种实施方式中,第一网页为第一网页集合中的任意一个网页。
例如,图23的(A)所示的用户界面320中的网页列表示出了第一网页集合中的三个网页的概要信息(可称为网页卡片):指示网页1的网页卡片322、指示网页2的网页卡片323和指示网页3的网页卡片324。电子设备可以接收针对网页卡片322的触摸操作,网页卡片322指示的网页1即为第一网页。
S202:电子设备向网络设备请求获取第一网页的显示数据。
在一种实施方式中,电子设备可以响应于第四用户操作,向网络设备发送请求消息,以请求获取第一网页的显示数据。
S203:网络设备向电子设备发送第一网页数据。
在一种实施方式中,网络设备可以基于电子设备发送的请求消息,向电子设备发送第一网页数据,即第一网页的显示数据。
S204:电子设备根据第一网页数据显示第一网页。
例如,电子设备接收到针对图23的(A)所示的用户界面320中的网页卡片322的触摸操作之后,可以显示网页卡片322指示的网页1的具体内容,即显示图23的(B)所示的用户界面2300。
S205:电子设备接收第五用户操作。
在一种实施方式中,第五用户操作用于触发查看针对当前显示的第一网页的第二卡片集,第二卡片集包括第一网页中的内容。例如,第五用户操作为针对图23的(B)所示的用户界面2300中的卡片控件2310的点击操作。
S206:电子设备从第一网页的内容中获取第三内容集合。
在一种实施方式中,电子设备可以从第一网页的内容中筛选出符合用户搜索意图的第三内容集合。在一种实施方式中,第三内容集合包括的内容和第一关键词相关,例如,第三内容集合包括的内容是第一网页的内容中和第一关键词的相关性较强的内容。
电子设备获取第三内容集合的方式和图28的S107中描述的获取第一内容集合的方式类似,接下来示例性说明电子设备获取第三内容集合的方式,需要说明的是,下面主要说明二者的区别之处,其他说明可参见图28的S107中获取第一内容集合的方式的说明。
在一种实施方式中,电子设备可以先计算第一网页中每个内容和第一关键词的相似度,然后将相似度大于或等于预设阈值的网页内容筛选出来,这些筛选出来的内容可以构成第三内容集合。
在另一种实施方式中,电子设备可以先计算第一网页中每个内容和第一关键词的相似度, 然后比较属于同一类型的网页内容的相似度,并确定出其中相似度排列在前M位的网页内容(M为正整数),这些确定出来的网页内容可以构成第三内容集合。
S207:电子设备根据第三内容集合生成第二卡片集。
S208:电子设备显示第二卡片集。
在一种实施方式中,第二卡片集可以包括至少一张卡片。
在一种实施方式中,第二卡片集包括的任意一张卡片可以包括一种类型的网页内容,例如,文本、图片、视频、音频或文档等多种网页内容分别在不同的卡片上显示,不限于此,在另一些示例中,划分类型可以更细致,例如,静态图片和动态图片在不同的卡片上显示,在另一些示例中,划分类型可以更粗糙,例如,视频、音频和文档等文件在同一张卡片上显示。
示例性地,假设第一网页为图23的(B)所示的用户界面2300显示的网页1,S206-S207所述的第三内容集合包括:用户界面2300中的文本1(“热门的旅游路线有...”)、文本2(“去西安一定要去秦始皇陵...秦始皇陵的门票120元...前往秦始皇陵的车费20元...”)、网页1中的文本3(“西安热门景点”)(未在用户界面2300中示出)、用户界面2300中的图片1、网页1中的视频1(未在用户界面2300中示出)。则S207-S208所述的第二卡片集可以包括三张卡片,这三张卡片分别包括文本内容、图片内容和视频内容。包括文本内容的卡片可参见图24的(A)所示的用户界面2410中的文本卡片2413(用于显示文本1、文本2和文本3),包括图片内容的卡片可参见图24的(B)所示的用户界面2420中的图片卡片2421(用于显示图片1),包括视频内容的卡片可参见图24的(C)所示的用户界面2430中的视频卡片2431(用于显示视频1)。
在一种实施方式中,对于第二卡片集包括的任意一张卡片,电子设备可以根据该卡片中的网页内容和第一关键词的相关性(例如通过相似度来表征)来确定网页内容的显示顺序,可选地,相关性越强,网页内容在该卡片中的显示顺序越优先。例如可参见图24的(A)所示的用户界面2410中的文本卡片2413,按照和第一关键词“西安旅游路线”的相关性从高到低,文本卡片2413中显示的多个文本内容从上往下排列依次为:文本内容2413A(“热门的旅游路线有…”)、文本内容2413B(“去西安一定要去秦始皇陵...秦始皇陵的门票120元...前往秦始皇陵的车费20元...”)和文本内容2413C(“西安热门景点...”)。
在一种实施方式中,用户可以对第二卡片集进行查看和编辑,例如但不限于双击内容查看所属网页的内容页、滑动查看卡片中的更多内容、拖动卡片以删除卡片、拖动卡片以调整卡片的显示位置、将多张卡片合并为一张卡片、长按进入卡片的编辑界面、点击进入内容的编辑界面、点击删除卡片中的内容、拖动内容以调整内容在卡片中的显示位置、拖动内容以调整该内容的显示卡片、新建卡片、菜单选择内容添加到新建的卡片中、拖动内容移动到新建的卡片中等,具体可参见图28的S109-S111中第一卡片集的说明,具体示例和图6、图7A、图7B、图8-图16、图17A和图17B类似,不再赘述。
S209:电子设备接收针对第一网页中的第一内容的第六用户操作。
在一种实施方式中,第一内容为第一网页包括的任意一个网页内容。
接下来示例性示出一些第一内容和第六用户操作:
例如图25A所示的实施方式中,第一内容为图25A的(A)所示的用户界面2300中的文本“秦始皇陵”,第六用户操作为针对用户界面2300包括的功能列表2330中的保存选项2330A的触摸操作。
例如图26所示的实施方式中,第一内容为图26的(A)所示的用户界面2300中的图片 2340A,第六用户操作包括作用于图片2340A的滑动操作,和针对图26的(B)所示的用户界面2610中的保存控件2611或保存选项2613C的触摸操作。
例如图27A所示的实施方式中，第一内容为图27A的(A)所示的用户界面2300中的图片2340A，第六用户操作为作用于图片2340A的拖动操作。
S210:电子设备从第一网页的内容中获取和第一内容关联的至少一个第二内容。
在一种实施方式中,电子设备可以按照预设规则获取和第一内容关联的至少一个第二内容。例如,第一内容为图26的(A)所示的用户界面2300中的图片2340A,则可以获取到两个第二内容:用户界面2300所示的图片2340A的标题2340B(“西安旅游热门景点”)和图片2340A的说明2340C(“上图主要展示了…”)。
在另一种实施方式中，第二内容可以是第一网页的内容中和第一内容的相关性(例如通过相似度来表征)较强的网页内容。在一些示例中，电子设备可以先计算第一网页中每个内容和第一内容的相似度，然后将相似度大于或等于预设阈值的网页内容筛选出来作为第二内容，计算相似度的说明可参见图28的S107中的相似度的说明。不限于此，在另一些示例中，电子设备也可以将相似度排列在前M位的网页内容筛选出来作为第二内容(M为正整数)。例如，第一内容为文本“西安旅游路线一：…”，则第二内容可以是“西安旅游路线二：…”。例如，第一内容为图25A的(A)所示的用户界面2300中的文本2320包括的文本“秦始皇陵”，则可以获取到两个第二内容：文本2320包括的文本“秦始皇陵的门票120元”、文本2320包括的文本“前往秦始皇陵的车费20元”。
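作为补充说明，以下给出一段示意性的Python代码(仅为假设性草图，其中embed为任意向量模型的占位参数，阈值和函数命名均为示例假设)，用于示意从第一网页的内容中筛选出与选中的第一内容相似度大于或等于预设阈值的第二内容的过程：

```python
# 示意性示例：从第一网页的内容中筛选出与选中的第一内容相似度大于或等于预设阈值的第二内容(假设性实现)
import numpy as np

def find_related_contents(selected_content: str, page_contents: list, embed, threshold: float = 0.5) -> list:
    """embed为任意将内容映射为特征向量的函数(可参见前文相似度计算的示例)；
    返回的内容即与第一内容关联的至少一个第二内容。"""
    selected_vec = embed(selected_content)
    related = []
    for content in page_contents:
        if content == selected_content:
            continue
        if float(np.dot(selected_vec, embed(content))) >= threshold:
            related.append(content)
    return related
```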
S211:电子设备根据第一内容和上述至少一个第二内容显示第二卡片集。
在一种实施方式中,电子设备可以先在第二卡片集中新建一张卡片(可称为自定义卡片),并在自定义卡片内显示第一内容和上述至少一个第二内容,该自定义卡片为S208中的第二卡片集包括的卡片以外的卡片。
在一种实施方式中，一张自定义卡片可以仅包括一个网页的网页内容，例如，电子设备接收到针对第一网页中的第一内容的第六用户操作后，可以在第二卡片集中新建自定义卡片1，自定义卡片1用于显示第一网页的内容。电子设备接收到针对第二网页中的网页内容的用户操作后，可以在第二卡片集中新建自定义卡片2，自定义卡片2用于显示第二网页的内容。可选地，电子设备可以在显示自定义卡片1时显示第一网页的信息，以指示自定义卡片1用于显示第一网页的内容，例如，假设第一网页为图25A的(A)所示的用户界面2300显示的网页1，假设按照S210的示例，第一内容为文本“秦始皇陵”，两个第二内容为文本“秦始皇陵的门票120元”和文本“前往秦始皇陵的车费20元”。电子设备可以通过图25A的(B)所示的用户界面2510显示卡片2512(即自定义卡片1)，卡片2512包括文本内容2512A(用于显示第一内容)、关联内容2512B和关联内容2512C(用于显示两个第二内容)。用户界面2510还包括标题信息2511，标题信息2511中显示有网页1的标题“西安旅游路线图”、网址信息“网址111”和来源信息“来源aaa”。
在一种实施方式中,电子设备可以按照第一内容和第二内容在第一网页中的显示顺序,在自定义卡片中显示第一内容和第二内容,具体示例如下所示:
例如图25A所示的实施方式中,由于图25A的(A)所示的用户界面2300显示的网页1中,文本“秦始皇陵”显示在文本1(“秦始皇陵的门票120元”)之前,文本1又显示在文本2(“前往秦始皇陵的车费20元”)之前,因此,图25A的(B)所示的用户界面2510包括的自定义卡片2512中,按照从上往下依次显示:文本2512A(用于显示文本“秦始皇陵”)、关联内容2512B(用于显示文本1)、关联内容2512C(用于显示文本2)。
例如图26所示的实施方式中,由于图26的(A)所示的用户界面2300显示的网页1中,图片2340A上面显示有标题2340B(“西安旅游热门景点”),下面显示有说明2340C(“上图主要展示了…”),因此,图26的(B)所示的用户界面2620包括的自定义卡片2512中,按照从上往下依次显示:图片2340A、标题2340B和说明2340C。
在另一种实施方式中,电子设备也可以根据第一内容、第二内容和第一关键词的相关性(例如通过相似度来表征),来确定第一内容和第二内容在自定义卡片中的显示顺序,可选地,相关性越强,网页内容在自定义卡片中的显示顺序越优先。
不限于上述示例的情况,在另一种实施方式中,电子设备也可以在第二卡片集中的已有卡片内显示第一内容和第二内容,显示方式和上述在自定义卡片中显示第一内容和第二内容的方式类似。可选地,假设文本、图片和视频这三种网页内容分别在第二卡片集中的不同卡片上显示,电子设备可以根据第一内容和/或第二内容的类型,确定显示第一内容和第二内容的卡片,例如,第一内容为文本时,在包括文本内容的卡片上显示第一内容和第二内容;或者,第一内容为文本,第二内容为图片时,在包括文本内容的卡片上显示第一内容,在包括图片内容的卡片上显示第二内容。
在另一种实施方式中,电子设备也可以响应于用户操作,确定第二卡片集中用于显示第一内容和第二内容的卡片。例如图27A所示的实施方式中,图27A的(B)所示的用户界面2700中,自定义卡片的选项2714中显示的选择控件2714A为选中状态,可以表征自定义卡片已被选择。电子设备可以将第一内容和第二内容保存到自定义卡片中,此时,自定义卡片为图26的(C)所示的用户界面2620所示的卡片2512。
不限于上述示例的方法流程,在另一种实施方式中,电子设备可以不接收第五用户操作(即不执行S205),在S204之后接收第六用户操作(即执行S209),电子设备可以响应于第六用户操作,执行S206-S207和执行S210,然后显示第二卡片集(具体可参见S208和S211的说明),其中,S206-S207和S210的顺序不作限定。例如,按照S209的示例一,第六用户操作为针对图25A的(A)所示的用户界面2300中的保存选项2330A的点击操作。电子设备可以响应于第六用户操作,生成第二卡片集,第二卡片集包括四张卡片:图24的(A)所示的用户界面2410中的文本卡片2413、图24的(B)所示的用户界面2420中的图片卡片2421、图24的(C)所示的用户界面2430中的视频卡片2431和图25A的(B)所示的用户界面2510中的自定义卡片2512,其中,前三张卡片用于显示第三内容集合,第四张卡片用于显示第一内容和上述至少一个第二内容。
不限于上述示例的方法流程,在另一种实施方式中,电子设备也可以先执行S209-S211,再执行S205-S208,可选地,S211具体为:电子设备根据第一内容和至少一个第二内容生成第二卡片集,第二卡片集包括至少一个卡片,这至少一个卡片包括第一内容和至少一个内容。S207具体为:电子设备根据第三内容集合在第二卡片集中生成至少一个卡片,这至少一个卡片的说明可参见S207中第二卡片集包括的卡片的说明。在一些示例中,S211具体包括:电子设备根据第一内容和至少一个第二内容生成一个自定义卡片,该自定义卡片包括第一内容和至少一个第二内容,例如,该自定义卡片为图25A的(B)所示的自定义卡片2512,或者图26的(C)所示的自定义卡片2512。在另一些示例中,S211具体包括:电子设备根据第一内容和至少一个第二内容生成多个卡片,这多个卡片包括第一内容和至少一个第二内容,可选地,这多个卡片用于显示不同类型的内容,具体说明和图28的S109中的不同卡片用于显示不同类型的网页内容的说明类似,不再赘述。例如,这多个卡片为图27B的(A)所示的用户界面2810中的文本卡片2811,以及图27B的(B)所示的用户界面2820中的图片卡 片2821。
不限于上述示例的方法流程,在另一种实施方式中,电子设备也可以仅执行S209-S211,不执行S205-S208。
不限于上述示例的方法流程,在另一种实施方式中,电子设备也可以仅执行S205-S208,不执行S209-S211。
S212:电子设备保存第二卡片集的信息。
在一种实施方式中,S212和图28的S112类似,具体可参见图28的S112的说明。在一种实施方式中,第二卡片集中的自定义卡片仅包括第一网页的内容,电子设备可以保存第一网页的信息,例如但不限于包括第一网页的标题和地址信息(如url)等,第一网页的信息可以用于在显示自定义卡片时一起显示(具体示例可参见S211中的电子设备可以在显示自定义卡片1时显示第一网页的信息的说明)。
在一种实施方式中,电子设备还可以显示至少一个入口控件(用于触发显示第二卡片集),具体说明和图28的S113类似,不再赘述。
在一种实施方式中,电子设备可以响应于针对任意一个入口控件的用户操作,显示第二卡片集,具体说明和图28的S114-S115类似,不再赘述。
不限于上述示例的情况，在一种实施方式中，电子设备可以响应于用户操作，取消将第一内容或任意一个第二内容添加到第二卡片集中，例如图25A所示的实施方式中，图25A的(B)所示的用户界面2510中的卡片2512包括文本内容2512A(用于显示第一内容)、文本内容2512B(用于显示一个第二内容)和文本内容2512C(用于显示另一个第二内容)。用户界面2510中的删除选项2512D用于触发删除文本内容2512A，用户界面2510中的删除选项2512E用于触发删除文本内容2512B，用户界面2510中的删除选项2512F用于触发删除文本内容2512C。也就是说，用户可以自行选择是否保留选中的第一内容，以及和第一内容关联的任意一个第二内容，使用更灵活，用户体验感更好。
可以理解地,第一网页集合中的任意一个网页可以包括和第一关键词相关的内容,也可以包括和第一关键词无关的内容,增加了用户寻找符合搜索意图的内容的难度。而本申请可以在用户选择查看第一网页的具体内容后,从第一网页的内容中筛选出和第一关键词相关的内容,并通过第二卡片集显示这些筛选出来的内容,让用户可以通过第二卡片集快速获取第一网页中符合用户搜索意图、有价值的内容,无需用户浏览完整个网页,减少用户整理搜索结果的时间,提高搜索效率,用户体验感更好。第二卡片集和第一卡片集一样,可以包括多种类型的内容,也支持查看、修改、保存和二次查看/加载,因此具有和第一卡片集一样的效果,具体可参见图28中第一卡片集的效果,不再赘述。
并且,本申请支持用户自定义选中第一网页中的内容添加到第二卡片集中,也就是说,可以将同一个网页中电子设备自动筛选的内容和用户选中的内容保存到同一个功能窗口(即第二卡片集)下,灵活性强,大大方便了用户的阅读和二次查看/加载。电子设备还可以从第一网页的内容中筛选出和第一内容相关的第二内容,并通过第二卡片集显示第一内容和第二内容,无需用户人工查找和添加第二内容,进一步减少获取符合用户搜索意图的内容的时间。
不限于上述示例的情况,在另一些示例中,电子设备也可以不筛选和第一内容相关的第二内容,第二卡片集不显示第二内容,本申请对此不作限定。
基于以上实施例的说明,可以理解的是,电子设备不仅可以提取搜索功能返回的多个网页中和搜索关键词相关的网页内容,也可以提取用户查看的网页中和搜索关键词相关的网页内容,并以卡片集的形式为用户展示这些提取出来的内容。并且,电子设备还可以将用户在 网页内容页中选中的内容,以及该内容页中和该选中内容关联的内容,一起以卡片集的形式展示出来。因此,用户只需通过卡片集就可以快速获取到搜索结果中符合搜索意图的内容,大大减少了用户整理搜索结果和查找所需内容的时间。并且,该卡片集可以支持阅读,查看内容所属的原始网页,进行增加、删除、修改、调整顺序等编辑操作,保存和二次加载等功能,满足用户多方面的需求,使用更加方便。
其中,在本申请实施例的描述中,除非另有说明,“/”表示或的意思,例如,A/B可以表示A或B;文本中的“和/或”仅仅是一种描述关联对象的关联关系,表示可以存在三种关系,例如,A和/或B,可以表示:单独存在A,同时存在A和B,单独存在B这三种情况,另外,在本申请实施例的描述中,“多个”是指两个或多于两个。
以上,术语“第一”、“第二”仅用于描述目的,而不能理解为暗示或暗示相对重要性或者隐含指明所指示的技术特征的数量。由此,限定有“第一”、“第二”的特征可以明示或者隐含地包括一个或者更多个该特征,在本申请实施例的描述中,除非另有说明,“多个”的含义是两个或两个以上。
本申请各实施例提供的方法中，可以全部或部分地通过软件、硬件、固件或者其任意组合来实现。当使用软件实现时，可以全部或部分地以计算机程序产品的形式实现。所述计算机程序产品包括一个或多个计算机指令。在计算机上加载和执行所述计算机程序指令时，全部或部分地产生按照本申请实施例所述的流程或功能。所述计算机可以是通用计算机、专用计算机、计算机网络、网络设备、用户设备或者其他可编程装置。所述计算机指令可以存储在计算机可读存储介质中，或者从一个计算机可读存储介质向另一个计算机可读存储介质传输，例如，所述计算机指令可以从一个网站站点、计算机、服务器或数据中心通过有线(例如同轴电缆、光纤、数字用户线(digital subscriber line，DSL))或无线(例如红外、无线、微波等)方式向另一个网站站点、计算机、服务器或数据中心进行传输。所述计算机可读存储介质可以是计算机可以存取的任何可用介质或者是包含一个或多个可用介质集成的服务器、数据中心等数据存储设备。所述可用介质可以是磁性介质(例如，软盘、硬盘、磁带)、光介质(例如，数字视频光盘(digital video disc，DVD))、或者半导体介质(例如，固态硬盘(solid state disk，SSD))等。
以上实施例仅用以说明本申请的技术方案，而非对其限制；尽管参照前述实施例对本申请进行了详细的说明，本领域的普通技术人员应当理解：其依然可以对前述各实施例所记载的技术方案进行修改，或者对其中部分技术特征进行等同替换；而这些修改或者替换，并不使相应技术方案的本质脱离本申请各实施例技术方案的范围。

Claims (30)

  1. 一种搜索方法,其特征在于,应用于电子设备,所述方法包括:
    获取用户输入的第一关键词;
    向网络设备发送第一搜索请求,所述第一搜索请求包括所述第一关键词;
    接收所述网络设备基于所述第一搜索请求发送的搜索结果集合;
    显示第一界面,所述第一界面包括所述搜索结果集合,所述搜索结果集合包括第一搜索结果和第二搜索结果,所述第一搜索结果和第一网页相关,所述第二搜索结果和第二网页相关;
    接收第一用户操作;
    响应所述第一用户操作,生成第一卡片集,所述第一卡片集包括第一卡片,所述第一卡片包括第一内容和第二内容,所述第一网页包括所述第一内容,所述第二网页包括所述第二内容;
    在所述第一用户操作之后,显示第二界面,所述第二界面包括所述第一卡片。
  2. 如权利要求1所述的方法,其特征在于,所述第一内容和所述第二内容均与所述第一关键词相关联。
  3. 如权利要求1所述的方法,其特征在于,所述方法还包括:
    接收作用于所述第一界面中的所述第一搜索结果的第二用户操作;
    显示所述第一网页;
    接收第三用户操作;
    响应所述第三用户操作，生成第二卡片集，所述第二卡片集包括第二卡片，所述第二卡片包括第三内容和第四内容，所述第三内容和所述第四内容均为所述第一网页的内容；
    显示第三界面,所述第三界面包括所述第二卡片。
  4. 如权利要求3所述的方法,其特征在于,所述方法还包括:
    接收用户作用于所述第一网页上的选择操作;
    根据所述选择操作获取第一信息;
    接收第四用户操作;
    响应所述第四用户操作,生成第三卡片,第三卡片为所述第二卡片集中的卡片,所述第三卡片包括第五内容和第六内容,所述第五内容和所述第六内容均为所述第一网页的内容,所述第五内容和所述第六内容均与所述第一信息相关联;
    显示第四界面,所述第四界面包括所述第三卡片。
  5. 如权利要求1所述的方法,其特征在于,所述第一卡片集还包括第四卡片,所述第一卡片包括第一类型的内容,所述第四卡片包括第二类型的内容,所述第一类型和所述第二类型不同。
  6. 如权利要求1所述的方法,其特征在于,所述生成第一卡片集,包括:
    接收第五用户操作;
    响应所述第五用户操作,从所述搜索结果集合中选择所述第一搜索结果和所述第二搜索 结果;
    根据所述第一搜索结果和所述第二搜索结果生成所述第一卡片集。
  7. 如权利要求1所述的方法,其特征在于,当所述第一内容和所述第一关键词的相似度大于所述第二内容和所述第一关键词的相似度时,所述第一内容位于所述第二内容之前;或者,
    当所述第一搜索结果位于所述第二搜索结果之前时,所述第一内容位于所述第二内容之前。
  8. 如权利要求1-7任一项所述的方法,其特征在于,所述显示第二界面之后,所述方法还包括:
    显示第五界面，所述第五界面包括第一控件，所述第五界面为所述电子设备的桌面、负一屏或第一应用的收藏界面，所述第一控件与所述第一卡片集相关联；
    接收作用于所述第一控件的第六用户操作;
    显示第六界面,所述第六界面包括所述第一卡片集中的第五卡片。
  9. 如权利要求1-7任一项所述的方法,其特征在于,所述方法还包括:
    接收作用于所述第二界面中的所述第一内容的第七用户操作;
    显示所述第一网页。
  10. 如权利要求1-7任一项所述的方法,其特征在于,所述方法还包括:
    接收作用于所述第二界面中的所述第一卡片的第八用户操作;
    响应所述第八用户操作,删除所述第一卡片;
    或者,接收作用于所述第二界面中的所述第一内容的第九用户操作;
    响应所述第九用户操作,删除所述第一内容。
  11. 如权利要求1-7任一项所述的方法,其特征在于,所述方法还包括:
    接收作用于所述第二界面中的所述第一内容的第十用户操作;
    响应所述第十用户操作,修改所述第一内容为第七内容;
    在所述第一卡片中显示所述第七内容。
  12. 如权利要求1-7任一项所述的方法,其特征在于,所述第一卡片中的所述第一内容位于所述第二内容之前;所述方法还包括:
    接收作用于所述第二界面中的所述第一内容的第十一用户操作;
    响应所述第十一用户操作,调整所述第一内容和所述第二内容在所述第一卡片中的显示位置;
    显示所述调整后的所述第一卡片,所述调整后的所述第一卡片中的所述第一内容位于所述第二内容之后。
  13. 如权利要求1-7任一项所述的方法,其特征在于,所述第一卡片集还包括第六卡片;所述方法还包括:
    接收作用于所述第二界面中的所述第一内容的第十二用户操作;
    响应所述第十二用户操作,将所述第一内容从所述第一卡片中移动至所述第六卡片中;
    响应所述第十二用户操作,所述第一卡片不包括所述第一内容,所述第六卡片包括所述第一内容。
  14. 如权利要求1-7任一项所述的方法,其特征在于,所述第一卡片集还包括第七卡片;所述方法还包括:
    接收作用于所述第二界面中的所述第一卡片的第十三用户操作;
    响应所述第十三用户操作，将所述第一卡片和所述第七卡片合并为第八卡片，其中，所述第八卡片包括所述第一卡片中的内容和所述第七卡片中的内容。
  15. 如权利要求1-7任一项所述的方法,其特征在于,所述方法还包括:
    接收第十四用户操作;
    显示第七界面,所述第七界面包括所述第一卡片集中的卡片的内容;
    接收作用于所述第七界面上的第十五用户操作;
    根据所述第十五用户操作获取所述第一卡片集中的第九卡片包括的第八内容;
    接收第十六用户操作;
    响应所述第十六用户操作,生成第十卡片,所述第十卡片为所述第一卡片集中的卡片,所述第十卡片包括所述第八内容;
    显示第八界面,所述第八界面包括所述第十卡片。
  16. 一种搜索方法,其特征在于,应用于电子设备,所述方法包括:
    显示第一网页;
    获取与所述第一网页相关的第一信息,其中,所述第一信息为搜索所述第一网页使用的第一关键词,或者所述第一信息为根据用户作用于所述第一网页上的选择操作获取的信息;
    接收第一用户操作;
    响应所述第一用户操作,生成第一卡片集,所述第一卡片集包括第一卡片,所述第一卡片包括第一内容和第二内容,所述第一内容和所述第二内容均为所述第一网页的内容,所述第一内容和所述第二内容均与所述第一信息相关联;
    在所述第一用户操作之后,显示第一界面,所述第一界面包括所述第一卡片。
  17. 如权利要求16所述的方法,其特征在于,所述显示第一网页,包括:
    获取用户输入的所述第一关键词;
    向网络设备发送第一搜索请求,所述第一搜索请求包括所述第一关键词;
    接收所述网络设备基于所述第一搜索请求发送的搜索结果集合;
    显示第二界面,所述第二界面包括所述搜索结果集合,所述搜索结果集合包括第一搜索结果和第二搜索结果,所述第一搜索结果和所述第一网页相关,所述第二搜索结果和第二网页相关;
    接收作用于所述第一搜索结果的第二用户操作;
    响应所述第二用户操作,显示所述第一网页。
  18. 如权利要求16所述的方法,其特征在于,当所述第一信息为根据用户作用于所述第一网页上的选择操作获取的信息时,所述第一信息包括所述第一网页中的文本、图片、音频和视频中的至少一项。
  19. 如权利要求18所述的方法,其特征在于,所述第一信息包括所述第一网页中的文本、图片、音频和视频中的至少一项时,所述第一卡片集还包括第二卡片,所述第一卡片包括第一类型的内容,所述第二卡片包括第二类型的内容,所述第一类型和所述第二类型不同。
  20. 如权利要求16所述的方法,其特征在于,当所述第一内容和所述第一信息的相似度大于所述第二内容和所述第一信息的相似度时,所述第一卡片中所述第一内容位于所述第二内容之前;或者,
    当所述第一网页中所述第一内容位于所述第二内容之前时,所述第一卡片中所述第一内容位于所述第二内容之前。
  21. 如权利要求16-20任一项所述的方法,其特征在于,所述显示第一界面之后,所述方法还包括:
    显示第三界面，所述第三界面包括第一控件，所述第三界面为所述电子设备的桌面、负一屏或第一应用的收藏界面，所述第一控件与所述第一卡片集相关联；
    接收作用于所述第一控件的第三用户操作;
    显示第四界面,所述第四界面包括所述第一卡片集中的第三卡片。
  22. 如权利要求16-20任一项所述的方法,其特征在于,所述方法还包括:
    接收作用于所述第一界面中的所述第一内容的第四用户操作;
    显示所述第一网页。
  23. 如权利要求16-20任一项所述的方法,其特征在于,所述方法还包括:
    接收作用于所述第一界面中的所述第一卡片的第五用户操作;
    响应所述第五用户操作,删除所述第一卡片;
    或者,接收作用于所述第一界面中的所述第一内容的第六用户操作;
    响应所述第六用户操作,删除所述第一内容。
  24. 如权利要求16-20任一项所述的方法,其特征在于,所述方法还包括:
    接收作用于所述第一界面中的所述第一内容的第七用户操作;
    响应所述第七用户操作,修改所述第一内容为第三内容;
    在所述第一卡片中显示所述第三内容。
  25. 如权利要求16-20任一项所述的方法,其特征在于,所述第一卡片中的所述第一内容位于所述第二内容之前;所述方法还包括:
    接收作用于所述第一界面中的所述第一内容的第八用户操作;
    响应所述第八用户操作,调整所述第一内容和所述第二内容在所述第一卡片中的显示位置;
    显示所述调整后的所述第一卡片,所述调整后的所述第一卡片中的所述第一内容位于所述第二内容之后。
  26. 如权利要求16-20任一项所述的方法,其特征在于,所述第一卡片集还包括第四卡片;所述方法还包括:
    接收作用于所述第一界面中的所述第一内容的第九用户操作;
    响应所述第九用户操作,将所述第一内容从所述第一卡片中移动至所述第四卡片中;
    响应所述第九用户操作,所述第一卡片不包括所述第一内容,所述第四卡片包括所述第一内容。
  27. 如权利要求16-20任一项所述的方法,其特征在于,所述第一卡片集还包括第五卡片;所述方法还包括:
    接收作用于所述第一界面中的所述第一卡片的第十用户操作;
    响应所述第十用户操作，将所述第一卡片和所述第五卡片合并为第六卡片，其中，所述第六卡片包括所述第一卡片中的内容和所述第五卡片中的内容。
  28. 如权利要求16-20任一项所述的方法,其特征在于,所述方法还包括:
    接收第十一用户操作;
    显示第五界面,所述第五界面包括所述第一卡片集中的卡片的内容;
    接收作用于所述第五界面上的第十二用户操作;
    根据所述第十二用户操作获取所述第一卡片集中的第七卡片包括的第四内容;
    接收第十三用户操作;
    响应所述第十三用户操作,生成第八卡片,所述第八卡片为所述第一卡片集中的卡片,所述第八卡片包括所述第四内容;
    显示第六界面,所述第六界面包括所述第八卡片。
  29. 一种电子设备,其特征在于,包括收发器、处理器和存储器,所述存储器用于存储计算机程序,所述处理器调用所述计算机程序,用于执行如权利要求1-28任一项所述的方法。
  30. 一种计算机存储介质,其特征在于,所述计算机存储介质存储有计算机程序,所述计算机程序被处理器执行时,实现权利要求1-28任一项所述的方法。