CN103927341A - Method and device for acquiring scene information - Google Patents


Info

Publication number
CN103927341A
Authority
CN
China
Legal status: Granted
Application number
CN201410120012.8A
Other languages
Chinese (zh)
Other versions
CN103927341B (en)
Inventor
肖鹏程
Current Assignee
Guangzhou Huaduo Network Technology Co Ltd
Original Assignee
Guangzhou Huaduo Network Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Guangzhou Huaduo Network Technology Co Ltd
Priority to CN201410120012.8A
Publication of CN103927341A
Application granted
Publication of CN103927341B
Legal status: Active

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90: Details of database functions independent of the retrieved data types
    • G06F16/903: Querying
    • G06F16/9032: Query formulation
    • G06F16/90324: Query formulation using system suggestions

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Databases & Information Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

The invention discloses a method and a device for acquiring scene information, and belongs to the field of Internet communications. The method includes: acquiring application data of a web application, where the application data is an interface image of the web application or a network transmission data packet transmitted between the web application and a server; and acquiring, according to the application data, the scene information of the scene in which the web application is located. The device comprises a first acquisition module and a second acquisition module. With this scheme, a web application operation platform can acquire the scene information of the web application's scene and, based on that scene information, provide better services to the web application, thereby increasing the kinds of services that can be provided to it.

Description

Method and device for obtaining scene information
Technical field
The present invention relates to the field of Internet communications, and in particular to a method and device for obtaining scene information.
Background
With the rapid development of Internet communication technology, more and more web applications have appeared. A web application must run on its corresponding web application operation platform, and a web application typically has several scenes, one of which is in use while the application runs on the platform. When a user wants to use a web application, the user first downloads the operation platform, selects a scene, and submits the selection to the web application; the web application then runs on the platform in the scene selected by the user.
While a web application runs on its platform, the platform can provide it with various services. Some of these services require the scene information of the scene the web application is currently using; however, at present the operation platform cannot obtain that scene information, and therefore cannot provide such services.
Summary of the invention
To address the problems of the prior art, the embodiments of the present invention provide a method and device for obtaining scene information. The technical solutions are as follows:
In one aspect, a method for obtaining scene information is provided, the method comprising:
obtaining application data of a web application, where the application data is an interface image of the web application or a network transmission data packet transmitted between the web application and a server;
obtaining, according to the application data, scene information of the scene in which the web application is located.
Specifically, obtaining the scene information according to the application data comprises:
selecting scene data packets from the network transmission data packets according to the stored scene data packet formats;
obtaining the scene information of the scene in which the web application is located according to the selected scene data packets.
Further, obtaining the scene information according to the selected scene data packets comprises:
obtaining a scene identification from the selected scene data packets;
obtaining, according to the scene identification, the corresponding scene information from a stored correspondence between scene identifications and scene information;
determining the obtained scene information as the scene information of the scene in which the web application is located.
Alternatively, obtaining the application data of the web application comprises:
obtaining the image sizes included in a correspondence between scene images and image sizes;
cutting out, from the image displayed in the interface of the web application and starting at a predetermined position, an image of the obtained image size, to obtain the interface image of the web application.
Further, obtaining the scene information according to the application data comprises:
obtaining, according to the obtained image size, the corresponding scene image from the correspondence between scene images and image sizes;
obtaining a first grayscale map and a second grayscale map, where the first grayscale map is the grayscale map of the interface image of the web application and the second grayscale map is the grayscale map of the obtained scene image;
if the first grayscale map is identical to the second grayscale map, obtaining the scene information of the scene in which the web application is located according to the obtained scene image.
Preferably, before the scene information is obtained according to the obtained scene image when the two grayscale maps are identical, the method further comprises:
judging whether the first grayscale map and the second grayscale map are identical according to the count of each gray value and the total number of gray values in the first grayscale map, and the count of each gray value and the total number of gray values in the second grayscale map.
Specifically, this judgment comprises:
calculating, according to the count of each gray value in the first grayscale map and the total number of gray values in the first grayscale map, a first ratio for each gray value in the first grayscale map;
calculating, according to the count of each gray value in the second grayscale map and the total number of gray values in the second grayscale map, a second ratio for each gray value in the second grayscale map;
if the first ratio and the second ratio are identical for every gray value, determining that the first grayscale map and the second grayscale map are identical; otherwise, determining that they are different.
In another aspect, a device for obtaining scene information is provided, the device comprising:
a first acquisition module, configured to obtain application data of a web application, where the application data is an interface image of the web application or a network transmission data packet transmitted between the web application and a server;
a second acquisition module, configured to obtain, according to the application data, scene information of the scene in which the web application is located.
Specifically, the second acquisition module comprises:
a selection unit, configured to select scene data packets from the network transmission data packets according to the stored scene data packet formats;
a first acquiring unit, configured to obtain the scene information of the scene in which the web application is located according to the selected scene data packets.
Further, the first acquiring unit comprises:
a first obtaining subunit, configured to obtain a scene identification from the selected scene data packets;
a second obtaining subunit, configured to obtain, according to the scene identification, the corresponding scene information from a stored correspondence between scene identifications and scene information;
a first determining subunit, configured to determine the obtained scene information as the scene information of the scene in which the web application is located.
Alternatively, the first acquisition module comprises:
a second acquiring unit, configured to obtain the image sizes included in a correspondence between scene images and image sizes;
an interception unit, configured to cut out, from the image displayed in the interface of the web application and starting at a predetermined position, an image of the obtained image size, to obtain the interface image of the web application.
Specifically, the second acquisition module comprises:
a third acquiring unit, configured to obtain, according to the obtained image size, the corresponding scene image from the correspondence between scene images and image sizes;
a fourth acquiring unit, configured to obtain a first grayscale map and a second grayscale map, where the first grayscale map is the grayscale map of the interface image of the web application and the second grayscale map is the grayscale map of the obtained scene image;
a fifth acquiring unit, configured to obtain, if the first grayscale map is identical to the second grayscale map, the scene information of the scene in which the web application is located according to the obtained scene image.
Preferably, the second acquisition module further comprises:
a judging unit, configured to judge whether the first grayscale map and the second grayscale map are identical according to the count of each gray value and the total number of gray values in each of the two grayscale maps.
Specifically, the judging unit comprises:
a first calculation subunit, configured to calculate, according to the count of each gray value in the first grayscale map and the total number of gray values in the first grayscale map, a first ratio for each gray value in the first grayscale map;
a second calculation subunit, configured to calculate, according to the count of each gray value in the second grayscale map and the total number of gray values in the second grayscale map, a second ratio for each gray value in the second grayscale map;
a second determining subunit, configured to determine that the first grayscale map and the second grayscale map are identical if the first ratio and the second ratio are identical for every gray value, and that they are different otherwise.
In the embodiments of the present invention, application data of a web application is obtained, the application data being an interface image of the web application or a network transmission data packet transmitted between the web application and a server. According to the application data, the web application operation platform can obtain the scene information of the scene in which the web application is located, use that scene information to provide better services to the web application, and thereby increase the kinds of services it can provide.
Brief description of the drawings
To describe the technical solutions in the embodiments of the present invention more clearly, the accompanying drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below illustrate only some embodiments of the present invention, and a person of ordinary skill in the art can derive other drawings from them without creative effort.
Fig. 1 is a flowchart of a method for obtaining scene information provided by Embodiment One of the present invention;
Fig. 2 is a flowchart of a method for obtaining scene information provided by Embodiment Two of the present invention;
Fig. 3 is a flowchart of a method for obtaining scene information provided by Embodiment Three of the present invention;
Fig. 4 is a schematic structural diagram of a device for obtaining scene information provided by Embodiment Four of the present invention.
Detailed description
To make the objectives, technical solutions, and advantages of the present invention clearer, the embodiments of the present invention are described below in further detail with reference to the accompanying drawings.
Embodiment One
Fig. 1 is a flowchart of a method for obtaining scene information provided by an embodiment of the present invention. Referring to Fig. 1, the method comprises:
Step 101: obtain application data of a web application, the application data being an interface image of the web application or network transmission data packets transmitted between the web application and a server;
Step 102: obtain, according to the application data, scene information of the scene in which the web application is located.
Specifically, obtaining the scene information according to the application data comprises:
selecting scene data packets from the obtained network transmission data packets according to the stored scene data packet formats;
obtaining the scene information of the scene in which the web application is located according to the selected scene data packets.
Further, obtaining the scene information according to the selected scene data packets comprises:
obtaining a scene identification from the selected scene data packets;
obtaining, according to the obtained scene identification, the corresponding scene information from the stored correspondence between scene identifications and scene information;
determining the obtained scene information as the scene information of the scene in which the web application is located.
Alternatively, obtaining the application data of the web application comprises:
obtaining the image sizes included in the correspondence between scene images and image sizes;
cutting out, from the image displayed in the interface of the web application and starting at a predetermined position, an image of the obtained image size, to obtain the interface image of the web application.
Further, obtaining the scene information according to the application data comprises:
obtaining, according to the obtained image size, the corresponding scene image from the correspondence between scene images and image sizes;
obtaining a first grayscale map and a second grayscale map, the first grayscale map being the grayscale map of the interface image of the web application and the second grayscale map being the grayscale map of the obtained scene image;
if the first grayscale map is identical to the second grayscale map, obtaining the scene information of the scene in which the web application is located according to the obtained scene image.
Preferably, before the scene information is obtained according to the obtained scene image when the two grayscale maps are identical, the method further comprises:
judging whether the first grayscale map and the second grayscale map are identical according to the count of each gray value and the total number of gray values in the first grayscale map, and the count of each gray value and the total number of gray values in the second grayscale map.
Specifically, this judgment comprises:
calculating, according to the count of each gray value in the first grayscale map and the total number of gray values in the first grayscale map, a first ratio for each gray value in the first grayscale map;
calculating, according to the count of each gray value in the second grayscale map and the total number of gray values in the second grayscale map, a second ratio for each gray value in the second grayscale map;
if the first ratio and the second ratio are identical for every gray value, determining that the first grayscale map and the second grayscale map are identical, and otherwise determining that they are different.
In the embodiment of the present invention, application data of a web application is obtained, the application data being an interface image of the web application or network transmission data packets transmitted between the web application and a server. According to the application data, the web application operation platform can obtain the scene information of the scene in which the web application is located, use that scene information to provide better services to the web application, and thereby increase the kinds of services it can provide.
Embodiment Two
Fig. 2 is a flowchart of a method for obtaining scene information provided by an embodiment of the present invention. Referring to Fig. 2, the method comprises:
Step 201: obtain application data of a web application, the application data being network transmission data packets transmitted between the web application and a server;
Because the web application runs on the web application operation platform, the platform can intercept the network transmission data packets whenever they are transmitted between the web application and the server.
Step 202: select scene data packets from the obtained network transmission data packets according to the stored scene data packet formats;
Specifically, the stored scene data packet formats are matched against the obtained network transmission data packets; the network transmission data packets whose format matches a stored scene data packet format are selected, and the selected network transmission data packets are determined to be scene data packets.
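The patent leaves the concrete packet formats to the stored table. As a minimal sketch of the matching in step 202, assuming each stored format is characterized by a fixed header prefix and a fixed total length (both invented here for illustration), the selection could look like:

```python
# Hypothetical stored scene data packet formats: each is identified by a
# header prefix and a total packet length. These values are assumptions,
# not taken from the patent.
STORED_SCENE_FORMATS = [
    {"header": b"\x10\x01", "length": 8},   # hypothetical scene packet A
    {"header": b"\x10\x02", "length": 12},  # hypothetical scene packet B
]

def select_scene_packets(captured_packets):
    """Return the captured packets whose header and length match a stored format."""
    selected = []
    for pkt in captured_packets:
        for fmt in STORED_SCENE_FORMATS:
            if pkt.startswith(fmt["header"]) and len(pkt) == fmt["length"]:
                selected.append(pkt)
                break
    return selected
```

A real implementation would match whatever structural features the technician's analysis recorded for each scene packet format; header and length are simply the two cheapest to check.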
Because a web application can have several scenes, and each scene's scene data packet is different, the format of each scene's data packet is also different, so the format of the scene data packet corresponding to each scene needs to be stored.
The stored scene data packet formats are obtained by a technician who analyzes the web application in advance. There are two ways for the technician to obtain the format of a scene data packet of the web application:
First, web application developers currently tend to use Flash technology, and the final product of a web application developed with Flash is a series of Flash files, i.e. swf files. The swf file format is public, and the file contains resources such as images, audio, video, and code. A technician can therefore analyze the swf file with a Flash analysis tool, for example a Flash decompiler, to examine the logical code it contains. From that code the technician can locate the parts related to data transmission, analyze them, and thereby obtain the scene data packet format from the swf file.
Further, the scene identification and the identifications of other key data can also be obtained from the swf file.
Second, data communication between a current web application and its server is generally carried out with socket technology or HTTP. A technician can therefore create a role in the web application, switch the role between different scenes, and observe the socket packets or HTTP packets the web application sends. By analyzing the packets sent while the role switches between scenes, the technician obtains the format of the web application's scene data packets.
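A small illustration of the observation step in the second approach: capturing the same kind of packet in two different scenes and diffing the two captures reveals which byte positions change with the scene, which are candidate positions for the scene identification field. The sample packets below are invented:

```python
def differing_offsets(pkt_a, pkt_b):
    """Return the byte offsets at which two equal-length packets differ."""
    assert len(pkt_a) == len(pkt_b), "compare packets of the same format"
    return [i for i, (a, b) in enumerate(zip(pkt_a, pkt_b)) if a != b]

# Two hypothetical captures of the same packet type, taken in two scenes:
# only byte 2 differs, so byte 2 is a candidate scene identification field.
in_scene_one = b"\x10\x01\x05\x00"
in_scene_two = b"\x10\x01\x07\x00"
```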
Step 203: obtain a scene identification from the selected scene data packets;
In the embodiment of the present invention, after a scene data packet is obtained, the scene identification can be read from it according to the scene identification's position within the scene data packet. Alternatively, if a correspondence between scene data packet formats and scene identifications has been stored in advance, the corresponding scene identification can be obtained from that correspondence according to the format matched by the scene data packet. The embodiment of the present invention does not limit the way in which the scene identification is obtained.
Step 204: obtain, according to the obtained scene identification, the corresponding scene information from the stored correspondence between scene identifications and scene information;
Specifically, according to the obtained scene identification, the corresponding record is looked up in the stored correspondence between scene identifications and scene information, and the corresponding scene information is obtained from the found record.
Step 205: determine the obtained scene information as the scene information of the scene in which the web application is located.
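Steps 203 to 205 can be sketched as follows, assuming a hypothetical layout in which a 2-byte big-endian scene identification sits immediately after a 2-byte header; the offset and the identification-to-information mapping are illustrative, not taken from the patent:

```python
import struct

SCENE_ID_OFFSET = 2  # assumed position of the scene identification field

# Hypothetical stored correspondence between scene identifications
# and scene information.
SCENE_INFO_BY_ID = {
    0x0001: "village",
    0x0002: "battlefield",
}

def scene_info_from_packet(scene_packet):
    """Read the scene ID at its known position and look up the scene information."""
    (scene_id,) = struct.unpack_from(">H", scene_packet, SCENE_ID_OFFSET)
    return SCENE_INFO_BY_ID.get(scene_id)
```

The `.get` lookup returns `None` for an identification with no stored record, mirroring the case where the correspondence contains no matching entry.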
Further, after the web application operation platform obtains the scene information of the scene in which the web application is located, the platform can provide services to the web application according to that scene information.
In the embodiment of the present invention, scene data packets are selected, according to the stored scene data packet formats, from the network transmission data packets transmitted between the web application and the server; a scene identification is obtained from the selected scene data packets; and the corresponding scene information is obtained, according to the scene identification, from the stored correspondence between scene identifications and scene information. The web application operation platform can then use the scene information of the scene the web application is using to provide better services to the web application, thereby increasing the kinds of services it can provide.
Embodiment Three
Fig. 3 is a flowchart of a method for obtaining scene information provided by an embodiment of the present invention. Referring to Fig. 3, the method comprises:
Step 301: obtain the image sizes included in the correspondence between scene images and image sizes;
In this step, the correspondence between scene images and image sizes can be scanned from its first record, the image size included in the scanned record obtained, and then the next record scanned and its image size obtained, and so on, until all the image sizes in the correspondence have been obtained.
Preferably, in the embodiment of the present invention, this step can also be: scan the first record in the correspondence between scene images and image sizes and obtain the image size from the scanned record; then carry out steps 302-306 below, and once the scene information of the scene in which the web application is located has been obtained, end the operation without obtaining the remaining image sizes in the correspondence.
Step 302: cut out, from the image displayed in the interface of the web application and starting at a predetermined position, an image of the obtained image size, to obtain the interface image of the web application;
When step 301 obtains all the image sizes included in the correspondence between scene images and image sizes, step 302 can be: for each obtained image size, cut out, from the image displayed in the interface of the web application and starting at the predetermined position, an image of that size, obtaining an interface image of the web application for each image size.
Because a web application brings the user an interactive experience through rich images, and the scene information of a scene is generally displayed at a fixed position in the web application's interface, an image can be cut out of the interface, and the scene information of the scene in which the web application is located can then be obtained from that interface image.
In the embodiment of the present invention, the predetermined position is determined according to where the scene information of the web application's scenes is displayed. For example, when the scene information is displayed in the upper left corner of the web application's interface, the predetermined position can be the upper left corner of the interface; when the scene information is displayed in the upper right corner, the predetermined position can be the upper right corner.
Alternatively, instead of storing the correspondence between scene images and image sizes, only a scene image may be stored. In that case, an image of a preset image size can be cut out of the image displayed in the web application's interface, starting at the predetermined position, and the cut-out image determined to be the interface image of the web application; the size of the stored scene image then also equals the preset image size. When the interface image is cut out according to the preset image size, only one cut is needed, which reduces the number of times the interface image must be cut out and saves processing time.
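A minimal sketch of the interception in step 302, modeling the displayed image as a list of pixel rows so the example stays self-contained (a real operation platform would capture the rendered interface rather than take a pixel array as input):

```python
def crop_interface_image(interface_pixels, top_left, size):
    """Cut a (width x height) block out of the interface, starting at top_left.

    interface_pixels: list of rows, each row a list of pixel values
    top_left: (x, y) predetermined position of the cut
    size: (width, height) image size from the stored correspondence
    """
    x, y = top_left
    width, height = size
    return [row[x:x + width] for row in interface_pixels[y:y + height]]
```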
Step 303: obtain, according to the obtained image size, the corresponding scene image from the correspondence between scene images and image sizes;
Specifically, according to the obtained image size, the corresponding record is looked up in the correspondence between scene images and image sizes, and the corresponding scene image is obtained from the found record.
When the image sizes obtained in step 301 are all the image sizes included in the correspondence, this step obtains all the scene images included in the correspondence; when the image size obtained in step 301 is a single image size from the correspondence, this step obtains the one scene image corresponding to that image size.
Step 304: obtain a first grayscale map and a second grayscale map, the first grayscale map being the grayscale map of the interface image of the web application and the second grayscale map being the grayscale map of the obtained scene image;
Specifically, the interface image of the web application is converted to grayscale to obtain the first grayscale map, and the obtained scene image is converted to grayscale to obtain the second grayscale map.
The grayscale conversion method is prior art and is not described in detail in the embodiment of the present invention.
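The patent treats the grayscale conversion as known art. One common choice, used here purely as an assumption, is the ITU-R BT.601 luminance weighting:

```python
def to_grayscale(rgb_pixels):
    """Convert rows of (R, G, B) tuples into rows of integer gray values.

    Uses the BT.601 weights 0.299/0.587/0.114, one conventional
    grayscale conversion; the patent does not prescribe a specific one.
    """
    return [
        [int(0.299 * r + 0.587 * g + 0.114 * b) for (r, g, b) in row]
        for row in rgb_pixels
    ]
```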
Step 305: judge whether the first gray-scale map and the second gray-scale map are identical according to the number of occurrences of each gray-scale value included in the first gray-scale map and the total number of gray-scale values it includes, and the number of occurrences of each gray-scale value included in the second gray-scale map and the total number of gray-scale values it includes.
Specifically, this step can be divided into the following steps (1)-(3):
(1) Calculate a first ratio corresponding to each gray-scale value included in the first gray-scale map, according to the number of occurrences of each gray-scale value included in the first gray-scale map and the total number of gray-scale values it includes.
Specifically, count the number of occurrences of each gray-scale value included in the first gray-scale map, and count the total number of gray-scale values it includes. Divide the number of occurrences of each gray-scale value by the total number of gray-scale values to obtain the first ratio corresponding to that gray-scale value.
(2) Calculate a second ratio corresponding to each gray-scale value included in the second gray-scale map, according to the number of occurrences of each gray-scale value included in the second gray-scale map and the total number of gray-scale values it includes.
Specifically, count the number of occurrences of each gray-scale value included in the second gray-scale map, and count the total number of gray-scale values it includes. Divide the number of occurrences of each gray-scale value by the total number of gray-scale values to obtain the second ratio corresponding to that gray-scale value.
(3) If the first ratio and the second ratio of every gray-scale value are identical, determine that the first gray-scale map and the second gray-scale map are identical; otherwise, determine that the first gray-scale map and the second gray-scale map are different.
Specifically, compare the first ratio and the second ratio of each gray-scale value. If the first ratio and the second ratio are identical for every gray-scale value, determine that the first gray-scale map and the second gray-scale map are identical; otherwise, determine that they are different.
Further, when a gray-scale value present in the first gray-scale map is absent from the second gray-scale map, that gray-scale value can be ignored if its first ratio is less than a threshold; if its first ratio is greater than or equal to the threshold, it can be directly determined that the first gray-scale map and the second gray-scale map are different. Likewise, when a gray-scale value present in the second gray-scale map is absent from the first gray-scale map, it can be ignored if its second ratio is less than the threshold; if its second ratio is greater than or equal to the threshold, it can be directly determined that the first gray-scale map and the second gray-scale map are different.
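The ratio-based comparison of step 305 can be sketched as follows, under the assumptions stated in the lead-in: each gray-scale map is flattened, the frequency ratio of every gray-scale value is computed as its count divided by the total pixel count, and the two maps are judged identical when corresponding ratios match. A gray-scale value present in only one map is ignored when its ratio falls below a threshold (the default of 0.01 is an illustrative choice, not a value given by the patent).

```python
from collections import Counter

def gray_ratios(gray_map):
    """Map each gray-scale value to its frequency ratio within the image."""
    flat = [v for row in gray_map for v in row]
    total = len(flat)
    return {value: count / total for value, count in Counter(flat).items()}

def maps_identical(map_a, map_b, threshold=0.01):
    """Judge two gray-scale maps identical by comparing value ratios."""
    ratios_a, ratios_b = gray_ratios(map_a), gray_ratios(map_b)
    for value in set(ratios_a) | set(ratios_b):
        ra, rb = ratios_a.get(value, 0.0), ratios_b.get(value, 0.0)
        if ra == 0.0 or rb == 0.0:         # value missing from one map
            if max(ra, rb) >= threshold:   # too frequent to ignore
                return False
            continue
        if ra != rb:
            return False
    return True
```

Note that this comparison is order-insensitive: two images with the same gray-value distribution but different layouts are judged identical, which is what makes it cheap relative to a pixel-by-pixel match.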
Optionally, when the scene information of the scene in which the web application is located is text displayed at a fixed position in the interface of the web application, the text included in the first gray-scale map and the text included in the second gray-scale map can be extracted and compared. If the two are identical, determine that the first gray-scale map and the second gray-scale map are identical; otherwise, determine that they are different.
Step 306: if the first gray-scale map and the second gray-scale map are identical, obtain the scene information of the scene in which the web application is located according to the obtained scene image.
Specifically, if the first gray-scale map and the second gray-scale map are identical, obtain the corresponding scene information from the stored correspondence between scene images and scene information according to the obtained scene image, and determine the obtained scene information as the scene information of the scene in which the web application is located.
Optionally, to save storage space, a correspondence between scene identifiers and scene information can be stored. When the first gray-scale map and the second gray-scale map are identical, a scene identifier can be obtained from the first gray-scale map or the second gray-scale map, the corresponding scene information can be obtained from the stored correspondence between scene identifiers and scene information according to the obtained scene identifier, and the obtained scene information is determined as the scene information of the scene in which the web application is located.
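The space-saving variant above amounts to keying scene information by identifier instead of by full image. All identifiers and field names below are hypothetical examples, not values from the patent:

```python
# Hypothetical correspondence between scene identifiers and scene information.
scene_info_by_id = {
    "scene_01": {"name": "lobby", "services": ["chat", "recommendations"]},
    "scene_02": {"name": "battle", "services": ["low-latency mode"]},
}

def scene_info_for(scene_id):
    """Return the stored scene information for an identifier, or None."""
    return scene_info_by_id.get(scene_id)
```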
In the embodiments of the present invention, the interface image of the web application is captured from the image displayed in its interface according to the image sizes included in the correspondence between scene images and image sizes. The corresponding scene image is obtained from that correspondence according to the image size. The obtained scene image is compared with the captured interface image of the web application; if the two are identical, the scene information of the scene in which the web application is located can be obtained according to the obtained scene image. In this way, the web application operating platform can provide better services for the web application according to the scene information of the scene in which it is used, which increases the kinds of services that can be provided for the web application.
Embodiment 4
Fig. 4 is a schematic structural diagram of an apparatus for obtaining scene information provided by an embodiment of the present invention. Referring to Fig. 4, the apparatus comprises:
a first acquisition module 401, configured to obtain application data of a web application, the application data being an interface image of the web application or network transmission data packets transmitted between the web application and a server; and
a second acquisition module 402, configured to obtain, according to the application data, the scene information of the scene in which the web application is located.
The second acquisition module 402 comprises:
a selection unit, configured to select scene data packets from the obtained network transmission data packets according to the format of stored scene data packets; and
a first acquiring unit, configured to obtain, according to the selected scene data packets, the scene information of the scene in which the web application is located.
Further, the selection unit comprises:
a first obtaining subunit, configured to obtain a scene recognition identifier from the selected scene data packets;
a second obtaining subunit, configured to obtain, according to the obtained scene recognition identifier, the corresponding scene information from the stored correspondence between scene recognition identifiers and scene information; and
a first determining subunit, configured to determine the obtained scene information as the scene information of the scene in which the web application is located.
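The packet branch handled by these units can be sketched as follows. The format marker and the fixed field layout are assumptions made for illustration; the patent does not specify an actual wire format.

```python
STORED_SCENE_PACKET_PREFIX = b"SCN"  # assumed stored-format marker

def select_scene_packets(packets):
    """Keep only packets whose layout matches the stored scene-packet format."""
    return [p for p in packets if p.startswith(STORED_SCENE_PACKET_PREFIX)]

def scene_id_from(packet):
    """Read the scene recognition identifier from an assumed fixed field."""
    return packet[3:5].decode("ascii")

packets = [b"SCN01rest-of-payload", b"OTHERdata", b"SCN02x"]
scene_packets = select_scene_packets(packets)
ids = [scene_id_from(p) for p in scene_packets]
```

The extracted identifiers would then be looked up in the stored correspondence between scene recognition identifiers and scene information, as the subunits above describe.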
The first acquisition module 401 comprises:
a second acquiring unit, configured to obtain the image sizes included in the correspondence between scene images and image sizes; and
an interception unit, configured to capture, from the image displayed in the interface of the web application and according to the obtained image size, an image starting from a preset position, to obtain the interface image of the web application.
The second acquisition module 402 comprises:
a third acquiring unit, configured to obtain, according to the obtained image size, the corresponding scene image from the correspondence between scene images and image sizes;
a fourth acquiring unit, configured to obtain a first gray-scale map and a second gray-scale map, the first gray-scale map being the gray-scale map of the interface image of the web application and the second gray-scale map being the gray-scale map of the obtained scene image; and
a fifth acquiring unit, configured to obtain, if the first gray-scale map and the second gray-scale map are identical, the scene information of the scene in which the web application is located according to the obtained scene image.
Preferably, the second acquisition module 402 further comprises:
a judging unit, configured to judge whether the first gray-scale map and the second gray-scale map are identical according to the number of occurrences of each gray-scale value included in the first gray-scale map and the total number of gray-scale values it includes, and the number of occurrences of each gray-scale value included in the second gray-scale map and the total number of gray-scale values it includes.
The judging unit comprises:
a first computation subunit, configured to calculate a first ratio corresponding to each gray-scale value included in the first gray-scale map, according to the number of occurrences of each gray-scale value included in the first gray-scale map and the total number of gray-scale values it includes;
a second computation subunit, configured to calculate a second ratio corresponding to each gray-scale value included in the second gray-scale map, according to the number of occurrences of each gray-scale value included in the second gray-scale map and the total number of gray-scale values it includes; and
a second determining subunit, configured to determine, if the first ratio and the second ratio of every gray-scale value are identical, that the first gray-scale map and the second gray-scale map are identical, and otherwise determine that the first gray-scale map and the second gray-scale map are different.
In the embodiments of the present invention, the application data of a web application is obtained, the application data being an interface image of the web application or network transmission data packets transmitted between the web application and a server. In this way, the web application operating platform can obtain the scene information of the scene in which the web application is located according to the application data, and can provide better services for the web application according to that scene information, which increases the kinds of services that can be provided for the web application.
It should be noted that when the apparatus for obtaining scene information provided by the above embodiments obtains scene information, the division into the above functional modules is used only as an example. In practical applications, the above functions can be allocated to different functional modules as required; that is, the internal structure of the apparatus is divided into different functional modules to complete all or part of the functions described above. In addition, the apparatus for obtaining scene information provided by the above embodiments belongs to the same concept as the method embodiments for obtaining scene information; for its specific implementation process, refer to the method embodiments, which are not repeated here.
The sequence numbers of the above embodiments of the present invention are for description only and do not represent the merits of the embodiments.
Those of ordinary skill in the art will appreciate that all or part of the steps of the above embodiments can be implemented by hardware, or by a program instructing the relevant hardware. The program can be stored in a computer-readable storage medium, and the storage medium mentioned above can be a read-only memory, a magnetic disk, an optical disc, or the like.
The foregoing are only preferred embodiments of the present invention and are not intended to limit the present invention. Any modification, equivalent replacement, improvement, and the like made within the spirit and principles of the present invention shall be included within the protection scope of the present invention.

Claims (14)

1. A method for obtaining scene information, characterized in that the method comprises:
obtaining application data of a web application, the application data being an interface image of the web application or network transmission data packets transmitted between the web application and a server; and
obtaining, according to the application data, scene information of a scene in which the web application is located.
2. the method for claim 1, is characterized in that, described according to described application data, obtains the scene information of described web application scene of living in, comprising:
According to the form of the contextual data bag of having stored, from described transmitted data on network bag, select contextual data bag;
According to the contextual data bag of described selection, obtain the scene information of described web application scene of living in.
3. The method of claim 2, characterized in that obtaining, according to the selected scene data packets, the scene information of the scene in which the web application is located comprises:
obtaining a scene recognition identifier from the selected scene data packets;
obtaining, according to the scene recognition identifier, corresponding scene information from a stored correspondence between scene recognition identifiers and scene information; and
determining the obtained scene information as the scene information of the scene in which the web application is located.
4. the method for claim 1, is characterized in that, described in obtain the application data of web application, comprising:
Obtain the picture size that the corresponding relation of scene image and picture size comprises;
In the image showing, according to the described picture size of obtaining, from predetermined position, start cut-away view picture in the interface of web application, obtain the interface image of described web application.
5. The method of claim 4, characterized in that obtaining, according to the application data, the scene information of the scene in which the web application is located comprises:
obtaining, according to the obtained image size, a corresponding scene image from the correspondence between scene images and image sizes;
obtaining a first gray-scale map and a second gray-scale map, the first gray-scale map being a gray-scale map of the interface image of the web application, and the second gray-scale map being a gray-scale map of the obtained scene image; and
if the first gray-scale map and the second gray-scale map are identical, obtaining, according to the obtained scene image, the scene information of the scene in which the web application is located.
6. The method of claim 5, characterized in that before obtaining, according to the obtained scene image, the scene information of the scene in which the web application is located if the first gray-scale map and the second gray-scale map are identical, the method further comprises:
judging whether the first gray-scale map and the second gray-scale map are identical according to the number of occurrences of each gray-scale value included in the first gray-scale map and the total number of gray-scale values included therein, and the number of occurrences of each gray-scale value included in the second gray-scale map and the total number of gray-scale values included therein.
7. The method of claim 6, characterized in that judging whether the first gray-scale map and the second gray-scale map are identical according to the number of occurrences of each gray-scale value included in the first gray-scale map and the total number of gray-scale values included therein, and the number of occurrences of each gray-scale value included in the second gray-scale map and the total number of gray-scale values included therein, comprises:
calculating a first ratio corresponding to each gray-scale value included in the first gray-scale map, according to the number of occurrences of each gray-scale value included in the first gray-scale map and the total number of gray-scale values included therein;
calculating a second ratio corresponding to each gray-scale value included in the second gray-scale map, according to the number of occurrences of each gray-scale value included in the second gray-scale map and the total number of gray-scale values included therein; and
if the first ratio and the second ratio of every gray-scale value are identical, determining that the first gray-scale map and the second gray-scale map are identical, and otherwise determining that the first gray-scale map and the second gray-scale map are different.
8. An apparatus for obtaining scene information, characterized in that the apparatus comprises:
a first acquisition module, configured to obtain application data of a web application, the application data being an interface image of the web application or network transmission data packets transmitted between the web application and a server; and
a second acquisition module, configured to obtain, according to the application data, scene information of a scene in which the web application is located.
9. The apparatus of claim 8, characterized in that the second acquisition module comprises:
a selection unit, configured to select scene data packets from the network transmission data packets according to a format of stored scene data packets; and
a first acquiring unit, configured to obtain, according to the selected scene data packets, the scene information of the scene in which the web application is located.
10. The apparatus of claim 9, characterized in that the selection unit comprises:
a first obtaining subunit, configured to obtain a scene recognition identifier from the selected scene data packets;
a second obtaining subunit, configured to obtain, according to the scene recognition identifier, corresponding scene information from a stored correspondence between scene recognition identifiers and scene information; and
a first determining subunit, configured to determine the obtained scene information as the scene information of the scene in which the web application is located.
11. The apparatus of claim 8, characterized in that the first acquisition module comprises:
a second acquiring unit, configured to obtain an image size included in a correspondence between scene images and image sizes; and
an interception unit, configured to capture, from an image displayed in an interface of the web application and according to the obtained image size, an image starting from a preset position, to obtain the interface image of the web application.
12. The apparatus of claim 11, characterized in that the second acquisition module comprises:
a third acquiring unit, configured to obtain, according to the obtained image size, a corresponding scene image from the correspondence between scene images and image sizes;
a fourth acquiring unit, configured to obtain a first gray-scale map and a second gray-scale map, the first gray-scale map being a gray-scale map of the interface image of the web application, and the second gray-scale map being a gray-scale map of the obtained scene image; and
a fifth acquiring unit, configured to obtain, if the first gray-scale map and the second gray-scale map are identical, the scene information of the scene in which the web application is located according to the obtained scene image.
13. The apparatus of claim 12, characterized in that the second acquisition module further comprises:
a judging unit, configured to judge whether the first gray-scale map and the second gray-scale map are identical according to the number of occurrences of each gray-scale value included in the first gray-scale map and the total number of gray-scale values included therein, and the number of occurrences of each gray-scale value included in the second gray-scale map and the total number of gray-scale values included therein.
14. The apparatus of claim 13, characterized in that the judging unit comprises:
a first computation subunit, configured to calculate a first ratio corresponding to each gray-scale value included in the first gray-scale map, according to the number of occurrences of each gray-scale value included in the first gray-scale map and the total number of gray-scale values included therein;
a second computation subunit, configured to calculate a second ratio corresponding to each gray-scale value included in the second gray-scale map, according to the number of occurrences of each gray-scale value included in the second gray-scale map and the total number of gray-scale values included therein; and
a second determining subunit, configured to determine, if the first ratio and the second ratio of every gray-scale value are identical, that the first gray-scale map and the second gray-scale map are identical, and otherwise determine that the first gray-scale map and the second gray-scale map are different.
CN201410120012.8A 2014-03-27 2014-03-27 A kind of method and device for obtaining scene information Active CN103927341B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410120012.8A CN103927341B (en) 2014-03-27 2014-03-27 A kind of method and device for obtaining scene information

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201410120012.8A CN103927341B (en) 2014-03-27 2014-03-27 A kind of method and device for obtaining scene information

Publications (2)

Publication Number Publication Date
CN103927341A true CN103927341A (en) 2014-07-16
CN103927341B CN103927341B (en) 2017-12-26

Family

ID=51145562

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410120012.8A Active CN103927341B (en) 2014-03-27 2014-03-27 A kind of method and device for obtaining scene information

Country Status (1)

Country Link
CN (1) CN103927341B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102053903A (en) * 2009-10-30 2011-05-11 国际商业机器公司 Method and system for storing and querying scene data for on-line operation programs
CN102323939A (en) * 2011-08-31 2012-01-18 百度在线网络技术(北京)有限公司 Method and device for determining background information of rearranged elements in page rearranging process
US20130238975A1 (en) * 2012-03-12 2013-09-12 Apple Inc. Off-line presentation of web content
CN103544271A (en) * 2013-10-18 2014-01-29 北京奇虎科技有限公司 Picture processing window loading method and device for browsers
CN103631866A (en) * 2013-11-01 2014-03-12 北京奇虎科技有限公司 Webpage display method and browser


Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105956170A (en) * 2016-05-20 2016-09-21 微鲸科技有限公司 Real-time scene information embedding method, and scene realization system and method
CN105956170B (en) * 2016-05-20 2019-07-19 微鲸科技有限公司 Real-time scene information embedding method, Scene realization system and implementation method
CN111125603A (en) * 2019-12-27 2020-05-08 百度时代网络技术(北京)有限公司 Webpage scene recognition method and device, electronic equipment and storage medium
CN111125603B (en) * 2019-12-27 2023-06-27 百度时代网络技术(北京)有限公司 Webpage scene recognition method and device, electronic equipment and storage medium
CN112973114A (en) * 2021-04-15 2021-06-18 深圳豹亮科技有限公司 Game scene model construction method and terminal

Also Published As

Publication number Publication date
CN103927341B (en) 2017-12-26

Similar Documents

Publication Publication Date Title
CN107223246B (en) Image labeling method and device and electronic equipment
US11270099B2 (en) Method and apparatus for generating facial feature
US10929600B2 (en) Method and apparatus for identifying type of text information, storage medium, and electronic apparatus
CN103606310B (en) Teaching method and system
CN105095919A (en) Image recognition method and image recognition device
JP2017513090A (en) Object search method and apparatus
CN105357475A (en) Video playing method and device
US20220067888A1 (en) Image processing method and apparatus, storage medium, and electronic device
CN106210908A (en) A kind of advertisement sending method and device
CN103927341A (en) Method and device for acquiring scene information
CN110135512B (en) Picture identification method, equipment, storage medium and device
CN104813610A (en) Providing multiple content items for display on multiple devices
CN108965905B (en) Live broadcast data stream pushing and method and device for providing and acquiring stream pushing address
CN110415318B (en) Image processing method and device
CN110414322B (en) Method, device, equipment and storage medium for extracting picture
CN108834171B (en) Image method and device
CN107330069B (en) Multimedia data processing method and device, server and storage medium
CN115982330A (en) Model pre-training method, model training method, data processing method and device thereof
CN113489791B (en) Image uploading method, image processing method and related devices
CN106484722A (en) A kind of image procossing and searching method, device and system
CN113343857B (en) Labeling method, labeling device, storage medium and electronic device
CN109559313B (en) Image processing method, medium, device and computing equipment
CN114039969A (en) Data transmission method and device
CN111259182A (en) Method and device for searching screen shot image
CN110673727A (en) AR remote assistance method and system

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 511446 Guangzhou City, Guangdong Province, Panyu District, South Village, Huambo Business District Wanda Plaza, block B1, floor 28

Applicant after: Guangzhou Huaduo Network Technology Co., Ltd.

Address before: 510655, Guangzhou, Whampoa Avenue, No. 2, creative industrial park, building 3-08,

Applicant before: Guangzhou Huaduo Network Technology Co., Ltd.

CB02 Change of applicant information
GR01 Patent grant