CN111488899B - Feature extraction method, device, equipment and readable storage medium


Info

Publication number
CN111488899B
Authority
CN
China
Prior art keywords: sequence, analyzed, features, event, behavior
Prior art date
Legal status
Active
Application number
CN201910087341.XA
Other languages
Chinese (zh)
Other versions
CN111488899A (en)
Inventor
邢金彪
王辉
姜伟浩
Current Assignee
Hangzhou Hikvision Digital Technology Co Ltd
Original Assignee
Hangzhou Hikvision Digital Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Hangzhou Hikvision Digital Technology Co Ltd
Priority to CN201910087341.XA
Publication of CN111488899A
Application granted
Publication of CN111488899B


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Information Retrieval, Database Structures and File System Structures Therefor (AREA)

Abstract

The invention discloses a feature extraction method, apparatus, device, and readable storage medium, belonging to the field of data mining. The method comprises the following steps: acquiring a target behavior event of an object to be analyzed, the target behavior event being recorded information about the object to be analyzed that is recorded in a business system; preprocessing the recorded information of the object to be analyzed to obtain a preprocessing result; and extracting features of the target behavior event according to the preprocessing result, the features comprising at least one of temporal features, spatial features, sequence features, and advanced features. Because the features of the target behavior event are extracted from the preprocessed recorded information, feature extraction for behavior events is realized: the extracted features have wider coverage and are more general, can satisfy most requirements for features to be extracted from events, and thus expand the application range of the extracted features.

Description

Feature extraction method, device, equipment and readable storage medium
Technical Field
The present invention relates to the field of data mining, and in particular, to a feature extraction method, apparatus, device, and readable storage medium.
Background
With the development of artificial intelligence algorithms, feature extraction is becoming more and more important as the basis of these algorithms. Feature extraction refers to the process and methods of extracting features; algorithm research such as person profiling, classification, clustering, anomaly detection, sequence analysis, and predictive analysis can then be carried out based on the extracted feature information. Therefore, how to perform feature extraction is a very important issue in the field of data mining.
Feature extraction methods provided in the related art include feature extraction based on user operation logs, feature extraction based on user Internet-access behavior records, feature extraction based on text information, and the like. For example, given an address text that has undergone word segmentation, words are captured from it according to a preset word capture count and a preset word skip count to form the characteristic word strings of the address text.
However, the related art all performs customized feature extraction for specific business scenarios, and these feature extraction approaches are limited to those scenarios. They are therefore poor in generality and narrow in feature coverage, cannot describe and characterize the target to be analyzed well, and are highly restricted, which in turn limits the application range of the extracted features.
Disclosure of Invention
The embodiments of the present invention provide a feature extraction method, aiming to solve the problems in the related art that feature extraction is limited to text or to specific business application scenarios, is highly restricted, and consequently limits the application range of the extracted features. The technical scheme is as follows:
in one aspect, a feature extraction method is provided, the method comprising:
acquiring a target behavior event of an object to be analyzed, wherein the target behavior event is recorded information about the object to be analyzed, which is recorded in a business system;
preprocessing the recorded information of the object to be analyzed to obtain a preprocessing result;
and extracting the features of the target behavior event of the object to be analyzed according to the preprocessing result, where the features of the target behavior event include at least one of temporal features, spatial features, sequence features, and advanced features.
Optionally, the preprocessing the recorded information of the object to be analyzed to obtain a preprocessing result includes:
carrying out data cleaning on the recorded information of the object to be analyzed;
and carrying out data exploration based on the cleaned recorded information, and acquiring a preprocessing result based on the exploration result.
Optionally, the data cleaning of the recorded information of the object to be analyzed includes:
deleting useless data in the recorded information of the object to be analyzed, and unifying the data format;
performing data exploration based on the cleaned recorded information and acquiring the preprocessing result based on the exploration result, including:
exploring the data in the cleaned recorded information, selecting a reference time span to intercept the recorded information of the object to be analyzed, and taking the interception result as the preprocessing result of preprocessing the recorded information of the object to be analyzed.
Optionally, the extracting the feature of the target behavior event of the object to be analyzed according to the preprocessing result includes:
determining single-type behavior events and multi-type behavior events in the target behavior events according to the preprocessing result;
acquiring the characteristics of the single-type behavior event and the characteristics of the multi-type behavior event;
extracting features of the target behavioral event based on the features of the single type behavioral event and the features of the multiple type behavioral event.
Optionally, the features of the target behavior event include sequence features, and the method further comprises:
before extracting the sequence features of the target behavior event, extracting the sequence of the object to be analyzed according to the preprocessing result, and segmenting the sequence of the object to be analyzed;
the extracting of the sequence features of the target behavior event includes:
extracting the sequence features of the target behavior event based on the segmented sequence.
Optionally, the segmenting of the sequence of the object to be analyzed includes:
segmenting the sequence of the object to be analyzed according to a fixed reference time window;
or splitting the sequence of the object to be analyzed into segments according to a reference time interval threshold.
Optionally, the temporal features are features extracted from time information and include at least one of frequency statistics features, time interval features, and distribution features at different hierarchical granularities.
Optionally, the spatial features are features extracted from location information and include at least one of frequency statistics features and distribution features at different hierarchical granularities, where the frequency statistics features at different hierarchical granularities include the number of times places of different levels are involved, and the distribution features at different hierarchical granularities include the number of places of different levels involved and the proportion of places at different levels.
Optionally, the sequence features are features extracted from a sequence composed of the behavior event records of the object to be analyzed and include at least one of the number of sequence segments and the temporal and spatial features involved within different sequence segments or between segments.
Optionally, the advanced behavior features are features extracted according to a reference algorithm, where the reference algorithm is dynamically acquired according to the application scenario, and the advanced behavior features include at least one of behavior sequence pattern features, embedding features, and anomaly score features.
There is also provided a feature extraction apparatus, the apparatus comprising:
an acquisition module, configured to acquire a target behavior event of an object to be analyzed, where the target behavior event is recorded information about the object to be analyzed recorded in a business system;
the preprocessing module is used for preprocessing the recorded information of the object to be analyzed to obtain a preprocessing result;
and an extraction module, configured to extract the features of the target behavior event of the object to be analyzed according to the preprocessing result, where the features of the target behavior event include at least one of temporal features, spatial features, sequence features, and advanced features.
Optionally, the preprocessing module includes:
the cleaning unit is used for cleaning the data of the recorded information of the object to be analyzed;
and an exploration unit, configured to perform data exploration based on the cleaned recorded information and acquire the preprocessing result based on the exploration result.
Optionally, the cleaning unit is configured to delete useless data in the recorded information of the object to be analyzed and unify the data format;
the exploration unit is configured to explore the data in the cleaned recorded information, select a reference time span to intercept the recorded information of the object to be analyzed, and take the interception result as the preprocessing result of preprocessing the recorded information of the object to be analyzed.
Optionally, the extracting module is configured to determine a single-type behavior event and a multi-type behavior event in the target behavior events according to the preprocessing result; acquiring the characteristics of the single-type behavior event and the characteristics of the multi-type behavior event; extracting features of the target behavioral event based on the features of the single type behavioral event and the features of the multiple type behavioral event.
Optionally, the features of the target behavior event include sequence features, and the apparatus further includes:
the segmentation module is used for extracting the sequence of the object to be analyzed according to the preprocessing result before extracting the sequence characteristics of the target behavior event, and segmenting the sequence of the object to be analyzed;
the extraction module is used for extracting the sequence characteristics of the target behavior event based on the segmented sequence.
Optionally, the segmentation module is configured to segment the sequence of the object to be analyzed according to a fixed reference time window; or cutting the sequence of the object to be analyzed according to the reference time interval threshold value and the fragments.
Optionally, the temporal features are features extracted from time information and include at least one of frequency statistics features, time interval features, and distribution features at different hierarchical granularities.
Optionally, the spatial features are features extracted from location information and include at least one of frequency statistics features and distribution features at different hierarchical granularities, where the frequency statistics features at different hierarchical granularities include the number of times places of different levels are involved, and the distribution features at different hierarchical granularities include the number of places of different levels involved and the proportion of places at different levels.
Optionally, the sequence features are features extracted from a sequence composed of the behavior event records of the object to be analyzed and include at least one of the number of sequence segments and the temporal and spatial features involved within different sequence segments or between segments.
Optionally, the advanced behavior features are features extracted according to a reference algorithm, where the reference algorithm is dynamically acquired according to the application scenario, and the advanced behavior features include at least one of behavior sequence pattern features, embedding features, and anomaly score features.
In one aspect, there is provided a feature extraction device comprising a processor and a memory, the memory having stored therein at least one instruction that is loaded and executed by the processor to implement the feature extraction method described in any one of the above.
In one aspect, a computer-readable storage medium is provided, wherein at least one instruction is stored in the storage medium, the instruction being loaded and executed by a processor to implement the feature extraction method described in any one of the above.
The technical scheme provided by the embodiment of the invention has the beneficial effects that at least:
the target behavior event of the object to be analyzed is acquired, the recorded information of the object to be analyzed is preprocessed, and the features of the target behavior event are then extracted, thereby realizing feature extraction for behavior events; the extracted features have wider coverage and are more general, can satisfy most requirements for features to be extracted from events, and thus expand the application range of the extracted features.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings required for describing the embodiments are briefly introduced below. The drawings in the following description are only some embodiments of the present invention; other drawings can be obtained from them by a person skilled in the art without inventive effort.
FIG. 1 is a schematic illustration of an implementation environment provided by an embodiment of the present invention;
FIG. 2 is a flow chart of a feature extraction method according to an embodiment of the present invention;
FIG. 3 is a schematic illustration of a behavior event provided by an embodiment of the present invention;
FIG. 4 is a schematic diagram of a behavioral event feature extraction process according to an embodiment of the present invention;
fig. 5 is a schematic structural diagram of a feature extraction device according to an embodiment of the present invention;
fig. 6 is a schematic structural diagram of a feature extraction device according to an embodiment of the present invention;
fig. 7 is a schematic structural diagram of a feature extraction device according to an embodiment of the present invention;
fig. 8 is a schematic structural diagram of a terminal according to an embodiment of the present invention;
fig. 9 is a schematic structural diagram of a server according to an embodiment of the present invention.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the present invention more apparent, the embodiments of the present invention will be described in further detail with reference to the accompanying drawings.
Feature extraction is the basis of artificial intelligence algorithms and is mainly used to extract the features of an object to be analyzed and to characterize that object. Taking a person as the object to be analyzed as an example, the extracted features mainly include the person's age, native place, hobbies, travel frequency, types of shopping places, and the like. Feature extraction mainly refers to the process and methods of extracting the above features. Algorithm research such as person profiling, classification, clustering, anomaly detection, sequence analysis, and predictive analysis can be carried out based on the extracted feature information. It follows that the application range of feature extraction is becoming wider and wider.
In this regard, the embodiment of the present invention provides a feature extraction method, which may be applied in the implementation environment shown in fig. 1. In fig. 1, at least one terminal 11 and a server 12 are included, and the terminal 11 may be in communication connection with the server 12. The server 12 may implement the feature extraction method and transmit the extracted features to the terminal 11. Of course, the terminal 11 may also download the target behavior event of the object to be analyzed from the server 12, and the feature extraction method may be implemented by the terminal 11.
The terminal 11 may be any electronic product that can perform man-machine interaction with a user through one or more of a keyboard, a touch pad, a touch screen, a remote controller, voice interaction, or a handwriting device, for example, a PC (Personal Computer), a mobile phone, a smart phone, a PDA (Personal Digital Assistant), a wearable device, a pocket PC (PPC), a tablet computer, a smart in-vehicle terminal, a smart television, a smart speaker, and the like.
The server 12 may be one server or may be a server cluster composed of a plurality of servers.
Those skilled in the art will appreciate that the above terminal 11 and server 12 are only examples; other existing or future terminals or servers, where applicable to the present application, are also intended to fall within the protection scope of the present application and are hereby incorporated by reference.
Based on the implementation environment shown in fig. 1, referring to fig. 2, an embodiment of the present invention provides a feature extraction method, which may be applied to a terminal or a server shown in fig. 1, for example, as shown in fig. 2, and includes:
step 201, a target behavior event of an object to be analyzed is obtained, where the target behavior event is recorded information about the object to be analyzed recorded in a service system.
When the target behavior event of the object to be analyzed is acquired, it can be acquired from a server, and target behavior events can be represented with a common structure. As an example, the structure of a behavior event includes the key information shown in Table 1 below, namely an event ID, an event type, an event start time, an event end time, a person ID, a place, and so on. The structure of a behavior event may be determined according to the actual situation, which is not limited in the embodiments of the present invention.
TABLE 1
Event ID | Event type | Event start time | Event end time | Person ID | Place
Optionally, the target behavior events of the object to be analyzed include single-type behavior events and multi-type behavior events. Analyzing single-type and multi-type behavior events means extracting features based on a single type of behavior and based on multiple types of behavior, respectively. Based on the behavior event structure described in Table 1 above, and taking person behavior events as an example, the behavior events can be as shown in Table 2 below.
TABLE 2
Person ID | Event types involved
330901 | 01, 02
330902 | 02, 03
As shown in Table 2, the persons with person IDs 330901 and 330902 each involve two event types, (01, 02) and (02, 03) respectively. Thus, taking person 330901 as an example, event features can be extracted based on the single behavior type 01 or 02, and multi-type event features can also be extracted based on the combination of behavior types 01 and 02.
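To make this grouping concrete, the following minimal Python sketch (not part of the patent) builds per-person single-type and multi-type event groups from records shaped like Table 1; the person IDs and event types follow Table 2, while the field names, times, and places are hypothetical stand-ins.

```python
from collections import defaultdict

# Hypothetical event records following the Table 1 structure; person IDs
# and event types come from Table 2, times and places are made up.
events = [
    {"event_id": "e1", "event_type": "01", "start": "2019-01-01 08:00",
     "person_id": "330901", "place": "A"},
    {"event_id": "e2", "event_type": "02", "start": "2019-01-01 09:00",
     "person_id": "330901", "place": "B"},
    {"event_id": "e3", "event_type": "02", "start": "2019-01-02 10:00",
     "person_id": "330902", "place": "A"},
]

# Single-type analysis looks at one event type per person; multi-type
# analysis looks at all of a person's events together.
single_type = defaultdict(lambda: defaultdict(list))  # person -> type -> events
multi_type = defaultdict(list)                        # person -> all events
for ev in events:
    single_type[ev["person_id"]][ev["event_type"]].append(ev)
    multi_type[ev["person_id"]].append(ev)

print(sorted(single_type["330901"]))  # ['01', '02']
print(len(multi_type["330901"]))      # 2
```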
Step 202, preprocessing the record information of the object to be analyzed to obtain a preprocessing result.
Optionally, preprocessing the record information of the object to be analyzed to obtain a preprocessing result, including:
data cleaning is carried out on the recorded information of the object to be analyzed;
and carrying out data exploration based on the cleaned recorded information, and acquiring a preprocessing result based on the exploration result.
Optionally, the data cleaning of the recorded information of the object to be analyzed includes: deleting useless data in the recorded information of the object to be analyzed and unifying the data format;
performing data exploration based on the cleaned recorded information and acquiring the preprocessing result based on the exploration result includes: exploring the data in the cleaned recorded information, selecting a reference time span to intercept the recorded information of the object to be analyzed, and taking the interception result as the preprocessing result of preprocessing the recorded information of the object to be analyzed.
After the recorded information of the object to be analyzed is obtained, useless data and dirty data in the recorded information are deleted, and the metadata, data formats, time formats, and the like are unified, so that feature extraction is more accurate. In addition, data exploration means exploring the time span, data quality, and the like of the original data and selecting a suitable time span with which to intercept the data of the object to be analyzed, thereby obtaining the required recorded information of the object to be analyzed.
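As an illustration of this preprocessing step, here is a hedged pandas sketch; the column names and the string-based time span are assumptions made for the example, not a schema mandated by the patent.

```python
import pandas as pd

def preprocess(records: pd.DataFrame, span_start: str, span_end: str) -> pd.DataFrame:
    """Clean the raw recorded information, then intercept a reference time span."""
    df = records.copy()
    # Data cleaning: drop useless/dirty rows and unify the time format.
    df = df.dropna(subset=["person_id", "event_type", "start"])
    df["start"] = pd.to_datetime(df["start"], errors="coerce")
    df = df.dropna(subset=["start"]).drop_duplicates(subset=["event_id"])
    # Data exploration: inspect the overall time span and data quality,
    # then intercept only the records inside the chosen reference span.
    print("data span:", df["start"].min(), "->", df["start"].max())
    mask = (df["start"] >= span_start) & (df["start"] <= span_end)
    return df.loc[mask].sort_values("start")
```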
Step 203, extracting the features of the target behavior event of the object to be analyzed according to the preprocessing result.
Optionally, the features of the target behavior event of the object to be analyzed include, but are not limited to, at least one of temporal features, spatial features, sequence features, and advanced features.
Optionally, in an embodiment of the present invention, the extracted features may be as shown in fig. 3.
Optionally, extracting the feature of the target behavior event of the object to be analyzed according to the preprocessing result includes:
determining single-type behavior events and multi-type behavior events in the target behavior events according to the preprocessing result;
acquiring the characteristics of single-type behavior events and the characteristics of multi-type behavior events;
the features of the target behavioral event are obtained based on the features of the single type behavioral event and the features of the multiple types behavioral event.
In an alternative embodiment, after obtaining the required record information of the object to be analyzed, the feature extraction process may be implemented by the following sub-steps:
In step 2031, temporal feature calculation is performed on the required recorded information of the object to be analyzed, and the temporal features are extracted.
Optionally, the temporal features are statistical and distribution features of the time information, extracted from the recorded information through temporal feature calculation. The temporal features include at least one of frequency statistics features, time interval features, and distribution features at different hierarchical granularities. For example, the time information is divided at different granularities such as hour, day, week, and month, and the corresponding temporal features are extracted. For example, the frequency statistics features at different hierarchical granularities include counts per hour of the day, per day of the week, per week of the month, per month of the year, and so on; the time interval features include the maximum, minimum, mean, variance, and the like of the intervals between adjacent behaviors; and the distribution features at different hierarchical granularities include the number of distinct time points, days, weeks, months, years, and so on that are involved.
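A minimal sketch of such temporal feature calculation follows, assuming the preprocessed records are a pandas DataFrame with a 'start' timestamp column (an assumed schema, not prescribed by the patent):

```python
import pandas as pd

def temporal_features(df: pd.DataFrame) -> dict:
    """Frequency, interval, and distribution features from event start times."""
    t = pd.to_datetime(df["start"]).sort_values()
    gaps = t.diff().dt.total_seconds().dropna()  # adjacent-behavior intervals
    return {
        # frequency statistics at different hierarchical granularities
        "per_hour_of_day": t.dt.hour.value_counts().to_dict(),
        "per_day_of_week": t.dt.dayofweek.value_counts().to_dict(),
        "per_month_of_year": t.dt.month.value_counts().to_dict(),
        # time interval features over adjacent behaviors
        "gap_max": gaps.max(), "gap_min": gaps.min(),
        "gap_mean": gaps.mean(), "gap_var": gaps.var(),
        # distribution features: how many distinct days/weeks/months appear
        "n_days": t.dt.date.nunique(),
        "n_weeks": t.dt.isocalendar().week.nunique(),
        "n_months": t.dt.to_period("M").nunique(),
    }
```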
Step 2032, performing spatial feature calculation on the required record information of the object to be analyzed, and extracting spatial features.
For example, places are divided by administrative region (country, province, city, district, and so on), or by the different types or grades of the places where events occur, and the corresponding spatial features are extracted.
Optionally, the spatial features are statistical and distribution features of the relevant spatial information, extracted from the recorded information through spatial feature calculation. The spatial features include at least one of frequency statistics features and distribution features at different hierarchical granularities.
For example, the frequency statistics features at different levels include the number of times places of each level are involved; the distribution features at different levels include the number of distinct places involved, the proportion of places at each level, and the like.
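The spatial statistics can be sketched the same way; the two-level (city, district) hierarchy below is an assumption standing in for whatever administrative or place-type grading the business system actually records:

```python
from collections import Counter

def spatial_features(places):
    """Frequency and distribution features over a two-level place hierarchy.

    `places` is a list of (city, district) pairs, one per event.
    """
    cities = Counter(city for city, _ in places)
    districts = Counter(d for _, d in places)
    total = len(places) or 1
    return {
        # frequency statistics: how often each place of a level is involved
        "city_counts": dict(cities),
        "district_counts": dict(districts),
        # distribution: number of distinct places and their proportions
        "n_cities": len(cities),
        "n_districts": len(districts),
        "city_ratio": {c: n / total for c, n in cities.items()},
    }

# e.g. spatial_features([("Hangzhou", "Xihu"), ("Hangzhou", "Binjiang")])
```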
Step 2033, performing sequence feature calculation on the required record information of the object to be analyzed, and extracting sequence features.
Optionally, before the sequence features of the target behavior event are extracted, the sequence of the object to be analyzed is extracted according to the preprocessing result, and the sequence is segmented; correspondingly, extracting the sequence features of the target behavior event includes extracting the sequence features of the target behavior event based on the segmented sequence. That is, the sequence of the required recorded information of the object to be analyzed is acquired before the sequence feature calculation is performed, and sequence segmentation is carried out, where the sequence means all recorded information related to the object to be analyzed arranged in order of increasing time.
Optionally, the method for sequence segmentation includes: splitting the sequence into N sequence segments according to a fixed reference time window T.
Optionally, the method for sequence segmentation further includes: when the time interval between two consecutive behaviors in the sequence exceeds a reference time interval threshold τ, splitting the sequence at that point according to the threshold τ.
Whichever sequence segmentation mode is adopted, different sequence segments are obtained after segmentation.
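Both segmentation modes can be sketched as follows, assuming events are dicts carrying a datetime under 'start' and are already sorted in time order (an assumed representation):

```python
from datetime import timedelta

def split_fixed_window(events, window: timedelta):
    """Method 1: cut the time-ordered sequence into segments that each
    cover a fixed reference time window T."""
    segments, current, anchor = [], [], None
    for ev in events:  # events assumed sorted by 'start'
        if anchor is None or ev["start"] - anchor >= window:
            if current:
                segments.append(current)
            current, anchor = [], ev["start"]
        current.append(ev)
    if current:
        segments.append(current)
    return segments

def split_by_gap(events, threshold: timedelta):
    """Method 2: start a new segment wherever the interval between two
    consecutive behaviors exceeds the reference time interval threshold."""
    segments, current = [], []
    for ev in events:
        if current and ev["start"] - current[-1]["start"] > threshold:
            segments.append(current)
            current = []
        current.append(ev)
    if current:
        segments.append(current)
    return segments
```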
The sequence features are extracted from the sequence through sequence feature calculation and include the number of sequence segments as well as the temporal and spatial features involved within each sequence segment and between segments. Optionally, the sequence features include at least one of the number of sequence segments and the temporal and spatial features involved within or between different sequence segments.
The extraction methods of the temporal features and the spatial features are the same as those in the steps 2031 and 2032, respectively.
Step 2034, performing advanced feature calculation on the required record information of the object to be analyzed, and extracting advanced behavior features.
Advanced behavior features are features derived from a reference algorithm. The embodiments of the present invention do not limit the types and number of reference algorithms; they may be selected according to the specific application scenario, and may also be dynamically acquired for different application scenarios, that is, which algorithms to use can be chosen flexibly. Take Table 3 below as an example:
TABLE 3
Behavior sequence pattern features | e.g., sequence pattern mining (PrefixSpan)
Embedding features | e.g., embedding of behavior records
Anomaly score features | e.g., anomaly detection scoring
For example, to extract the behavior sequence pattern features of the target to be analyzed, the target group is divided into positive and negative sample sets, the PrefixSpan algorithm is used to mine the sequence patterns corresponding to the positive and negative sample sets respectively, and whether the target to be analyzed exhibits each sequence pattern, together with the number of times the pattern occurs, is taken as the sequence pattern features of the target to be analyzed.
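A hedged sketch of turning mined patterns into features follows; it assumes the positive- and negative-set patterns have already been mined (e.g., by PrefixSpan) and derives only the presence features, with occurrence counting left out for brevity:

```python
def contains_pattern(sequence, pattern):
    """True if `pattern` occurs in `sequence` as a (not necessarily
    contiguous) subsequence, the usual sequential-pattern semantics."""
    it = iter(sequence)
    return all(item in it for item in pattern)

def pattern_features(sequence, pos_patterns, neg_patterns):
    """Binary presence features for sequence patterns previously mined
    from the positive and negative sample sets."""
    feats = {}
    for tag, patterns in (("pos", pos_patterns), ("neg", neg_patterns)):
        for i, pat in enumerate(patterns):
            feats[f"{tag}_pattern_{i}"] = int(contains_pattern(sequence, pat))
    return feats

# pattern_features(["01", "02", "01"], [["01", "02"]], [["03", "03"]])
# -> {'pos_pattern_0': 1, 'neg_pattern_0': 0}
```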
It should be noted that the reference numerals of steps 2031, 2032, 2033, and 2034 are only used for distinction and do not limit the execution order. Steps 2031, 2032, 2033, and 2034 may be performed simultaneously or in any order, which is not limited by the embodiments of the present invention; the overall feature extraction process described above can also be seen in fig. 4.
According to the method provided by the embodiments of the present invention, the target behavior event of the object to be analyzed is acquired, the recorded information of the object to be analyzed is preprocessed, and the features of the target behavior event are then extracted, thereby realizing feature extraction for behavior events. The extracted features have wider coverage and are more general, can satisfy most requirements for features to be extracted from events, and expand the application range of the extracted features.
Based on the same conception as the above method, an embodiment of the present invention provides a feature extraction device, see fig. 5, including:
The acquiring module 501 is configured to acquire a target behavior event of an object to be analyzed, where the target behavior event is recorded information about the object to be analyzed recorded in the business system;
the preprocessing module 502 is configured to preprocess the record information of the object to be analyzed to obtain a preprocessing result;
the extracting module 503 is configured to extract, according to the preprocessing result, the features of the target behavior event of the object to be analyzed, where the features of the target behavior event include at least one of temporal features, spatial features, sequence features, and advanced features.
Optionally, referring to fig. 6, the preprocessing module 502 includes:
a cleaning unit 5021, configured to perform data cleaning on the record information of the object to be analyzed;
the exploration unit 5022 is configured to perform data exploration based on the cleaned recorded information and obtain the preprocessing result based on the exploration result.
Optionally, the cleaning unit 5021 is configured to delete useless data in the record information of the object to be analyzed, and unify a data format;
the exploration unit 5022 is used for exploring the data in the cleaned recorded information, selecting a reference time span to intercept the recorded information of the object to be analyzed, and taking the intercepted result as a preprocessing result of preprocessing the recorded information of the object to be analyzed.
Optionally, the extracting module 503 is configured to determine a single-type behavior event and a multi-type behavior event in the target behavior events according to the preprocessing result; acquiring the characteristics of single-type behavior events and the characteristics of multi-type behavior events; features of the target behavioral event are extracted based on features of the single type behavioral event and features of the multiple types behavioral event.
Optionally, the features of the target behavioral event include a sequence feature, see fig. 7, the apparatus further comprising:
the segmentation module 504 is configured to extract a sequence of the object to be analyzed according to the preprocessing result before extracting the sequence feature of the target behavior event, and segment the sequence of the object to be analyzed;
an extracting module 503, configured to extract a sequence feature of the target behavior event based on the segmented sequence.
Optionally, the segmentation module 504 is configured to segment the sequence of the object to be analyzed according to a fixed reference time window, or to split the sequence of the object to be analyzed into segments according to a reference time interval threshold.
Optionally, the temporal features are features extracted from time information and include at least one of frequency statistics features, time interval features, and distribution features at different hierarchical granularities.
Optionally, the spatial features are features extracted from location information and include at least one of frequency statistics features and distribution features at different hierarchical granularities, the frequency statistics features at different hierarchical granularities including the number of times places of different levels are involved, and the distribution features at different hierarchical granularities including the number of places of different levels involved and the proportion of places at different levels.
Optionally, the sequence features are features extracted from a sequence composed of the behavior event records of the object to be analyzed, including at least one of the number of sequence segments and the temporal and spatial features involved within different sequence segments or between segments.
Optionally, the advanced behavior features are features extracted according to a reference algorithm that is dynamically acquired according to the application scenario, and include at least one of behavior sequence pattern features, embedding features, and anomaly score features.
According to the device provided by the embodiments of the present invention, the target behavior event of the object to be analyzed is acquired, the recorded information of the object to be analyzed is preprocessed, and the features of the target behavior event are then extracted, thereby realizing feature extraction for behavior events. The extracted features have wider coverage and are more general, can satisfy most requirements for features to be extracted from events, and expand the application range of the extracted features.
It should be noted that, when the apparatus provided in the foregoing embodiment implements the functions thereof, only the division of the foregoing functional modules is used as an example, in practical application, the foregoing functional allocation may be implemented by different functional modules, that is, the internal structure of the terminal is divided into different functional modules, so as to implement all or part of the functions described above. In addition, the apparatus and the method embodiments provided in the foregoing embodiments belong to the same concept, and specific implementation processes of the apparatus and the method embodiments are detailed in the method embodiments and are not repeated herein.
Referring to fig. 8, a schematic structural diagram of a terminal 800 for feature extraction according to an embodiment of the present disclosure is shown. The terminal 800 may be a portable mobile terminal, such as a smart phone, a tablet computer, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a notebook computer, or a desktop computer. The terminal 800 may also be referred to by other names such as user terminal, portable terminal, laptop terminal, or desktop terminal.
In general, the terminal 800 includes: a processor 801 and a memory 802.
Processor 801 may include one or more processing cores, such as a 4-core processor or an 8-core processor. The processor 801 may be implemented in at least one hardware form of DSP (Digital Signal Processing), FPGA (Field-Programmable Gate Array), and PLA (Programmable Logic Array). The processor 801 may also include a main processor and a coprocessor. The main processor is a processor for processing data in the awake state, also referred to as a CPU (Central Processing Unit); the coprocessor is a low-power processor for processing data in the standby state. In some embodiments, the processor 801 may integrate a GPU (Graphics Processing Unit) responsible for rendering and drawing the content that the display screen needs to display. In some embodiments, the processor 801 may further include an AI (Artificial Intelligence) processor for processing computing operations related to machine learning.
Memory 802 may include one or more computer-readable storage media, which may be non-transitory. Memory 802 may also include high-speed random access memory as well as non-volatile memory, such as one or more disk storage devices or flash storage devices. In some embodiments, a non-transitory computer-readable storage medium in memory 802 is used to store at least one instruction, the at least one instruction being executed by processor 801 to implement the feature extraction method provided by the method embodiments herein.
In some embodiments, the terminal 800 may optionally further include a peripheral device interface 803 and at least one peripheral device. The processor 801, the memory 802, and the peripheral device interface 803 may be connected by buses or signal lines. Each peripheral device may be connected to the peripheral device interface 803 via a bus, a signal line, or a circuit board. Specifically, the peripheral devices include at least one of a radio frequency circuit 804, a touch display screen 805, a camera assembly 806, an audio circuit 807, a positioning component 808, and a power supply 809.
The peripheral device interface 803 may be used to connect at least one I/O (Input/Output) related peripheral device to the processor 801 and the memory 802. In some embodiments, the processor 801, the memory 802, and the peripheral device interface 803 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 801, the memory 802, and the peripheral device interface 803 may be implemented on a separate chip or circuit board, which is not limited in this embodiment.
The radio frequency circuit 804 is configured to receive and transmit RF (Radio Frequency) signals, also known as electromagnetic signals. The radio frequency circuit 804 communicates with communication networks and other communication devices via electromagnetic signals, converting electrical signals into electromagnetic signals for transmission, or converting received electromagnetic signals into electrical signals. Optionally, the radio frequency circuit 804 includes an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so on. The radio frequency circuit 804 may communicate with other devices via at least one wireless communication protocol, including but not limited to metropolitan area networks, various generations of mobile communication networks (2G, 3G, 4G, and 5G), wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the radio frequency circuit 804 may also include NFC (Near Field Communication) related circuitry, which is not limited in this application.
The display 805 is used to display a UI (User Interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display 805 is a touch display, it also has the ability to collect touch signals at or above its surface. The touch signal may be input to the processor 801 as a control signal for processing. At this time, the display 805 may also be used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, there may be one display 805, disposed on the front panel of the terminal 800; in other embodiments, there may be at least two displays 805, respectively disposed on different surfaces of the terminal 800 or in a folded design; in still other embodiments, the display 805 may be a flexible display disposed on a curved or folded surface of the terminal 800. Furthermore, the display 805 may be arranged in a non-rectangular irregular shape, that is, an irregularly shaped screen. The display 805 may be made of LCD (Liquid Crystal Display), OLED (Organic Light-Emitting Diode), or other materials.
The camera assembly 806 is used to capture images or video. Optionally, the camera assembly 806 includes a front camera and a rear camera. Typically, the front camera is disposed on the front panel of the terminal and the rear camera is disposed on the rear surface of the terminal. In some embodiments, there are at least two rear cameras, each being any one of a main camera, a depth-of-field camera, a wide-angle camera, and a telephoto camera, so as to realize a background blurring function by fusing the main camera and the depth-of-field camera, panoramic shooting and VR (Virtual Reality) shooting functions by fusing the main camera and the wide-angle camera, or other fusion shooting functions. In some embodiments, the camera assembly 806 may further include a flash. The flash may be a single-color-temperature flash or a dual-color-temperature flash. A dual-color-temperature flash is a combination of a warm-light flash and a cold-light flash and can be used for light compensation at different color temperatures.
Audio circuitry 807 may include a microphone and a speaker. The microphone is used to collect sound waves from the user and the environment, convert them into electrical signals, and input them to the processor 801 for processing, or input them to the radio frequency circuit 804 to realize voice communication. For stereo acquisition or noise reduction purposes, multiple microphones may be disposed at different parts of the terminal 800. The microphone may also be an array microphone or an omnidirectional pickup microphone. The speaker is used to convert electrical signals from the processor 801 or the radio frequency circuit 804 into sound waves. The speaker may be a conventional thin-film speaker or a piezoelectric ceramic speaker. When the speaker is a piezoelectric ceramic speaker, it can convert electrical signals not only into sound waves audible to humans but also into sound waves inaudible to humans for ranging and other purposes. In some embodiments, the audio circuit 807 may also include a headphone jack.
The location component 808 is utilized to locate the current geographic location of the terminal 800 to enable navigation or LBS (Location Based Service, location-based services).
A power supply 809 is used to power the various components in the terminal 800. The power supply 809 may be an alternating current, direct current, disposable battery, or rechargeable battery. When the power supply 809 includes a rechargeable battery, the rechargeable battery may support wired or wireless charging. The rechargeable battery may also be used to support fast charge technology.
In some embodiments, the terminal 800 further includes one or more sensors 810. The one or more sensors 810 include, but are not limited to: an acceleration sensor 811, a gyroscope sensor 812, a pressure sensor 813, a fingerprint sensor 814, an optical sensor 815, and a proximity sensor 816.
The acceleration sensor 811 may detect the magnitudes of accelerations on the three coordinate axes of a coordinate system established with the terminal 800. For example, the acceleration sensor 811 may be used to detect the components of gravitational acceleration on the three coordinate axes. The processor 801 may control the touch display screen 805 to display the user interface in a landscape view or a portrait view according to the gravitational acceleration signal acquired by the acceleration sensor 811. The acceleration sensor 811 may also be used to acquire motion data of a game or of the user.
The gyro sensor 812 may detect a body direction and a rotation angle of the terminal 800, and the gyro sensor 812 may collect a 3D motion of the user to the terminal 800 in cooperation with the acceleration sensor 811. The processor 801 may implement the following functions based on the data collected by the gyro sensor 812: motion sensing (e.g., changing UI according to a tilting operation by a user), image stabilization at shooting, game control, and inertial navigation.
The pressure sensor 813 may be disposed at a side frame of the terminal 800 and/or at a lower layer of the touch display 805. When the pressure sensor 813 is disposed on a side frame of the terminal 800, a grip signal of the terminal 800 by a user may be detected, and the processor 801 performs left-right hand recognition or shortcut operation according to the grip signal collected by the pressure sensor 813. When the pressure sensor 813 is disposed at the lower layer of the touch display screen 805, the processor 801 controls the operability control on the UI interface according to the pressure operation of the user on the touch display screen 805. The operability controls include at least one of a button control, a scroll bar control, an icon control, and a menu control.
The fingerprint sensor 814 is used to collect a fingerprint of a user, and the processor 801 identifies the identity of the user based on the fingerprint collected by the fingerprint sensor 814, or the fingerprint sensor 814 identifies the identity of the user based on the collected fingerprint. Upon recognizing that the user's identity is a trusted identity, the processor 801 authorizes the user to perform relevant sensitive operations including unlocking the screen, viewing encrypted information, downloading software, paying for and changing settings, etc. The fingerprint sensor 814 may be provided on the front, back, or side of the terminal 800. When a physical key or vendor Logo is provided on the terminal 800, the fingerprint sensor 814 may be integrated with the physical key or vendor Logo.
The optical sensor 815 is used to collect the ambient light intensity. In one embodiment, the processor 801 may control the display brightness of the touch display screen 805 based on the intensity of ambient light collected by the optical sensor 815. Specifically, when the intensity of the ambient light is high, the display brightness of the touch display screen 805 is turned up; when the ambient light intensity is low, the display brightness of the touch display screen 805 is turned down. In another embodiment, the processor 801 may also dynamically adjust the shooting parameters of the camera module 806 based on the ambient light intensity collected by the optical sensor 815.
A proximity sensor 816, also referred to as a distance sensor, is typically provided on the front panel of the terminal 800. The proximity sensor 816 is used to collect the distance between the user and the front of the terminal 800. In one embodiment, when the proximity sensor 816 detects that the distance between the user and the front of the terminal 800 gradually decreases, the processor 801 controls the touch display 805 to switch from the bright screen state to the off screen state; when the proximity sensor 816 detects that the distance between the user and the front surface of the terminal 800 gradually increases, the processor 801 controls the touch display 805 to switch from the off-screen state to the on-screen state.
Those skilled in the art will appreciate that the structure shown in fig. 8 is not limiting and that more or fewer components than shown may be included or certain components may be combined or a different arrangement of components may be employed.
Referring to fig. 9, a schematic structural diagram of a server according to an embodiment of the present invention is shown. The server may be used to implement the feature extraction method provided in the foregoing embodiments. Specifically:
The server 900 may vary considerably in configuration or performance, and may include one or more central processing units (CPUs) 922 (e.g., one or more processors), memory 932, and one or more storage media 930 (e.g., one or more mass storage devices) storing application programs 942 or data 944. The memory 932 and the storage medium 930 may be transitory or persistent. The programs stored in the storage medium 930 may include one or more modules (not shown), each of which may include a series of instruction operations on the server. Further, the central processing unit 922 may be configured to communicate with the storage medium 930 and to execute, on the server 900, the series of instruction operations in the storage medium 930.
The server 900 may also include one or more power supplies 926, one or more wired or wireless network interfaces 950, one or more input/output interfaces 958, one or more keyboards 956, and/or one or more operating systems 941, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, and the like.
The server 900 may include a memory and one or more programs, where the one or more programs are stored in the memory and configured to be executed by one or more processors, the one or more programs including instructions for performing the feature extraction method described above.
In an exemplary embodiment, there is also provided a feature extraction device that includes a processor and a memory having at least one instruction stored therein. The at least one instruction is configured to be loaded and executed by one or more processors to implement any of the feature extraction methods described above.
In an exemplary embodiment, a computer readable storage medium is also provided, in which at least one instruction is stored, which when executed by a processor of a computer terminal implements any of the feature extraction methods described above.
Any combination of the above-mentioned optional solutions may be adopted to form an optional embodiment of the present disclosure, which is not described herein in detail.
The foregoing description covers only preferred embodiments of the invention and is not intended to limit it; any modifications, equivalents, and improvements made within the spirit and principles of the invention are intended to be included within its protection scope.

Claims (18)

1. A method of feature extraction, the method comprising:
acquiring a target behavior event of an object to be analyzed, wherein the target behavior event is recorded information about the object to be analyzed recorded in a business system, the recorded information comprises a place and an event start time, and the target behavior event comprises single-type behavior events and multi-type behavior events;
preprocessing the recorded information of the object to be analyzed to obtain a preprocessing result;
extracting a sequence of the object to be analyzed according to the preprocessing result; segmenting the sequence of the object to be analyzed according to a fixed reference time window, or, when the time interval between two consecutive behaviors in the sequence exceeds a reference time interval threshold, splitting the sequence of the object to be analyzed at that point according to the reference time interval threshold; and extracting sequence features of the target behavior event based on the segmented sequence, wherein the sequence is obtained by arranging the preprocessing result in order of increasing event start time;
dividing a target group into positive and negative sample sets, using a sequence mining algorithm to mine the sequence patterns corresponding to the positive and negative sample sets respectively, and determining whether the object to be analyzed exhibits a sequence pattern, together with the number of times the sequence pattern occurs, as the sequence pattern features of the object to be analyzed, wherein the sequence pattern features belong to advanced behavior features, the advanced behavior features are obtained by performing advanced feature calculation on the preprocessing result based on a reference algorithm, and the reference algorithm is dynamically acquired according to the application scenario;
dividing the places in the preprocessing result according to administrative areas or according to different types or grades of the places where events occur, so as to extract spatial features;
extracting temporal features of the object to be analyzed according to the preprocessing result;
wherein the sequence features, the advanced behavior features, the spatial features, and the temporal features belong to the features of the target behavior event, and the features of the target behavior event are used for person profiling, anomaly detection, or predictive analysis.
2. The method according to claim 1, wherein preprocessing the recorded information of the object to be analyzed to obtain a preprocessing result comprises:
carrying out data cleaning on the recorded information of the object to be analyzed;
and carrying out data exploration based on the cleaned recorded information, and acquiring a preprocessing result based on the exploration result.
3. The method according to claim 2, wherein the data cleansing of the recorded information of the object to be analyzed comprises:
deleting useless data in the record information of the object to be analyzed, and unifying data formats;
performing data exploration based on the cleaned recorded information, and acquiring a preprocessing result based on the exploration result, wherein the method comprises the following steps:
exploring the data in the cleaned recorded information, selecting a reference time span to intercept the recorded information of the object to be analyzed, and taking the interception result as the preprocessing result of preprocessing the recorded information of the object to be analyzed.
4. The method of claim 1, wherein extracting the features of the target behavior event of the object to be analyzed comprises:
determining a single-type behavior event and a multi-type behavior event in the target behavior event according to the preprocessing result;
acquiring the features of the single-type behavior event and the features of the multi-type behavior event;
and extracting the features of the target behavior event based on the features of the single-type behavior event and the features of the multi-type behavior event.
5. The method according to any one of claims 1-4, wherein the temporal features are features extracted from time information, and include at least one of frequency statistics features, time interval features, and distribution features at different hierarchical granularities.
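As an illustration of claim 5 only (the event times are invented and this is not the claimed implementation), the three kinds of temporal features could be derived like this:

from collections import Counter
from datetime import datetime

times = sorted([
    datetime(2019, 1, 1, 8), datetime(2019, 1, 1, 20), datetime(2019, 1, 3, 8),
])

# Frequency statistics at different hierarchical granularities:
# per day and per hour of day.
per_day = Counter(t.date() for t in times)
per_hour = Counter(t.hour for t in times)

# Time interval features: gaps between consecutive events, in hours.
gaps = [(b - a).total_seconds() / 3600 for a, b in zip(times, times[1:])]

# Distribution features: share of events falling in each hour-of-day bucket.
hour_share = {h: n / len(times) for h, n in per_hour.items()}

print(per_day, gaps, hour_share)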
6. The method according to any one of claims 1-4, wherein the spatial features are features extracted from location information, and include at least one of frequency statistics features and distribution features at different hierarchical granularities, the frequency statistics features at different hierarchical granularities including the number of times places at different hierarchies are involved, and the distribution features at different hierarchical granularities including the number of distinct places involved and the proportion of each place at different hierarchies.
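Likewise for claim 6, a sketch under assumed hierarchy labels (city, district, place), illustrating frequency and distribution features at different hierarchical granularities:

from collections import Counter

# Hypothetical events, each place tagged at several hierarchy levels.
events = [
    {"city": "Hangzhou", "district": "Binjiang", "place": "mall_B"},
    {"city": "Hangzhou", "district": "Xihu", "place": "station_A"},
    {"city": "Hangzhou", "district": "Binjiang", "place": "mall_B"},
]

# Frequency statistics at each hierarchy level.
by_city = Counter(e["city"] for e in events)
by_district = Counter(e["district"] for e in events)
by_place = Counter(e["place"] for e in events)

# Distribution features: number of distinct places and each place's proportion.
distinct_places = len(by_place)
place_share = {p: n / len(events) for p, n in by_place.items()}

print(by_city, by_district, distinct_places, place_share)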
7. The method according to any one of claims 1-4, wherein the sequence features are features extracted from the sequence of behavior event records of the object to be analyzed, and include at least one of the number of sequence segments, and the temporal features and spatial features involved within or between different sequence segments.
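Building on the segmentation sketch after claim 1, claim-7 style sequence features can be read off the resulting segments; segment contents and field names remain hypothetical:

from datetime import datetime

segments = [
    [{"start_time": datetime(2019, 1, 1, 8), "place": "station_A"},
     {"start_time": datetime(2019, 1, 1, 9), "place": "mall_B"}],
    [{"start_time": datetime(2019, 1, 3, 8), "place": "station_A"}],
]

num_segments = len(segments)

# Temporal feature within a segment: its duration in hours.
durations = [(s[-1]["start_time"] - s[0]["start_time"]).total_seconds() / 3600
             for s in segments]

# Temporal feature between segments: gap from one segment's end to the next's start.
gaps = [(b[0]["start_time"] - a[-1]["start_time"]).total_seconds() / 3600
        for a, b in zip(segments, segments[1:])]

# Spatial feature within segments: number of distinct places per segment.
places = [len({r["place"] for r in s}) for s in segments]

print(num_segments, durations, gaps, places)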
8. The method according to any one of claims 1-4, wherein the advanced behavior features are features extracted by a reference algorithm that is dynamically acquired according to the application scenario, and include at least one of behavior sequence pattern features, embedding features, and outlier score features.
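The sequence pattern features of claim 8 (and of claim 1) can be sketched with a deliberately naive miner: frequent 2-grams over positive and negative sample sets stand in for a full sequence mining algorithm such as PrefixSpan, which the claims name only generically. All sequences below are invented.

from collections import Counter
from itertools import islice

def bigrams(seq):
    # Ordered pairs of adjacent behaviors: the simplest sequential pattern.
    return list(zip(seq, islice(seq, 1, None)))

positive = [["home", "station", "mall"], ["home", "station", "office"]]
negative = [["mall", "home"], ["office", "home", "mall"]]

def frequent_patterns(samples, min_support=2):
    # Count each pattern once per sample, keep those meeting the support.
    counts = Counter(p for s in samples for p in set(bigrams(s)))
    return {p for p, n in counts.items() if n >= min_support}

patterns = frequent_patterns(positive) | frequent_patterns(negative)

# Sequence pattern features of one object to be analyzed:
# whether each mined pattern occurs, and how many times.
target = ["home", "station", "mall", "home", "station"]
target_counts = Counter(bigrams(target))
features = {p: target_counts.get(p, 0) for p in patterns}
print(features)  # {('home', 'station'): 2}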
9. A feature extraction apparatus, the apparatus comprising:
an acquisition module, configured to acquire a target behavior event of an object to be analyzed, wherein the target behavior event is information about the object to be analyzed recorded in a business system, the recorded information comprises a place and an event starting time, and the target behavior event comprises a single-type behavior event and a multi-type behavior event;
a preprocessing module, configured to preprocess the recorded information of the object to be analyzed to obtain a preprocessing result;
a segmentation module, configured to extract a sequence of the object to be analyzed according to the preprocessing result, and to segment the sequence of the object to be analyzed according to a fixed reference time window, or to segment the sequence of the object to be analyzed according to a reference time interval value when the time interval between two adjacent behaviors in the sequence exceeds a reference time interval threshold, wherein the sequence is obtained by arranging the preprocessing result in increasing order of event starting time;
an extraction module, configured to extract the sequence features of the target behavior event based on the segmented sequence;
the extraction module is further configured to divide a target group into a positive sample set and a negative sample set, calculate the sequence patterns corresponding to the positive and negative sample sets respectively by using a sequence mining algorithm, and determine whether the object to be analyzed has the sequence patterns, together with the number of occurrences of the sequence patterns, as sequence pattern features of the object to be analyzed, wherein the sequence pattern features belong to advanced behavior features, the advanced behavior features are obtained by performing advanced feature calculation on the preprocessing result based on a reference algorithm, and the reference algorithm is dynamically acquired according to an application scenario;
the extraction module is further configured to divide the places in the preprocessing result according to administrative areas or according to different types and grades of the places where the events occur, so as to extract spatial features, and to extract the temporal features of the object to be analyzed according to the preprocessing result,
wherein the sequence features, the advanced behavior features, the spatial features and the temporal features belong to the features of the target behavior event, and the features of the target behavior event are used for person profiling, anomaly detection or predictive analysis.
10. The apparatus of claim 9, wherein the preprocessing module comprises:
a cleaning unit, configured to perform data cleaning on the recorded information of the object to be analyzed;
and an exploration unit, configured to perform data exploration based on the cleaned recorded information and acquire a preprocessing result based on the exploration result.
11. The apparatus according to claim 10, wherein the cleaning unit is configured to delete useless data in the recorded information of the object to be analyzed and unify data formats;
and the exploration unit is configured to explore the data in the cleaned recorded information, select a reference time span to intercept the recorded information of the object to be analyzed, and take the interception result as the preprocessing result of preprocessing the recorded information of the object to be analyzed.
12. The apparatus of claim 9, wherein the extraction module is configured to determine a single-type behavior event and a multi-type behavior event in the target behavior event according to the preprocessing result; acquire the features of the single-type behavior event and the features of the multi-type behavior event; and extract the features of the target behavior event based on the features of the single-type behavior event and the features of the multi-type behavior event.
13. The apparatus according to any one of claims 9-12, wherein the temporal features are features extracted from time information, and include at least one of frequency statistics features, time interval features, and distribution features at different hierarchical granularities.
14. The apparatus according to any one of claims 9-12, wherein the spatial features are features extracted from location information, and include at least one of frequency statistics features and distribution features at different hierarchical granularities, the frequency statistics features at different hierarchical granularities including the number of times places at different hierarchies are involved, and the distribution features at different hierarchical granularities including the number of distinct places involved and the proportion of each place at different hierarchies.
15. The apparatus according to any one of claims 9-12, wherein the sequence features are features extracted from the sequence of behavior event records of the object to be analyzed, and include at least one of the number of sequence segments, and the temporal features and spatial features involved within or between different sequence segments.
16. The apparatus according to any one of claims 9-12, wherein the advanced behavior features are features extracted by a reference algorithm that is dynamically acquired according to the application scenario, and include at least one of behavior sequence pattern features, embedding features, and outlier score features.
17. A feature extraction device, comprising a processor and a memory, wherein the memory stores at least one instruction that is loaded and executed by the processor to implement the feature extraction method of any one of claims 1-8.
18. A computer-readable storage medium, wherein the storage medium stores at least one instruction that is loaded and executed by a processor to implement the feature extraction method of any one of claims 1-8.
CN201910087341.XA 2019-01-29 2019-01-29 Feature extraction method, device, equipment and readable storage medium Active CN111488899B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910087341.XA CN111488899B (en) 2019-01-29 2019-01-29 Feature extraction method, device, equipment and readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910087341.XA CN111488899B (en) 2019-01-29 2019-01-29 Feature extraction method, device, equipment and readable storage medium

Publications (2)

Publication Number Publication Date
CN111488899A CN111488899A (en) 2020-08-04
CN111488899B true CN111488899B (en) 2024-02-23

Family

ID=71811515

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910087341.XA Active CN111488899B (en) 2019-01-29 2019-01-29 Feature extraction method, device, equipment and readable storage medium

Country Status (1)

Country Link
CN (1) CN111488899B (en)


Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020133456A1 (en) * 2000-12-11 2002-09-19 Lancaster John M. Systems and methods for using derivative financial products in capacity-driven industries
US7606425B2 (en) * 2004-09-09 2009-10-20 Honeywell International Inc. Unsupervised learning of events in a video sequence
US8321604B2 (en) * 2010-08-27 2012-11-27 Total Phase, Inc. Real-time USB class level decoding
US11016730B2 (en) * 2016-07-28 2021-05-25 International Business Machines Corporation Transforming a transactional data set to generate forecasting and prediction insights

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2005301523A (en) * 2004-04-08 2005-10-27 Celestar Lexico-Sciences Inc Apparatus and method for predicting vaccine candidate partial sequence, apparatus and method for predicting mhc-binding partial sequence, program and recording medium
CN101382998A (en) * 2008-08-18 2009-03-11 华为技术有限公司 Testing device and method of switching of video scenes
DE102012206313A1 (en) * 2012-04-17 2013-10-17 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Device for recognizing unusual acoustic event in audio recording, has detection device detecting acoustic event based on error vectors, which describe deviation of test vectors from approximated test vectors
CN103761635A (en) * 2014-01-14 2014-04-30 大连理工大学 Three-dimensional multi-box specially-structured cargo loading optimizing method
EP3309793A1 (en) * 2016-10-17 2018-04-18 Hitachi, Ltd. Controlling a device based on log and sensor data
CN108574839A (en) * 2017-03-08 2018-09-25 杭州海康威视数字技术股份有限公司 A kind of tollgate devices method for detecting abnormality and device
CN107807997A (en) * 2017-11-08 2018-03-16 北京奇虎科技有限公司 User's portrait building method, device and computing device based on big data
CN108288032A (en) * 2018-01-08 2018-07-17 深圳市腾讯计算机***有限公司 Motion characteristic acquisition methods, device and storage medium
CN108509979A (en) * 2018-02-28 2018-09-07 努比亚技术有限公司 A kind of method for detecting abnormality, server and computer readable storage medium
CN108491526A (en) * 2018-03-28 2018-09-04 腾讯科技(深圳)有限公司 Daily record data processing method, device, electronic equipment and storage medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Hyperparameter selection of one-class support vector machine by self-adaptive data shifting; Wang, Siqi; PATTERN RECOGNITION; full text *
Research on a Web mining data source model based on application-layer records; Dong Xianghe; Zhong Congyou; Dong Ronghe; Computer Engineering and Design (Issue 01); full text *

Also Published As

Publication number Publication date
CN111488899A (en) 2020-08-04

Similar Documents

Publication Publication Date Title
CN108924737B (en) Positioning method, device, equipment and computer readable storage medium
CN107784089B (en) Multimedia data storage method, processing method and mobile terminal
CN111127509B (en) Target tracking method, apparatus and computer readable storage medium
CN108320756B (en) Method and device for detecting whether audio is pure music audio
CN112084811B (en) Identity information determining method, device and storage medium
CN111177137B (en) Method, device, equipment and storage medium for data deduplication
CN111754386B (en) Image area shielding method, device, equipment and storage medium
CN110942046B (en) Image retrieval method, device, equipment and storage medium
CN112148899A (en) Multimedia recommendation method, device, equipment and storage medium
CN113032587B (en) Multimedia information recommendation method, system, device, terminal and server
CN110705614A (en) Model training method and device, electronic equipment and storage medium
CN111192072B (en) User grouping method and device and storage medium
CN113987326B (en) Resource recommendation method and device, computer equipment and medium
CN111857793B (en) Training method, device, equipment and storage medium of network model
CN110471614B (en) Method for storing data, method and device for detecting terminal
CN110890969A (en) Method and device for mass-sending message, electronic equipment and storage medium
CN112989198B (en) Push content determination method, device, equipment and computer-readable storage medium
CN107944024B (en) Method and device for determining audio file
CN111797017B (en) Method, device, test equipment and storage medium for storing log
CN112001442B (en) Feature detection method, device, computer equipment and storage medium
CN110737692A (en) data retrieval method, index database establishment method and device
CN111428080B (en) Video file storage method, video file search method and video file storage device
CN108595104B (en) File processing method and terminal
CN111488899B (en) Feature extraction method, device, equipment and readable storage medium
CN113936240A (en) Method, device and equipment for determining sample image and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant