CN109947985A - Neural network processing method and apparatus applied in an online system - Google Patents
Neural network processing method and apparatus applied in an online system
- Publication number: CN109947985A
- Application number: CN201910163118.9A
- Authority
- CN
- China
- Prior art keywords
- model
- neural network
- training
- network
- file
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Landscapes
- Management, Administration, Business Operations System, And Electronic Commerce (AREA)
Abstract
The invention discloses a neural network processing method and apparatus applied in an online system. The method comprises: obtaining a neural network model and training samples, and training the neural network model on the training samples to obtain a target network; obtaining a model file corresponding to the target network by freezing the model graph, the model file containing the network structure and network parameters of the target network; and loading and running the model file in the online system, the online system providing the running environment for the model file. By improving the offline training, freezing, loading, and online application stages of the neural network pipeline, the invention guarantees consistency between the neural network used by the online system and the neural network obtained by offline training, reduces online development cost, shortens the online development cycle of offline-trained neural networks, and significantly improves online response speed.
Description
Technical field
The present invention relates to the field of neural networks, and in particular to a neural network processing method and apparatus applied in an online system.
Background technique
With the continuous development of neural networks in academic research, their applicable scenarios have steadily multiplied. However, neural network structures are complex, and the original offline training environment cannot be grafted directly onto a production service environment, so a trained neural network is difficult to apply directly in a business scenario for forward inference. The separation of a neural network's offline training from its forward inference in production thus poses new engineering problems.
To address this problem, the prior art trains the neural network offline with an open-source framework and then performs forward inference in the actual business scenario by reconstructing the trained network on top of a matrix-operation library. However, implementing network structures on a matrix library lacks flexibility, makes it difficult to guarantee consistency between the network in the business scenario and the offline-trained network, and is comparatively time-consuming.
Summary of the invention
The present invention provides a neural network processing method and apparatus applied in an online system, and a neural-network-based recommender system.
In a first aspect, the invention provides a neural network processing method applied in an online system, the method comprising:
obtaining a neural network model and training samples, and training the neural network model on the training samples to obtain a target network;
obtaining a model file corresponding to the target network by freezing the model graph, the model file containing the network structure and network parameters of the target network;
loading and running the model file in the online system, the online system providing the running environment for the model file.
In a second aspect, the invention provides a neural network processing apparatus applied in an online system, the apparatus comprising:
an offline training module, configured to obtain a neural network model and training samples and to train the neural network model on the training samples to obtain a target network;
a freezing module, configured to obtain a model file corresponding to the target network by freezing the model graph, the model file containing the network structure and network parameters of the target network;
a running module, configured to load and run the model file in the online system, the online system providing the running environment for the model file.
In a third aspect, the invention provides a neural-network-based recommender system, the system comprising:
an offline training module, configured to obtain a neural network model and training samples and to train the neural network model on the training samples to obtain a target network;
a freezing module, configured to obtain a model file corresponding to the target network by freezing the model graph, the model file containing the network structure and network parameters of the target network;
a running module, configured to load and run the model file in an online component of the recommender system, the online component providing the running environment for the model file.
In a fourth aspect, the invention provides a computer-readable storage medium storing a program which, when executed, implements the neural network processing method applied in an online system.
The neural network processing method and apparatus applied in an online system and the neural-network-based recommender system provided by the invention improve the offline training, freezing, loading, and online application stages of the neural network pipeline, thereby guaranteeing consistency between the neural network used by the online system and the one obtained by offline training, reducing online development cost, shortening the online development cycle of offline-trained neural networks, and significantly improving online response speed.
Detailed description of the invention
To explain the technical solutions of the embodiments of the invention and of the prior art more clearly, the drawings needed for describing the embodiments or the prior art are briefly introduced below. The drawings described below are obviously only some embodiments of the invention; those of ordinary skill in the art can derive other drawings from them without creative effort.
Fig. 1 is a flowchart of a neural network processing method applied in an online system provided by the invention;
Fig. 2 is a flowchart of an online-system construction method provided by the invention;
Fig. 3 is a flowchart of a multi-thread-based neural network loading and running method provided by the invention;
Fig. 4 is a flowchart of loading a new model file through the loading thread, provided by the invention;
Fig. 5 is a schematic flowchart of training the neural network model on the training samples to obtain the offline-trained target network, provided by the invention;
Fig. 6 is a block diagram of a neural network processing apparatus applied in an online system provided by the invention;
Fig. 7 is a block diagram of the offline training module provided by the invention;
Fig. 8 is a module-interaction diagram of the recommender system provided by the invention;
Fig. 9 is a schematic diagram of the infrastructure of the TensorFlow cluster provided by the invention;
Fig. 10 is a block diagram of the freezing module provided by the invention;
Fig. 11 is a block diagram of the running module provided by the invention;
Fig. 12 is a block diagram of the operational module provided by the invention;
Fig. 13 is a schematic flowchart of online recommendation provided by the invention;
Fig. 14 is a schematic diagram of the recommendation results of video recommendation performed for a user by a neural-network-based recommender system provided by the invention;
Fig. 15 is a schematic diagram of the hardware structure of a device for implementing the method provided by an embodiment of the invention.
Specific embodiment
The technical solutions in the embodiments of the invention are described below clearly and completely with reference to the accompanying drawings. The described embodiments are obviously only some, not all, of the embodiments of the invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the invention without creative effort fall within the protection scope of the invention.
It should be noted that the terms "first", "second", and so on in the specification, claims, and drawings are used to distinguish similar objects, not to describe a particular order or sequence. Data so labelled are interchangeable where appropriate, so that the embodiments described herein can be implemented in orders other than those illustrated or described. In addition, the terms "comprise" and "have" and their variants are intended to cover non-exclusive inclusion: a process, method, system, product, or server containing a series of steps or units is not limited to the steps or units explicitly listed, but may include other steps or units not explicitly listed or inherent to it.
A neural network trained in an offline environment is difficult to apply directly to a service environment, for a variety of reasons. Taking the application of a deep learning model to an online recommender system as an example, one must overcome problems such as: the programming language used to build the network offline being incompatible with the service environment; the offline-trained network being hard to load online; and an offline network reconstructed online failing to meet the latency requirements of the business. To solve the technical problem that a network trained offline is difficult to use in a service environment, an embodiment of the invention provides a neural network processing method applied in an online system. As shown in Fig. 1, the method comprises:
S101. Obtain a neural network model and training samples, and train the neural network model on the training samples to obtain a target network.
Specifically, the embodiment does not limit the concrete type of the neural network model, which includes but is not limited to deep learning, reinforcement learning, transfer learning, and deep reinforcement learning models. Different model types suit different service environments and solve different problems, and the type can be selected according to the actual business scenario.
Further, each type of neural network model may comprise one or more specific network structures; on top of the chosen model type, a specific structure can likewise be selected for the actual business scenario. Taking deep learning models as an example, the structure may be Wide&Deep, DeepFM, LSTM, or similar.
The Wide&Deep model consists of two parts, a Wide part and a Deep part, and is intended to give the trained model both memorization and generalization ability.
The DeepFM model also consists of two parts, a factorization machine part and a neural network part, responsible for extracting low-order and high-order features respectively. The two parts share the same input, which makes the model well suited to mining user preferences in a recommender system.
LSTM (long short-term memory) is a recurrent neural network over time, suited to processing and predicting important events with relatively long intervals and delays in a time series. In a recommender system it fits sequential prediction scenarios.
Step S101 can be executed in an offline environment. To obtain target networks continuously, step S101 can be executed at a fixed frequency; the specific execution method is detailed below.
S103. Obtain a model file corresponding to the target network by freezing the model graph, the model file containing the network structure and network parameters of the target network.
The network structure of the offline-trained target network may contain multiple neural nodes, each with its own corresponding parameters; the network parameters in this step comprise the parameters of every node. The embodiment does not limit the node parameters specifically: the parameter types depend on the concrete type and structure of the target network, and may for example be weights, thresholds, or a node's number of links.
Specifically, the embodiment blends the network parameters into the network structure: the model parameters are converted into constants and frozen into the target network's structure, and loading the model file in the business scenario guarantees consistency between the structure of the target network and that of the neural network used online. By loading the model file, the service environment obtains the network structure and the network parameters of the target network at the same time.
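The parameter-to-constant folding described above can be sketched as follows. This is a minimal illustrative mock over a dict-based graph, not TensorFlow's actual freezing utility; the function name `freeze_model` and the node format are assumptions made for illustration only.

```python
import json

def freeze_model(graph_structure, parameters):
    """Fold trained parameters into the graph as constants, producing one
    self-contained model file holding structure and weights together."""
    frozen = {"nodes": []}
    for node in graph_structure["nodes"]:
        node = dict(node)  # copy so the training graph stays untouched
        if node["op"] == "Variable":
            # Replace each trainable variable with a constant carrying its
            # trained value -- the "freezing" of the model graph.
            node["op"] = "Const"
            node["value"] = parameters[node["name"]]
        frozen["nodes"].append(node)
    return json.dumps(frozen)  # serialized model-file content

# A toy two-node graph: one weight variable feeding a matmul.
graph = {"nodes": [{"name": "w", "op": "Variable"},
                   {"name": "y", "op": "MatMul", "inputs": ["x", "w"]}]}
model_file = freeze_model(graph, {"w": [[0.5, -1.2]]})
```

Loading `model_file` online then yields both the structure and the parameters in one step, which is the consistency guarantee the text describes.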
S105. Load and run the model file in the online system, the online system providing the running environment for the model file.
Specifically, by loading the model file into the online system, the embodiment allows the offline-trained target network to be applied in the online system, so that the online system can complete the relevant business functions based on the target network by running the model file. To enable the online system to run the model file, an embodiment of the invention further provides an online-system construction method, shown in Fig. 2, comprising:
S10. Extract the operators required by the neural network model, and generate an operator library from the extraction result.
Specifically, the operators are those required to run the neural network model; the operations in the various neural networks can all be obtained by combining the relevant operators.
S20. Extract the network structure of the neural network model, and construct an application base library from the extraction result and the operator library.
Specifically, the application base library is the foundation that enables the online system to run the model file. The network structure of the neural network model can be extracted from an existing neural network through open-source components, and the extraction result is then adapted to the online system through targeted modifications. Since the online system provides various business services to users, the application base library can take the form of a static library and serve as the running environment of the online system.
Specifically, the operator library obtained in step S10 can be added to the system in the form of compiled files before the static library is compiled, and the extraction result obtained in step S20 is then compiled together with the operator library into a static library. The embodiment uses a static library because, at the link stage, the object files produced by compilation are linked with the referenced static library and bundled into the executable, which is friendlier to the online system.
After the above construction process, the online system of the embodiment can support model files of various neural networks, so that the online system is decoupled from the neural networks obtained in the offline environment, the online system gains the ability to provide business services quickly based on a model file, and business latency is reduced.
Further, to let the online system serve users without interruption, the embodiment completes the loading and running of model files with multiple threads: while a loading thread loads a new model file, the serving thread keeps providing business services to users based on the system's current model file; once the loading thread has finished loading the new model file, the serving thread switches to the new model file to serve users. Specifically, an embodiment of the invention provides a multi-thread-based neural network loading and running method, shown in Fig. 3, comprising:
S1051. Provide services through the serving thread based on the current model file of the online system, and load a new model file through the loading thread.
Specifically, the new model file can be the model file obtained in step S103. To pick up the model file produced offline promptly, the embodiment checks at a preset interval whether a new model file has been generated, and if so, starts the loading procedure.
Specifically, to serve multiple users in parallel, there may be more than one loading thread and more than one serving thread in the embodiment.
Specifically, the serving thread obtains the user's related information, constructs features from that information, and provides services based on the features and the model file.
S1053. When the new model file has finished loading, the loading thread triggers the serving thread to provide business services using the new model file.
Specifically, the loading thread can trigger the serving thread to use the new model file either by modifying an update flag in the online system or by sending a load-completion notification to the serving thread.
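The serving-thread/loading-thread handover can be sketched as below. This is a hedged, minimal sketch: `ModelServer` and its callable stand-in "models" are hypothetical, and a short lock-protected swap plays the role of the update flag described in the text.

```python
import threading

class ModelServer:
    """Serving continues on the current model while a loading thread
    prepares the new one; the swap itself is a brief atomic section."""
    def __init__(self, model):
        self._model = model
        self._lock = threading.Lock()

    def serve(self, x):
        with self._lock:          # serving thread: always the current model
            return self._model(x)

    def load(self, new_model):
        with self._lock:          # loading thread: swap in the new model
            self._model = new_model

server = ModelServer(lambda x: x + 1)              # "current" model file
results = [server.serve(2)]                        # served by old model -> 3
loader = threading.Thread(target=server.load, args=(lambda x: x * 10,))
loader.start(); loader.join()                      # loading thread completes
results.append(server.serve(2))                    # served by new model -> 20
```

Because the swap is atomic, requests in flight always see a complete model, old or new, never a half-loaded one.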
Specifically, loading a new model file through the loading thread, as shown in Fig. 4, comprises:
S10511. Obtain the first digest of the model file.
Specifically, the first digest of the model file can be generated in step S103 and transmitted to the online system in step S105.
S10513. Extract a key parameter of the model file, and judge whether the value of the key parameter lies within a preset legal range to obtain a first pre-check result.
In one feasible embodiment, if the value of the key parameter lies within the preset legal range, the first pre-check result is true; otherwise it is false.
S10515. Obtain the second digest of the model file in the online system, and compare the first digest with the second digest to obtain a second pre-check result.
In this feasible embodiment, if the difference between the first digest and the second digest is less than a preset threshold, the second pre-check result is true; otherwise it is false.
S10517. Judge, from the first pre-check result and the second pre-check result, whether the model file is the model file to be loaded.
In this feasible embodiment, whether to load the model file can be decided from the logical AND of the first pre-check result and the second pre-check result: if the AND value is true, the model file is determined to be the model file to be loaded and can be loaded.
Pre-checking with the first and second pre-check results before loading a model file avoids the logic errors brought by loading an abnormal model file: it prevents loading a model file produced by a failed offline training run, and it also prevents loading a model file of an unexpected size.
S10519. If so, load the model file; otherwise, do not load it.
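The two pre-checks and their logical AND might look like the following sketch. The choice of MD5 and of exact digest equality are assumptions for illustration; the patent only speaks of comparing digests and checking a key parameter against a legal range.

```python
import hashlib

def validate_model_file(content, expected_digest, key_param, legal_range):
    """Combine both pre-checks with logical AND before loading."""
    lo, hi = legal_range
    param_ok = lo <= key_param <= hi                                # first pre-check
    digest_ok = hashlib.md5(content).hexdigest() == expected_digest  # second pre-check
    return param_ok and digest_ok

content = b"frozen-model-bytes"
digest = hashlib.md5(content).hexdigest()  # "first digest", sent with the file
ok = validate_model_file(content, digest, key_param=0.7, legal_range=(0.0, 1.0))
bad = validate_model_file(content, digest, key_param=5.0, legal_range=(0.0, 1.0))
```

Only when both checks pass (`ok`) does the loading thread proceed; an out-of-range key parameter (`bad`) or a digest mismatch vetoes the load.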
Further, training the neural network model on the training samples to obtain the offline-trained target network, shown in Fig. 5, comprises:
S1011. Obtain training data.
Specifically, the training data can be obtained by collecting big data and filtering out the dirty data in it. In the embodiment, dirty data can be data missing key information or data generated by illegal user behavior.
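A minimal sketch of the dirty-data filtering described above, assuming hypothetical field names (`user_id`, `item_id`, `play_duration`); the patent does not specify which key information is required.

```python
REQUIRED_FIELDS = ("user_id", "item_id", "play_duration")  # assumed keys

def filter_dirty(records):
    """Drop records missing key information (one form of 'dirty data')."""
    return [r for r in records
            if all(r.get(f) is not None for f in REQUIRED_FIELDS)]

raw = [{"user_id": 1, "item_id": 9, "play_duration": 30},
       {"user_id": 2, "item_id": None, "play_duration": 12},  # missing key info
       {"user_id": 3, "item_id": 7}]                           # missing field
clean = filter_dirty(raw)
```

Filtering before sampling guarantees, as the text below requires, that every surviving sample carries complete single-feature information.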
S1013. Sample the training data according to the obtained neural network model.
Specifically, sampling includes drawing positive and negative samples from the training data such that the ratio of positive to negative samples is reasonable for training the neural network model; the sampling process includes both down-sampling and over-sampling.
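The down-sampling half of this step can be sketched as follows; the 4:1 negative-to-positive target ratio and the function name are illustrative assumptions, and over-sampling the minority class would be the symmetric operation.

```python
import random

def downsample_negatives(positives, negatives, neg_pos_ratio=4, seed=0):
    """Keep the negative:positive ratio reasonable for training by
    down-sampling the (usually far more numerous) negatives."""
    rng = random.Random(seed)
    target = len(positives) * neg_pos_ratio
    if len(negatives) > target:
        negatives = rng.sample(negatives, target)
    return positives, negatives

pos = [("user1", "video1", 1)] * 10    # 10 positive samples (clicks)
neg = [("user1", "video2", 0)] * 100   # 100 negative samples (no click)
pos_s, neg_s = downsample_negatives(pos, neg)
```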
S1015. Construct features from the information in the sampling result, and train the neural network model on the features; the feature-construction method is identical to the one used in the online system.
Specifically, the feature-construction method used in step S1015 is identical to the one used by the serving thread, which ensures that the offline-trained neural network can be applied to the online system quickly and accurately.
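The shared feature-construction idea can be sketched as one routine used verbatim both offline (S1015) and by the serving thread, so the feature formats cannot drift apart. The concrete features (age bucket, category cross) are invented purely for illustration.

```python
def build_features(user_info, item_info):
    """Single feature-construction routine shared by offline training
    and online serving, guaranteeing identical feature formats."""
    age_bucket = min(user_info["age"] // 10, 6)  # cap bucket at 6 (60+)
    return {
        "user_age_bucket": age_bucket,
        "item_category": item_info["category"],
        "cross_age_cat": f'{age_bucket}_{item_info["category"]}',
    }

offline = build_features({"age": 25}, {"category": "sports"})  # training side
online = build_features({"age": 25}, {"category": "sports"})   # serving side
```

Because both environments call the same function, a sample seen offline and a request seen online map to byte-identical feature dictionaries.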
The embodiment of the invention provides a neural network processing method applied in an online system that improves the offline training, freezing, loading, and online application stages of the neural network pipeline, guaranteeing consistency between the neural network used by the online system and the one obtained by offline training, reducing online development cost, shortening the online development cycle of offline-trained neural networks, and significantly improving online response speed.
Another embodiment of the invention provides a neural network processing apparatus applied in an online system. As shown in Fig. 6, the apparatus comprises:
an offline training module 201, configured to obtain a neural network model and training samples and to train the neural network model on the training samples to obtain a target network;
a freezing module 202, configured to obtain a model file corresponding to the target network by freezing the model graph, the model file containing the network structure and network parameters of the target network;
a running module 203, configured to load and run the model file in the online system, the online system providing the running environment for the model file.
The neural network processing apparatus applied in an online system disclosed by this embodiment and the method embodiment are based on the same inventive concept.
The neural network processing apparatus and method applied in an online system described in the embodiments can be used to provide diversified business services to users. As a preferred embodiment, the invention specifically provides a neural-network-based recommender system, the recommender system comprising:
an offline training module 301, configured to obtain a neural network model and training samples and to train the neural network model on the training samples to obtain a target network.
Specifically, the offline training module 301, as shown in Fig. 7, comprises:
a model selection unit 3011, for selecting the neural network model to be trained;
a training-data acquisition unit 3012, for obtaining training data. The unit 3012 can effectively pre-process the raw data to improve training accuracy: it arranges the user's exposure logs, play logs, video forward index, and portrait-feature logs, merges them by a common identifier, and removes the dirty data missing valid single features, guaranteeing that the valid single-feature information of every sample is free of missing values;
a sampling unit 3013, for sampling the training data according to the obtained neural network model. In the concrete sampling process, the sampling unit 3013 should keep positive and negative samples in a reasonable ratio, while filtering samples according to the requirements of the recommender system: for video recommendation, for example, samples whose play duration falls below a certain threshold are filtered out, and tiered-threshold positive/negative labelling schemes are adopted for different video durations and watch durations. In practice this must also be weighed against the chosen neural network model;
a first feature-construction unit 3014, for constructing features from the information in the sampling result;
a model training unit 3015, for training the neural network model on the features.
Specifically, as shown in Fig. 8, the training-data acquisition unit 3012, sampling unit 3013, and first feature-construction unit 3014 of the offline training module 301 can be implemented on a Spark offline computing cluster; Spark is a unified analytics engine for large-scale data processing and a cluster computing platform for fast general-purpose computation. The model selection unit 3011 and model training unit 3015 can be implemented on a TensorFlow cluster; TensorFlow is an open-source component that can be used for neural network processing. Fig. 9 shows the infrastructure of a TensorFlow cluster: it comprises a user-facing application logic layer supporting the Python and C++ languages; a Tensor API (application programming interface) layer that interfaces with the application logic layer; and, underneath, the Distributed Master (distribution server), Dataflow Executor, Network layer, and Device layer (hardware layer).
A freezing module 302, configured to obtain a model file corresponding to the target network by freezing the model graph, the model file containing the network structure and network parameters of the target network.
Specifically, the freezing module 302, as shown in Fig. 10, comprises:
a freezing unit 3021, for obtaining the model file; and a model push unit 3022, for pushing the model file to the running module 303. Specifically, the model push unit 3022 pushes the finally obtained binary model file to the running module 303 through a timed task at a preset frequency, daily or hourly, and at the same time generates a digest of the model-file content and transmits it along with the file, in preparation for the subsequent model verification.
The freezing unit 3021 and the model push unit 3022 can be implemented on the TensorFlow cluster.
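The push-with-digest step can be sketched as below; MD5 is an assumed digest algorithm, since the patent does not name one, and `package_for_push` is a hypothetical helper.

```python
import hashlib

def package_for_push(model_bytes):
    """Pair the binary model file with its digest so the online side can
    verify integrity before loading (see the verification flow above)."""
    return {"model": model_bytes,
            "digest": hashlib.md5(model_bytes).hexdigest()}

pkg = package_for_push(b"\x00\x01frozen-graph")
# Receiving side: recompute the digest and compare before loading.
received_ok = hashlib.md5(pkg["model"]).hexdigest() == pkg["digest"]
```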
A running module 303, configured to load and run the model file in an online component of the recommender system, the online component providing the running environment for the model file.
Specifically, the running module 303, as shown in Fig. 11, comprises:
a running-environment component 3031, for extracting the operators required by the neural network model and generating an operator library from the extraction result, and for extracting the network structure of the neural network model and constructing an application base library from the extraction result and the operator library;
an update module 3032, for verifying and loading new model files through the loading thread;
an operational module 3033, for providing recommendation services to users based on the model file.
Specifically, the operational module 3033, as shown in Fig. 12, comprises:
a candidate-pool unit 30331, for obtaining user-related information. Specifically, the concrete function of the candidate-pool unit 30331 depends on the specific requirements of the recommender system. For example, the unit can be applied in the recall or the ranking function of the recommender system: when applied in the recall function, it can prescreen part of the content items through the user's interest portrait; when applied in the ranking function, it holds the content items returned by the recall stage;
a second feature-construction unit 30332, for constructing features from the user-related information. Specifically, the second feature-construction unit 30332 is identical to the first feature-construction unit 3014. Taking recommendation with a DeepFM model as an example, the running module 303 receives a user request, obtains the user's related information, retrieves the video feeds relevant to the user's interests recalled in advance at request time, and pulls each feed's related features; combined with the user features, this yields feature formats consistent with those produced by the first feature-construction unit 3014;
an online prediction unit 30333, for providing recommendation services to users through the forward inference of the neural network. Taking the DeepFM network as an example, at recommendation time it predicts the click-through-rate score of each video feed recalled in advance for the user's interests, and sorts the feeds by score to obtain the TOP-N feeds as the final result.
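The TOP-N step can be sketched as follows, with a hypothetical score table standing in for the DeepFM forward pass.

```python
def top_n(candidates, predict_ctr, n=3):
    """Score each recalled feed with the model's predicted click-through
    rate and return the N highest-scoring feeds, as in the TOP-N step."""
    scored = [(predict_ctr(c), c) for c in candidates]
    scored.sort(key=lambda t: t[0], reverse=True)
    return [c for _, c in scored[:n]]

# Stand-in for the model's forward inference: fixed CTR scores (illustrative).
fake_ctr = {"feed_a": 0.12, "feed_b": 0.91, "feed_c": 0.55, "feed_d": 0.33}
result = top_n(list(fake_ctr), lambda c: fake_ctr[c], n=2)
```

With the scores above, the two highest-CTR feeds are returned in descending order of score.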
Fig. 13 shows a schematic flowchart of online recommendation implemented with the functional modules of the above recommender system. In the flow, the Spark cluster successively executes three steps: selecting a training model from the model library, sampling to obtain training data, and constructing features from the sample information. The TensorFlow cluster then performs model training from the model-selection result and the feature-construction result to obtain the model, followed by freezing and model pushing; the model-selection result is also pushed to the online recommender system for operator extension. In the online recommender system, automatic model updates are implemented through model verification and frozen-model loading; the library supporting model running is obtained by combining the operator-extension result with the static library; and online recommendation is realized by feeding the content of the candidate pool, together with the features constructed from the user information, into the model.
Fig. 14 shows the recommendation results of video recommendation performed for a user by a neural-network-based recommender system described in the embodiment of the invention.
The neural-network-based recommender system provided by the embodiment innovatively proposes a lightweight, general framework for applying offline neural-network training results in a personalized recommender system: a neural network trained offline can be applied conveniently and efficiently in an online recommender system; the scheme fuses well with existing recommender architectures; and it enables the online system to complete the model's forward inference while meeting low-latency requirements. Meanwhile, freezing guarantees the consistency of the network structure between the offline-trained model and the online-inference model, greatly reducing online development cost and shortening the online experimental development cycle of models. The technical scheme has been tested with multiple model types, satisfies recommender-system latency requirements, improves the overall efficiency of the recommender system, and has driven sizable improvements in core metrics in concrete businesses.
An embodiment of the present invention further provides a computer storage medium. The computer storage medium can store a plurality of instructions suitable for being loaded by a processor to execute the steps of the neural network processing method applied to an online system described in the embodiments of the present invention. For the specific execution process, refer to the description of the method embodiments, which is not repeated here.
Further, Figure 15 shows a schematic diagram of the hardware structure of a device for implementing the method provided by the embodiments of the present invention. The device may be a terminal, a mobile terminal, or a server, and may also participate in constituting the apparatus or recommender system provided by the embodiments of the present invention. As shown in Figure 15, the terminal 10 (or mobile device 10, or server 10) may include one or more processors 102 (shown in the figure as 102a, 102b, ..., 102n; a processor 102 may include, but is not limited to, a processing unit such as a microprocessor (MCU) or a programmable logic device such as an FPGA), a memory 104 for storing data, and a transmission device 106 for communication functions. In addition, it may also include: a display, an input/output interface (I/O interface), a universal serial bus (USB) port (which may be included as one of the ports of the I/O interface), a network interface, a power supply, and/or a camera. Those of ordinary skill in the art will appreciate that the structure shown in Figure 15 is only illustrative and does not limit the structure of the above electronic device. For example, the terminal 10 may include more or fewer components than shown in Figure 15, or may have a configuration different from that shown in Figure 15.
It should be noted that the one or more processors 102 and/or other data processing circuits above are generally referred to herein as "data processing circuits". A data processing circuit may be embodied, in whole or in part, as software, hardware, firmware, or any other combination. In addition, the data processing circuit may be a single independent processing module, or may be wholly or partially integrated into any of the other elements in the computer terminal 10 (or mobile device). As involved in the embodiments of the present application, the data processing circuit acts as a kind of processor control (for example, the selection of the variable-resistance terminal path connected to the interface).
The memory 104 may be used to store software programs and modules of application software, such as the program instructions/data storage device corresponding to the method described in the embodiments of the present invention. By running the software programs and modules stored in the memory 104, the processor 102 executes various functional applications and data processing, thereby implementing the above neural network processing method applied to an online system. The memory 104 may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 104 may further include memory located remotely from the processor 102, and these remote memories may be connected to the computer terminal 10 through a network. Examples of such networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The transmission device 106 is used to receive or send data via a network. Specific examples of the above network may include a wireless network provided by the communication provider of the terminal 10. In one example, the transmission device 106 includes a network interface controller (NIC), which can be connected to other network devices through a base station so as to communicate with the Internet. In another example, the transmission device 106 may be a radio frequency (RF) module, which is used to communicate with the Internet wirelessly.
The display may be, for example, a touch-screen liquid crystal display (LCD), which enables the user to interact with the user interface of the terminal 10 (or mobile device).
It should be understood that the ordering of the embodiments of the present invention is for description only and does not represent the relative merits of the embodiments. The specific embodiments of this specification have been described above; other embodiments are within the scope of the appended claims. In some cases, the actions or steps recited in the claims may be performed in an order different from that in the embodiments and still achieve the desired results. In addition, the processes depicted in the drawings do not necessarily require the particular order shown, or a sequential order, to achieve the desired results. In some embodiments, multitasking and parallel processing are also possible or may be advantageous.
The embodiments in this specification are described in a progressive manner; identical and similar parts of the embodiments may be referred to each other, and each embodiment focuses on its differences from the other embodiments. In particular, since the apparatus and server embodiments are substantially similar to the method embodiments, their description is relatively simple, and reference may be made to the relevant parts of the description of the method embodiments.
Those of ordinary skill in the art will appreciate that all or part of the steps of the above embodiments may be completed by hardware, or by a program instructing the relevant hardware. The program may be stored in a computer-readable storage medium, and the storage medium mentioned above may be a read-only memory, a magnetic disk, an optical disc, or the like.
The foregoing is merely a preferred embodiment of the present invention and is not intended to limit the present invention. Any modification, equivalent replacement, improvement, and the like made within the spirit and principles of the present invention shall be included in the protection scope of the present invention.
Claims (10)
1. A neural network processing method applied to an online system, wherein the method comprises:
obtaining a neural network model and training samples, and training the neural network model based on the training samples to obtain a target network;
obtaining a model file corresponding to the target network by freezing the model graph, the model file comprising the network structure and the network parameters of the target network;
loading and running the model file in the online system, the online system comprising a running environment for the model file.
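The "freezing" in this claim bundles the target network's structure and parameters into one model file that the online system can load whole. The claim does not prescribe a file format (in TensorFlow this role is played by graph-freezing utilities), so the following is a format-agnostic sketch; the JSON layout and function names are illustrative assumptions, not the patented implementation:

```python
import json
import os
import tempfile

def freeze_model(structure, parameters, path):
    """Bundle the network structure and trained parameters into a single
    model file, mirroring the frozen model file pushed to the online system."""
    with open(path, "w") as f:
        json.dump({"structure": structure, "parameters": parameters}, f)

def load_model(path):
    """Online side: recover both structure and parameters from the one file,
    so the served network cannot diverge from the trained one."""
    with open(path) as f:
        blob = json.load(f)
    return blob["structure"], blob["parameters"]

if __name__ == "__main__":
    path = os.path.join(tempfile.mkdtemp(), "model.json")
    freeze_model({"layers": ["fm", "deep"]}, {"w": [0.1, 0.2]}, path)
    print(load_model(path))
```

Because structure and parameters travel together in one artifact, the consistency between offline and online networks that the description emphasizes follows by construction.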
2. The method according to claim 1, further comprising a step of constructing the online system, wherein constructing the online system comprises:
extracting the operators required by the neural network model, and generating an operator library from the extraction result;
extracting the network structure of the neural network model, and constructing an application base library from the extraction result and the operator library.
3. The method according to claim 1, wherein loading and running the model file in the online system comprises:
providing service through a working thread based on the current model file of the online system, and loading a new model file through a loading thread; the working thread is used to obtain relevant information of a user, construct features from the relevant information, and provide service based on the features and the model file;
when the new model file has finished loading, the loading thread triggers the working thread to provide business service using the new model file.
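A minimal sketch of the two-thread scheme in this claim, treating models as plain callables; the class and method names are illustrative assumptions, not terms from the patent:

```python
import threading

class ModelServer:
    """The working thread serves requests with the current model, while a
    loading thread prepares the new model and triggers the switch only
    once loading has finished."""

    def __init__(self, model):
        self._model = model
        self._lock = threading.Lock()

    def serve(self, features):
        # Working thread: always answer with whatever model is current.
        with self._lock:
            model = self._model
        return model(features)

    def load_in_background(self, load_fn):
        # Loading thread: build the new model off to the side, then swap it in.
        def _load():
            new_model = load_fn()        # may take a while; serving continues
            with self._lock:
                self._model = new_model  # the "trigger" to use the new model
        t = threading.Thread(target=_load)
        t.start()
        return t
```

The design point is that serving is never blocked by loading: requests keep hitting the old model until the atomic swap at the end of the loading thread.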
4. The method according to claim 3, wherein loading a new model file through a loading thread comprises:
obtaining a first digest of the model file;
extracting a key parameter of the model file, and judging whether the value of the key parameter lies within a preset legal threshold to obtain a first pre-judgment result;
obtaining a second digest of the model file currently in the online system, and comparing the first digest with the second digest to obtain a second pre-judgment result;
judging, according to the first pre-judgment result and the second pre-judgment result, whether the model file is a model file to be loaded;
if so, loading the model file.
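The two pre-judgments of this claim can be sketched as below. The patent does not fix a digest algorithm or parameter names, so MD5 and every identifier here are assumptions for illustration only:

```python
import hashlib

def should_load(path, online_digest, key_param, legal_range):
    """Decide whether a candidate model file should be loaded:
    (1) its key parameter must lie within the preset legal threshold, and
    (2) its digest must differ from the model already serving online."""
    lo, hi = legal_range
    param_ok = lo <= key_param <= hi                      # first pre-judgment
    with open(path, "rb") as f:
        first_digest = hashlib.md5(f.read()).hexdigest()  # digest of new file
    digest_ok = first_digest != online_digest             # second pre-judgment
    return param_ok and digest_ok
```

The digest comparison avoids reloading a file identical to the one already online, while the threshold check rejects obviously corrupt or out-of-range models before they are ever served.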
5. The method according to claim 1, wherein training the neural network model based on the training samples to obtain a target network comprises:
obtaining training data;
sampling the training data according to the obtained neural network model;
extracting information from the sampling result to construct features, and training the neural network model based on the features, the feature construction method being identical to the feature construction method used in the online system.
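The requirement in this claim that offline and online feature construction be identical is most simply met by sharing one routine between the training job and the online system. A sketch, with field names that are purely illustrative assumptions:

```python
def build_features(raw):
    """One feature-construction routine imported by both the offline
    training job and the online system, so the formats cannot diverge."""
    return {
        "user_id": str(raw.get("user_id", "")),
        "age_bucket": min(int(raw.get("age", 0)) // 10, 9),  # decade bucket 0-9
        "recent_clicks": list(raw.get("clicks", []))[:5],    # cap history at 5
    }
```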
6. A neural network processing apparatus applied to an online system, wherein the apparatus comprises:
a discrete training module, configured to obtain a neural network model and training samples, and train the neural network model based on the training samples to obtain a target network;
a freezing module, configured to obtain a model file corresponding to the target network by freezing the model graph, the model file comprising the network structure and the network parameters of the target network;
a running module, configured to load and run the model file in the online system, the online system comprising a running environment for the model file.
7. A neural-network-based recommender system, wherein the system comprises:
a discrete training module, configured to obtain a neural network model and training samples, and train the neural network model based on the training samples to obtain a target network;
a freezing module, configured to obtain a model file corresponding to the target network by freezing the model graph, the model file comprising the network structure and the network parameters of the target network;
a running module, configured to load and run the model file in an online component of the recommender system, the online component of the recommender system comprising a running environment for the model file.
8. The system according to claim 7, wherein the discrete training module comprises:
a model selection unit, configured to select the neural network model used for training;
a training data obtaining unit, configured to obtain training data;
a sampling unit, configured to sample the training data according to the obtained neural network model;
a first feature construction unit, configured to construct features from information in the sampling result;
a model training unit, configured to train the neural network model based on the features.
9. The system according to claim 7, wherein:
the running module comprises:
a running environment component, configured to extract the operators required by the neural network model and generate an operator library from the extraction result, and to extract the network structure of the neural network model and construct an application base library from the extraction result and the operator library;
an update module, configured to verify and load a new model file through a loading thread;
an operation module, configured to provide recommendation service for users based on the model file;
the operation module comprising:
a candidate pool unit, configured to obtain user-related information;
a second feature construction unit, configured to construct features from the user-related information, the second feature construction unit being identical to the first feature construction unit;
an online prediction module, configured to provide recommendation service for users through the forward prediction of the neural network.
10. A computer-readable storage medium for storing a program, wherein when the program is executed, the neural network processing method applied to an online system according to any one of claims 1 to 5 is implemented.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910163118.9A CN109947985A (en) | 2019-03-05 | 2019-03-05 | Applied to the Processing with Neural Network method and device in on-line system |
Publications (1)
Publication Number | Publication Date |
---|---|
CN109947985A true CN109947985A (en) | 2019-06-28 |
Family
ID=67008460
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910163118.9A Pending CN109947985A (en) | 2019-03-05 | 2019-03-05 | Applied to the Processing with Neural Network method and device in on-line system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109947985A (en) |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110457589A (en) * | 2019-08-19 | 2019-11-15 | 上海新共赢信息科技有限公司 | A kind of vehicle recommended method, device, equipment and storage medium |
CN110619220A (en) * | 2019-08-09 | 2019-12-27 | 北京小米移动软件有限公司 | Method and device for encrypting neural network model and storage medium |
CN111753950A (en) * | 2020-01-19 | 2020-10-09 | 杭州海康威视数字技术股份有限公司 | Method, device and equipment for determining forward time consumption |
CN111970335A (en) * | 2020-07-30 | 2020-11-20 | 腾讯科技(深圳)有限公司 | Information recommendation method and device and storage medium |
CN113408634A (en) * | 2021-06-29 | 2021-09-17 | 深圳市商汤科技有限公司 | Model recommendation method and device, equipment and computer storage medium |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108229549A (en) * | 2017-12-28 | 2018-06-29 | 杭州大搜车汽车服务有限公司 | A kind of intelligent recognition car trader fits up method, electronic equipment and the storage medium of degree |
CN108281177A (en) * | 2017-11-20 | 2018-07-13 | 刘性祥 | A kind of Internet of Things intensive care monitoring system |
CN108665064A (en) * | 2017-03-31 | 2018-10-16 | 阿里巴巴集团控股有限公司 | Neural network model training, object recommendation method and device |
CN108984731A (en) * | 2018-07-12 | 2018-12-11 | 腾讯音乐娱乐科技(深圳)有限公司 | Sing single recommended method, device and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |