Paddle Python API demo [Basically Done] #1005
Conversation
* Extract NewGradientMachine, ParamUpdater, DataProvider.
Force-pushed from 23ea25b to 446fccf
* BasicTrainerDataProvider * BasicDataProviderOps * BasicGradientMachineTrainOps * Counter * BatchEvaluate * BasicTestDataProvider * TestOnPassEnd
Force-pushed the feature/jupyter_docker branch from 1c85a72 to 704ed1e
```python
def main():
    api.initPaddle("-use_gpu=false", "-trainer_count=4")  # use 4 CPU cores
```
I don't see where initPaddle is defined?
It's here: https://github.com/PaddlePaddle/Paddle/pull/1005/files#diff-ed4a9a57af56fa9b94fd891bdc87f629R101
It seems GitHub hides some large files, so this file isn't shown in the Files tab by default; you have to click "Load diff".
* Extract Network helpers from trainer * Remove always passed context param. * Adding comments
Force-pushed from 605b1aa to 3ceee61
Force-pushed from 2642791 to 2fdadf5
Force-pushed from 8e86c49 to 3b1d08b
Paddle Python API
Requirements:
* Offline training on a single machine
* Multi-objective training on a single machine
* Multi-network training
* OnlineLearning
Paddle's current Trainer logic:

```python
def train_logic(network_graph, optimize_settings):
    gradient_machine = create_gradient_machine(network_graph)
    parameter_updater = create_parameter_updater(optimize_settings)
    parameter_updater.init(gradient_machine.getParams())
    gradient_machine.start()
    for pass_id in range(num_passes):
        gradient_machine.start_pass()
        parameter_updater.start_pass()
        train_data.reset()
        for each_batch in train_data():
            gradient_machine.start_batch()
            parameter_updater.start_batch()
            gradient_machine.forward_backward(each_batch)
            for each_param in gradient_machine.parameters():
                parameter_updater.update(each_param)
            parameter_updater.finish_batch()
            gradient_machine.finish_batch()
        test_data.reset()
        parameter_updater.catch_up()
        for each_batch in test_data():
            gradient_machine.forward(each_batch)
            print gradient_machine.evaluate
        parameter_updater.finish_pass()
        gradient_machine.finish_pass()
    gradient_machine.finish()
```

As you can see, some operations appear in pairs.
For example, the operations on ParameterUpdater can be divided into paired begin/end stages (start_pass/finish_pass, start_batch/finish_batch). GradientMachine has the same kind of paired operations, and so do the test logic and the DataProvider. Moreover, these pieces of logic can be combined arbitrarily to form a particular kind of Trainer. For example, by default a test runs after every pass. What if we instead test once every ten passes? Then we just swap in a different TesterItem. Or run a test once every 100 training passes?

A possible abstraction: Runner + RunnerItem. This abstraction relies on component composition: it separates out the different operations that different objects perform at each stage of training, and then composes them arbitrarily to form new behavior. It is similar to middleware in Go or Node.js (koa).

The overall abstraction is shown in the figure above. We turn one object's operations into one onion ring (a RunnerItem); the whole onion is the Runner. The point of this abstraction is to split the complex, ever-changing training logic above into meaningful sub-items.

These onions can also be nested. For example, the training logic is one onion and the test logic is another, and testing may happen at any point during training; say the test period is once every 200 training batches. Then we write a RunnerItem for the training Runner that, in on_batch_end, invokes the whole test onion.

Likewise, by removing part of the training onion, it becomes an inference onion.
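The Runner/RunnerItem onion described above can be sketched as follows. This is a minimal illustrative sketch, not Paddle's actual API: the class names, the hook names (on_pass_begin, on_batch_end, etc.), and the Counter/PeriodicTest items are all assumptions made up for this example.

```python
class RunnerItem(object):
    """One onion ring: paired begin/end hooks around each training stage."""
    def on_pass_begin(self): pass
    def on_pass_end(self): pass
    def on_batch_begin(self): pass
    def on_batch_end(self): pass

class Runner(object):
    """The whole onion: begin hooks run in item order, end hooks in reverse,
    so the pairs nest like scopes."""
    def __init__(self, items):
        self.items = items

    def run_pass(self, num_batches):
        for item in self.items:
            item.on_pass_begin()
        for _ in range(num_batches):
            for item in self.items:
                item.on_batch_begin()
            # forward/backward/update would happen inside items' hooks
            for item in reversed(self.items):
                item.on_batch_end()
        for item in reversed(self.items):
            item.on_pass_end()

class Counter(RunnerItem):
    """Counts processed batches (stands in for real train/test work)."""
    def __init__(self):
        self.batches = 0
    def on_batch_end(self):
        self.batches += 1

class PeriodicTest(RunnerItem):
    """Nests one onion inside another: runs the test Runner every N batches."""
    def __init__(self, test_runner, every_n_batches):
        self.test_runner = test_runner
        self.every = every_n_batches
        self.seen = 0
    def on_batch_end(self):
        self.seen += 1
        if self.seen % self.every == 0:
            self.test_runner.run_pass(num_batches=1)

test_counter = Counter()
test_runner = Runner([test_counter])
train_counter = Counter()
train_runner = Runner([train_counter, PeriodicTest(test_runner, every_n_batches=200)])
train_runner.run_pass(num_batches=600)
print(train_counter.batches, test_counter.batches)  # 600 3
```

Swapping the test schedule then means replacing one item rather than rewriting the training loop, which is exactly the composability the onion picture is after.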
Blocking Issue #971
Add a Runner abstraction for the mnist demo.
Add a Jupyter demo for mnist.
See https://github.com/reyoung/mnist_notebook/blob/master/mnist.ipynb
Add comments