diff --git a/README.md b/README.md
index 8741dae66589..8cdf33422a56 100644
--- a/README.md
+++ b/README.md
@@ -235,7 +235,7 @@ CPU inference performance improvements using MKL
 New loss function: hierarchical softmax
 * Thanks @yaochengji for the contribution!
-Distributed Training with Mulitple Learners
+Distributed Training with Multiple Learners
 * Trainer now accepts multiple parameter learners for distributed training. With this change, different parameters of a network can be learned by different learners in a single training session. This also facilitates distributed training for GANs. For more information, please refer to the [Basic_GAN_Distributed.py](/Examples/Image/GAN/Basic_GAN_Distributed.py) and the [cntk.learners.distributed_multi_learner_test.py](/bindings/python/cntk/learners/tests/distributed_multi_learner_test.py)
 Operators