How to interpret the speed number in the logs of the SageMaker image classification example

Asked: 2018-10-02 23:52:38

Tags: mxnet amazon-sagemaker

I am working through the SageMaker example notebook for Caltech image classification: link. I followed the steps in the notebook, but changed the resource section to use ml.p3.16xlarge, which has 8 V100 GPUs, as shown below:

"ResourceConfig": {
    "InstanceCount": 1,
    "InstanceType": "ml.p3.16xlarge",
    "VolumeSizeInGB": 50
}
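For context, the block above is the `ResourceConfig` field of the request passed to the boto3 SageMaker client's `create_training_job` call, roughly as the notebook assembles it. A minimal sketch, where the job name, training image URI, and role ARN are placeholders, not values from the notebook:

```python
# Sketch of the create_training_job request that carries the
# ResourceConfig shown above. Names, URIs, and ARNs are placeholders.
training_params = {
    "TrainingJobName": "image-classification-p3-16x",          # placeholder
    "AlgorithmSpecification": {
        "TrainingImage": "<image-classification-image-uri>",   # placeholder
        "TrainingInputMode": "File",
    },
    "RoleArn": "<sagemaker-execution-role-arn>",               # placeholder
    "ResourceConfig": {
        "InstanceCount": 1,
        "InstanceType": "ml.p3.16xlarge",  # 8x V100 GPUs
        "VolumeSizeInGB": 50,
    },
    "StoppingCondition": {"MaxRuntimeInSeconds": 360000},
}

# With valid values filled in, the job would be launched with:
# import boto3
# sagemaker = boto3.client("sagemaker")
# sagemaker.create_training_job(**training_params)
```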

Checking the log file after training, I found the speed was only about 895 images/s, which is very similar to what I saw with a single GPU (p3.2xlarge). My guess is that the reported speed applies to a single GPU only, and the actual aggregate speed with 8 GPUs should be 895 * 8 = 7160. Can anyone confirm this? Or am I wrong?

Please see the full log below:

Docker entrypoint called with argument(s): train
[10/02/2018 21:40:21 INFO 139764860892992] Reading default configuration from /opt/amazon/lib/python2.7/site-packages/image_classification/default-input.json: {u'beta_1': 0.9, u'gamma': 0.9, u'beta_2': 0.999, u'optimizer': u'sgd', u'use_pretrained_model': 0, u'eps': 1e-08, u'epochs': 30, u'lr_scheduler_factor': 0.1, u'num_layers': 152, u'image_shape': u'3,224,224', u'precision_dtype': u'float32', u'mini_batch_size': 32, u'weight_decay': 0.0001, u'learning_rate': 0.1, u'momentum': 0}
[10/02/2018 21:40:21 INFO 139764860892992] Reading provided configuration from /opt/ml/input/config/hyperparameters.json: {u'learning_rate': u'0.01', u'use_pretrained_model': u'1', u'epochs': u'2', u'num_training_samples': u'15420', u'num_layers': u'18', u'mini_batch_size': u'512', u'image_shape': u'3,224,224', u'num_classes': u'257'}
[10/02/2018 21:40:21 INFO 139764860892992] Final configuration: {u'optimizer': u'sgd', u'learning_rate': u'0.01', u'epochs': u'2', u'lr_scheduler_factor': 0.1, u'num_layers': u'18', u'precision_dtype': u'float32', u'mini_batch_size': u'512', u'num_classes': u'257', u'beta_1': 0.9, u'beta_2': 0.999, u'use_pretrained_model': u'1', u'eps': 1e-08, u'weight_decay': 0.0001, u'momentum': 0, u'image_shape': u'3,224,224', u'gamma': 0.9, u'num_training_samples': u'15420'}
[10/02/2018 21:40:21 INFO 139764860892992] Using pretrained model for initalizing weights
[10/02/2018 21:40:21 INFO 139764860892992] ---- Parameters ----
[10/02/2018 21:40:21 INFO 139764860892992] num_layers: 18
[10/02/2018 21:40:21 INFO 139764860892992] data type: <type 'numpy.float32'>
[10/02/2018 21:40:21 INFO 139764860892992] epochs: 2
[10/02/2018 21:40:21 INFO 139764860892992] optimizer: sgd
[10/02/2018 21:40:21 INFO 139764860892992] momentum: 0.900000
[10/02/2018 21:40:21 INFO 139764860892992] weight_decay: 0.000100
[10/02/2018 21:40:21 INFO 139764860892992] learning_rate: 0.010000
[10/02/2018 21:40:21 INFO 139764860892992] lr_scheduler_step defined without lr_scheduler_factor, will be ignored...
[10/02/2018 21:40:21 INFO 139764860892992] mini_batch_size: 512
[10/02/2018 21:40:21 INFO 139764860892992] image_shape: 3,224,224
[10/02/2018 21:40:21 INFO 139764860892992] num_classes: 257
[10/02/2018 21:40:21 INFO 139764860892992] num_training_samples: 15420
[10/02/2018 21:40:21 INFO 139764860892992] augmentation_type: None
[10/02/2018 21:40:21 INFO 139764860892992] kv_store: device
[10/02/2018 21:40:21 INFO 139764860892992] checkpoint_frequency: 2
[10/02/2018 21:40:21 INFO 139764860892992] multi_label: 0
[10/02/2018 21:40:21 INFO 139764860892992] --------------------
[21:40:21] /opt/brazil-pkg-cache/packages/AIAlgorithmsMXNet/AIAlgorithmsMXNet-1.2.x.288.0/RHEL5_64/generic-flavor/src/src/nnvm/legacy_json_util.cc:209: Loading symbol saved by previous version v0.8.0. Attempting to upgrade...
[21:40:21] /opt/brazil-pkg-cache/packages/AIAlgorithmsMXNet/AIAlgorithmsMXNet-1.2.x.288.0/RHEL5_64/generic-flavor/src/src/nnvm/legacy_json_util.cc:217: Symbol successfully upgraded!
[10/02/2018 21:40:21 INFO 139764860892992] Setting number of threads: 63
[21:41:02] /opt/brazil-pkg-cache/packages/AIAlgorithmsMXNet/AIAlgorithmsMXNet-1.2.x.288.0/RHEL5_64/generic-flavor/src/src/kvstore/././comm.h:634: only 32 out of 56 GPU pairs are enabled direct access. It may affect the performance. You can set MXNET_ENABLE_GPU_P2P=0 to turn it off
[21:41:02] /opt/brazil-pkg-cache/packages/AIAlgorithmsMXNet/AIAlgorithmsMXNet-1.2.x.288.0/RHEL5_64/generic-flavor/src/src/kvstore/././comm.h:643: .vvvv...
[21:41:02] /opt/brazil-pkg-cache/packages/AIAlgorithmsMXNet/AIAlgorithmsMXNet-1.2.x.288.0/RHEL5_64/generic-flavor/src/src/kvstore/././comm.h:643: v.vv.v..
[21:41:02] /opt/brazil-pkg-cache/packages/AIAlgorithmsMXNet/AIAlgorithmsMXNet-1.2.x.288.0/RHEL5_64/generic-flavor/src/src/kvstore/././comm.h:643: vv.v..v.
[21:41:02] /opt/brazil-pkg-cache/packages/AIAlgorithmsMXNet/AIAlgorithmsMXNet-1.2.x.288.0/RHEL5_64/generic-flavor/src/src/kvstore/././comm.h:643: vvv....v
[21:41:02] /opt/brazil-pkg-cache/packages/AIAlgorithmsMXNet/AIAlgorithmsMXNet-1.2.x.288.0/RHEL5_64/generic-flavor/src/src/kvstore/././comm.h:643: v....vvv
[21:41:02] /opt/brazil-pkg-cache/packages/AIAlgorithmsMXNet/AIAlgorithmsMXNet-1.2.x.288.0/RHEL5_64/generic-flavor/src/src/kvstore/././comm.h:643: .v..v.vv
[21:41:02] /opt/brazil-pkg-cache/packages/AIAlgorithmsMXNet/AIAlgorithmsMXNet-1.2.x.288.0/RHEL5_64/generic-flavor/src/src/kvstore/././comm.h:643: ..v.vv.v
[21:41:02] /opt/brazil-pkg-cache/packages/AIAlgorithmsMXNet/AIAlgorithmsMXNet-1.2.x.288.0/RHEL5_64/generic-flavor/src/src/kvstore/././comm.h:643: ...vvvv.
[21:41:03] /opt/brazil-pkg-cache/packages/AIAlgorithmsMXNet/AIAlgorithmsMXNet-1.2.x.288.0/RHEL5_64/generic-flavor/src/src/operator/nn/./cudnn/./cudnn_algoreg-inl.h:107: Running performance tests to find the best convolution algorithm, this can take a while... (setting env variable MXNET_CUDNN_AUTOTUNE_DEFAULT to 0 to disable)
[10/02/2018 21:41:18 INFO 139764860892992] Epoch[0] Batch [20]#011Speed: 903.34 samples/sec#011accuracy=0.020554
[10/02/2018 21:41:23 INFO 139764860892992] Epoch[0] Train-accuracy=0.055990
[10/02/2018 21:41:23 INFO 139764860892992] Epoch[0] Time cost=21.168
[10/02/2018 21:41:30 INFO 139764860892992] Epoch[0] Validation-accuracy=0.257747
[10/02/2018 21:41:42 INFO 139764860892992] Epoch[1] Batch [20]#011Speed: 895.73 samples/sec#011accuracy=0.393694
[10/02/2018 21:41:47 INFO 139764860892992] Epoch[1] Train-accuracy=0.439128
[10/02/2018 21:41:47 INFO 139764860892992] Epoch[1] Time cost=17.307
[10/02/2018 21:41:48 INFO 139764860892992] Saved checkpoint to "/opt/ml/model/image-classification-0002.params"
[10/02/2018 21:41:53 INFO 139764860892992] Epoch[1] Validation-accuracy=0.561719
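The log itself can settle whether the speed figure covers all eight GPUs or just one: with 15420 training samples, a throughput of ~895 samples/sec predicts an epoch time of about 17 s, which matches the logged `Epoch[1] Time cost=17.307`. If 895 samples/sec were per-GPU (i.e. 7160 aggregate), the epoch would finish in roughly 2 s. A quick check using only numbers from the log above:

```python
# Sanity-check the reported throughput against the logged epoch time.
num_training_samples = 15420     # num_training_samples from the log
speed = 895.73                   # samples/sec, Epoch[1] Batch [20]
logged_epoch_time = 17.307       # seconds, Epoch[1] Time cost

# Predicted epoch time if the speed is the aggregate over all 8 GPUs.
predicted_if_aggregate = num_training_samples / speed          # ~17.2 s
# Predicted epoch time if the speed were per-GPU (8x aggregate).
predicted_if_per_gpu = num_training_samples / (speed * 8)      # ~2.2 s

print(round(predicted_if_aggregate, 1))  # → 17.2, close to the logged 17.307
print(round(predicted_if_per_gpu, 1))    # → 2.2, far from the logged value
```

The aggregate interpretation is the one consistent with the log.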

1 Answer:

Answer 0 (score: 1)

The speed number reflects the combined throughput of all GPUs. Training speed depends on the batch size as well as the network size. A p3.16xlarge has 8 times the batch capacity of a p3.2xlarge, so it would help to increase the batch size accordingly and see whether the speed improves.
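One way to act on this advice is to scale the global `mini_batch_size` with the number of GPUs while keeping the per-GPU batch constant, and (by the common linear-scaling heuristic, not something the notebook prescribes) scale the learning rate along with it. The per-GPU batch of 64 below is illustrative, chosen so that 8 GPUs reproduce the question's batch size of 512:

```python
# Illustrative batch-size scaling when moving from 1 GPU to 8 GPUs.
per_gpu_batch = 64   # assumed per-GPU batch that fits on one V100
base_lr = 0.01       # learning rate used in the question's run

def scaled_hyperparams(num_gpus):
    """Keep the per-GPU batch fixed; scale the global batch size and,
    per the linear-scaling heuristic, the learning rate with GPU count."""
    return {
        "mini_batch_size": per_gpu_batch * num_gpus,
        "learning_rate": base_lr * num_gpus,
    }

print(scaled_hyperparams(1))  # {'mini_batch_size': 64, 'learning_rate': 0.01}
print(scaled_hyperparams(8))  # {'mini_batch_size': 512, 'learning_rate': 0.08}
```

Whether linear learning-rate scaling helps here should be validated against the validation accuracy; it is a rule of thumb, not a guarantee.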