Overall Satisfaction with Amazon SageMaker
We use SageMaker in the engineering and data science departments to host Jupyter notebooks, periodically retrain models, and serve models in production. Data scientists work in Jupyter notebooks on SageMaker notebook instances rather than on their local machines. We typically load our model artifacts into AWS-provided inference containers and rely on SageMaker for a managed, auto-scaling HTTPS endpoint.
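To illustrate what that managed endpoint looks like from client code, the sketch below serializes a feature row into the `text/csv` body most AWS-provided containers accept; the endpoint name and feature values in the comment are placeholders, not our real resources.

```python
import csv
import io

def csv_payload(rows):
    """Serialize feature rows into the text/csv request body that
    AWS-provided containers (e.g. XGBoost, Linear Learner) expect."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerows(rows)
    return buf.getvalue().encode("utf-8")

# With boto3 installed and credentials configured, the payload goes to a
# deployed endpoint like this (endpoint name is a placeholder):
#
#   import boto3
#   runtime = boto3.client("sagemaker-runtime")
#   response = runtime.invoke_endpoint(
#       EndpointName="churn-model-prod",
#       ContentType="text/csv",
#       Body=csv_payload([[42.0, 0, 1, 3.5]]),
#   )
#   prediction = response["Body"].read().decode("utf-8")
```

Keeping the serialization in a small pure function makes it easy to unit-test without touching AWS.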
- SageMaker is a useful managed Jupyter notebook server. Granting access to private S3 buckets and other AWS resources through the notebook instances' IAM roles works well, as does injecting connection strings and other secrets with SageMaker's lifecycle configuration scripts and AWS Secrets Manager.
- SageMaker is good at serving models. The invocation interface it provides is clunky, but a managed, auto-scaling model server is powerful.
- SageMaker is opinionated about versioning machine learning models and useful if you agree with its opinions.
- SageMaker does not allow you to schedule training jobs.
- SageMaker does not provide an easy way to track metrics logged during training.
- We often fit feature extraction and model pipelines. We can inject the model artifacts into AWS-provided containers, but we cannot inject the feature extractors. We could provide our own container to SageMaker instead, but this is tantamount to serving the model ourselves.
- We have been able to deliver data products more rapidly because we spend less time building data pipelines and model servers.
- We can prototype more rapidly because it is easy to configure notebooks to access AWS resources.
- For our use cases, serving models is less expensive with SageMaker than with bespoke servers.
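The secret-injection pattern mentioned above can be sketched as a small script run from a lifecycle configuration's on-start hook. The helper below renders a Secrets Manager JSON secret into shell `export` lines; the secret name and file path in the comment are placeholders for your own setup.

```python
import json

def secret_to_env_lines(secret_json):
    """Render a Secrets Manager SecretString (a flat JSON object) into
    export lines suitable for a profile script on a notebook instance,
    so every kernel and terminal sees the connection strings."""
    pairs = json.loads(secret_json)
    return [f'export {key}="{value}"' for key, value in sorted(pairs.items())]

# In the lifecycle configuration's on-start script, the secret is fetched
# using the notebook instance's IAM role (secret name is a placeholder):
#
#   import boto3
#   sm = boto3.client("secretsmanager")
#   secret = sm.get_secret_value(SecretId="notebooks/warehouse")["SecretString"]
#   with open("/etc/profile.d/warehouse.sh", "w") as fh:
#       fh.write("\n".join(secret_to_env_lines(secret)) + "\n")
```

Because the instance's IAM role authorizes the `get_secret_value` call, no credentials ever need to be written into the notebook itself.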
SageMaker is great for hosting Jupyter notebooks, particularly if you already use other AWS products such as S3. Its model retraining function is useful once you write a few Lambda functions to invoke training jobs on a schedule. Its model serving function is useful if your team has limited resources and is willing to submit to SageMaker's opinions.
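The Lambda-based retraining trigger can be sketched as follows: a pure function builds the `create_training_job` request, and a scheduled handler submits it. Every name, ARN, image URI, and instance type here is a placeholder assumption, not our actual configuration.

```python
import time

def training_job_params(job_name_prefix, role_arn, image_uri, s3_input, s3_output):
    """Build the request dict for sagemaker.create_training_job.
    All arguments are placeholders for your own resources."""
    # Training job names must be unique, so suffix with a timestamp.
    job_name = f"{job_name_prefix}-{int(time.time())}"
    return {
        "TrainingJobName": job_name,
        "RoleArn": role_arn,
        "AlgorithmSpecification": {
            "TrainingImage": image_uri,
            "TrainingInputMode": "File",
        },
        "InputDataConfig": [{
            "ChannelName": "train",
            "DataSource": {"S3DataSource": {
                "S3DataType": "S3Prefix",
                "S3Uri": s3_input,
            }},
        }],
        "OutputDataConfig": {"S3OutputPath": s3_output},
        "ResourceConfig": {
            "InstanceType": "ml.m5.xlarge",
            "InstanceCount": 1,
            "VolumeSizeInGB": 30,
        },
        "StoppingCondition": {"MaxRuntimeInSeconds": 3600},
    }

# Lambda handler wired to an EventBridge (CloudWatch Events) schedule;
# the account ID, role, image, and bucket below are placeholders:
#
#   import boto3
#   def handler(event, context):
#       sagemaker = boto3.client("sagemaker")
#       sagemaker.create_training_job(**training_job_params(
#           "churn-retrain",
#           "arn:aws:iam::111122223333:role/SageMakerExecutionRole",
#           "111122223333.dkr.ecr.us-east-1.amazonaws.com/my-training-image:latest",
#           "s3://my-bucket/train/",
#           "s3://my-bucket/models/",
#       ))
```

Separating request construction from submission keeps the scheduling glue small and lets the parameters be tested without an AWS account.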