As AWS looks to democratise machine learning with Amazon SageMaker, we delve into what is going on under the covers and how it stands out in an increasingly crowded market

Can AWS SageMaker democratise machine learning in the enterprise?

SageMaker is essentially a platform for authoring, training and deploying machine learning algorithms to business applications without much of the manual heavy lifting generally involved, such as provisioning infrastructure and managing and tuning models during training.

As Randall Hunt, senior technical evangelist at AWS wrote in a blog post: “Amazon SageMaker is a fully managed end-to-end machine learning service that enables data scientists, developers, and machine learning experts to quickly build, train, and host machine learning models at scale.

“This drastically accelerates all of your machine learning efforts and allows you to add machine learning to your production applications quickly.”

How does it work?

Under the covers this means hosted Jupyter notebook integrated development environments (IDEs) for data exploration, cleaning, and preprocessing.

Then there is a distributed model building, training, and validation service where users can pick an AWS algorithm off the shelf, import a popular framework like TensorFlow or write and deploy their own algorithm with Docker containers, directly within SageMaker.

For training, you simply specify an S3 location for your data and the instance type you want to use, and in one click SageMaker spins up an isolated cluster and software-defined network with autoscaling and data pipelines to start training. When the job is done, it tears the cluster down.
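As a rough sketch, that one-click training request boils down to an S3 location plus an instance type. The helper below builds such a request as a plain dictionary; the parameter names mirror the SageMaker `create_training_job` API, while the bucket paths, role ARN and image URI in the usage example are placeholders:

```python
# Sketch of a SageMaker training-job request (parameter names follow the
# create_training_job API; ARNs, S3 paths and the image URI are placeholders).
def build_training_job(job_name, image_uri, role_arn, s3_input, s3_output):
    return {
        "TrainingJobName": job_name,
        "AlgorithmSpecification": {
            "TrainingImage": image_uri,   # off-the-shelf algorithm or your own Docker image
            "TrainingInputMode": "File",
        },
        "RoleArn": role_arn,
        "InputDataConfig": [{
            "ChannelName": "train",
            "DataSource": {"S3DataSource": {
                "S3DataType": "S3Prefix",
                "S3Uri": s3_input,        # the S3 location you specify
            }},
        }],
        "OutputDataConfig": {"S3OutputPath": s3_output},
        "ResourceConfig": {               # the instance you want to use
            "InstanceType": "ml.p3.2xlarge",
            "InstanceCount": 1,
            "VolumeSizeInGB": 50,
        },
        "StoppingCondition": {"MaxRuntimeInSeconds": 3600},
    }

config = build_training_job(
    "demo-job",
    "123456789012.dkr.ecr.us-east-1.amazonaws.com/my-algo:latest",
    "arn:aws:iam::123456789012:role/SageMakerRole",
    "s3://my-bucket/train/",
    "s3://my-bucket/output/",
)
```

In practice this dictionary would be passed to the SageMaker API, which provisions the cluster, runs the job and writes the trained model artefacts back to the output path.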

Models are hosted behind HTTPS endpoints, which scale automatically to match traffic and allow you to A/B test multiple models simultaneously. A model can be deployed straight into production on EC2 instances with one click, after which it runs with autoscaling across availability zones.
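The A/B testing works by putting several model variants behind one endpoint and weighting traffic between them. A minimal sketch, with parameter names following SageMaker's `create_endpoint_config` API and placeholder model names:

```python
# Sketch of an endpoint config hosting two model variants for A/B testing.
# Traffic is split in proportion to each variant's InitialVariantWeight.
def build_endpoint_config(name, variants):
    return {
        "EndpointConfigName": name,
        "ProductionVariants": [{
            "VariantName": variant_name,
            "ModelName": model_name,
            "InstanceType": "ml.m5.large",
            "InitialInstanceCount": 1,
            "InitialVariantWeight": weight,
        } for variant_name, model_name, weight in variants],
    }

def traffic_share(config):
    """Fraction of requests each variant receives (its weight over the total)."""
    prod_variants = config["ProductionVariants"]
    total = sum(v["InitialVariantWeight"] for v in prod_variants)
    return {v["VariantName"]: v["InitialVariantWeight"] / total
            for v in prod_variants}

cfg = build_endpoint_config("demo", [("A", "model-a", 9.0), ("B", "model-b", 1.0)])
# With weights 9.0 and 1.0, variant A serves 90% of traffic and B serves 10%.
```

Shifting weight gradually from one variant to the other is a common way to canary-test a new model before cutting over completely.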

Tuning models is traditionally a trial-and-error exercise, but SageMaker comes with what AWS calls ‘hyperparameter optimisation’ (HPO). By simply checking a box, SageMaker spins up multiple copies of the training job and uses machine learning to evaluate each change in parallel and tune the parameters accordingly.
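Behind that checkbox sits a tuning-job configuration: an objective metric to optimise, ranges for each hyperparameter, and limits on how many training copies run in parallel. The sketch below uses parameter names from SageMaker's `create_hyper_parameter_tuning_job` API; the metric name and the example ranges are illustrative placeholders:

```python
# Sketch of a hyperparameter tuning job config. SageMaker's Bayesian strategy
# is the "machine learning to tune machine learning" the article describes:
# results from finished jobs inform which parameter values to try next.
def build_tuning_config(max_jobs, max_parallel):
    return {
        "Strategy": "Bayesian",
        "HyperParameterTuningJobObjective": {
            "Type": "Minimize",
            "MetricName": "validation:error",   # placeholder metric name
        },
        "ResourceLimits": {
            "MaxNumberOfTrainingJobs": max_jobs,
            "MaxParallelTrainingJobs": max_parallel,  # copies trained in parallel
        },
        "ParameterRanges": {
            # Min/max values are strings in this API.
            "ContinuousParameterRanges": [
                {"Name": "learning_rate", "MinValue": "0.0001", "MaxValue": "0.1"},
            ],
            "IntegerParameterRanges": [
                {"Name": "num_layers", "MinValue": "2", "MaxValue": "10"},
            ],
        },
    }

cfg = build_tuning_config(max_jobs=20, max_parallel=4)
```

Here 20 training jobs would run in total, at most 4 at a time, with each batch of results steering the search over learning rate and layer count.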

Democratising machine learning

The key message from AWS CEO Andy Jassy is democratising machine learning and AI. “If you want to enable most enterprises and companies to be able to use machine learning in an expansive way, we have to solve the problem of accessibility of everyday developers and scientists,” he said during his re:Invent keynote.

As a result, SageMaker will be fairly model agnostic, supporting all popular frameworks from TensorFlow and Caffe2 to AWS’ own Gluon library.

Jassy said that Google’s popular machine learning framework TensorFlow already runs on AWS more than anywhere else, which will no doubt annoy the people at Google Cloud Platform. However, the general principle, he said, is that “we provide all major solutions so you have the tools you need for the right job.”
