    Amazon SageMaker Studio helps data scientists prepare, build, train, deploy, and monitor machine learning (ML) models quickly. It does this by bringing together a broad set of capabilities purpose-built for ML. This course prepares experienced data scientists to use the tools that are a part of SageMaker Studio, including Amazon CodeWhisperer and Amazon CodeGuru Security scan extensions, to improve productivity at every step of the ML lifecycle.

    Amazon SageMaker Studio for Data Scientists (ASSDS) Objectives

    In this course, you will learn to:

    • Accelerate the process to prepare, build, train, deploy, and monitor ML solutions using Amazon SageMaker Studio


    Key Points of the Training Program

    • Amazon SageMaker Studio for Data Scientists (ASSDS) Prerequisites

      Who should attend
      Experienced data scientists who are proficient in ML and deep learning fundamentals.

      Prerequisites
      We recommend that all attendees of this course have:

      Experience using ML frameworks
      Python programming experience
      At least 1 year of experience as a data scientist responsible for training, tuning, and deploying models
      Completion of AWS Technical Essentials training (digital or classroom)

    • Amazon SageMaker Studio for Data Scientists (ASSDS) Delivery Format

      In-Person

      Online

    • Amazon SageMaker Studio for Data Scientists (ASSDS) Outline

      Day 1

      Module 1: Introduction to MLOps
      Processes
      People
      Technology
      Security and governance
      MLOps maturity model
      Module 2: Initial MLOps: Experimentation Environments in SageMaker Studio
      Bringing MLOps to experimentation
      Setting up the ML experimentation environment
      Demonstration: Creating and Updating a Lifecycle Configuration for SageMaker Studio
      Hands-On Lab: Provisioning a SageMaker Studio Environment with the AWS Service Catalog
      Workbook: Initial MLOps
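As a taste of the lifecycle-configuration demonstration above: a Studio lifecycle configuration is just a shell script, base64-encoded and registered through the SageMaker API. The sketch below builds a hypothetical on-start script and shows (as a comment, since it needs AWS credentials) the real `create_studio_lifecycle_config` call that would register it.

```python
import base64

# Hypothetical on-start script for a SageMaker Studio lifecycle configuration:
# pin a team-standard set of libraries in every new kernel.
on_start_script = """#!/bin/bash
set -eux
pip install --quiet "pandas==2.1.4" "scikit-learn==1.4.0"
"""

# The API expects the script content base64-encoded.
encoded = base64.b64encode(on_start_script.encode("utf-8")).decode("utf-8")

# With boto3 (requires AWS credentials), registration would look like:
# import boto3
# sm = boto3.client("sagemaker")
# sm.create_studio_lifecycle_config(
#     StudioLifecycleConfigName="team-on-start",       # hypothetical name
#     StudioLifecycleConfigContent=encoded,
#     StudioLifecycleConfigAppType="KernelGateway",
# )

print(encoded[:32])
```

The course lab provisions the surrounding Studio environment with the AWS Service Catalog; the lifecycle configuration is what customizes each user's session once it starts.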
      Module 3: Repeatable MLOps: Repositories
      Managing data for MLOps
      Version control of ML models
      Code repositories in ML
      Module 4: Repeatable MLOps: Orchestration
      ML pipelines
      Demonstration: Using SageMaker Pipelines to Orchestrate Model Building Pipelines
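To give a feel for what the SageMaker Pipelines demonstration produces: a pipeline is ultimately a JSON definition of named, dependency-ordered steps. In practice the SageMaker Python SDK (`sagemaker.workflow`) generates this document for you; the hand-assembled sketch below, with hypothetical step names and placeholder image URIs, only illustrates its general shape.

```python
import json

# Simplified, hand-assembled sketch of a pipeline definition: a processing
# step feeding a training step. Step names and arguments are hypothetical.
definition = {
    "Version": "2020-12-01",
    "Steps": [
        {
            "Name": "PreprocessData",
            "Type": "Processing",
            "Arguments": {"AppSpecification": {"ImageUri": "<processing-image>"}},
        },
        {
            "Name": "TrainModel",
            "Type": "Training",
            "DependsOn": ["PreprocessData"],
            "Arguments": {"AlgorithmSpecification": {"TrainingImage": "<training-image>"}},
        },
    ],
}

print(json.dumps(definition, indent=2))
```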
      Day 2

      Module 4: Repeatable MLOps: Orchestration (continued)
      End-to-end orchestration with AWS Step Functions
      Hands-On Lab: Automating a Workflow with Step Functions
      End-to-end orchestration with SageMaker Projects
      Demonstration: Standardizing an End-to-End ML Pipeline with SageMaker Projects
      Using third-party tools for repeatability
      Demonstration: Exploring Human-in-the-Loop During Inference
      Governance and security
      Demonstration: Exploring Security Best Practices for SageMaker
      Workbook: Repeatable MLOps
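The end-to-end orchestration topics above center on AWS Step Functions, where workflows are written in the Amazon States Language (ASL). As a minimal sketch of the train-then-deploy pattern covered in the lab, the state machine below chains a training task to a model-creation task with a failure catch; the state names and input paths are hypothetical, while the `arn:aws:states:::sagemaker:...` service-integration resources are real.

```python
import json

# Minimal ASL sketch: train a model, then register it, with a catch-all
# failure state. State names and JSONPath inputs are hypothetical.
state_machine = {
    "Comment": "Train a model, then create it for deployment",
    "StartAt": "TrainModel",
    "States": {
        "TrainModel": {
            "Type": "Task",
            "Resource": "arn:aws:states:::sagemaker:createTrainingJob.sync",
            "Parameters": {"TrainingJobName.$": "$.job_name"},
            "Catch": [{"ErrorEquals": ["States.ALL"], "Next": "TrainingFailed"}],
            "Next": "CreateModel",
        },
        "CreateModel": {
            "Type": "Task",
            "Resource": "arn:aws:states:::sagemaker:createModel",
            "Parameters": {"ModelName.$": "$.model_name"},
            "End": True,
        },
        "TrainingFailed": {"Type": "Fail", "Error": "TrainingError"},
    },
}

print(json.dumps(state_machine)[:60])
```

The `.sync` suffix on the training integration makes Step Functions wait for the job to finish before moving on, which is what makes the pipeline genuinely sequential.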
      Module 5: Reliable MLOps: Scaling and Testing
      Scaling and multi-account strategies
      Testing and traffic-shifting
      Demonstration: Using SageMaker Inference Recommender
      Hands-On Lab: Testing Model Variants
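The traffic-shifting topics above come down to adjusting per-variant weights on a multi-variant endpoint. The helper below (variant names are hypothetical) computes the weight pairs for a staged canary rollout, and the commented call shows the real `update_endpoint_weights_and_capacities` API that would apply them; running it for real requires AWS credentials.

```python
def variant_weights(canary_fraction: float) -> list:
    """Return DesiredWeight entries for a two-variant endpoint, sending
    `canary_fraction` of traffic to the new variant. Variant names here
    are hypothetical."""
    if not 0.0 <= canary_fraction <= 1.0:
        raise ValueError("canary_fraction must be in [0, 1]")
    return [
        {"VariantName": "variant-current", "DesiredWeight": 1.0 - canary_fraction},
        {"VariantName": "variant-new", "DesiredWeight": canary_fraction},
    ]

# Gradually shift traffic in steps: 10% -> 50% -> 100%.
for fraction in (0.1, 0.5, 1.0):
    weights = variant_weights(fraction)
    # With boto3 (requires AWS credentials):
    # boto3.client("sagemaker").update_endpoint_weights_and_capacities(
    #     EndpointName="my-endpoint",  # hypothetical endpoint name
    #     DesiredWeightsAndCapacities=weights,
    # )
    print(weights)
```

Pausing between steps to compare variant metrics (as the labs do) is what turns a weight change into an actual canary test.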
      Day 3

      Module 5: Reliable MLOps: Scaling and Testing (continued)
      Hands-On Lab: Shifting Traffic
      Workbook: Multi-Account Strategies
      Module 6: Reliable MLOps: Monitoring
      The importance of monitoring in ML
      Hands-On Lab: Monitoring a Model for Data Drift
      Operations considerations for model monitoring
      Remediating problems identified by monitoring ML solutions
      Workbook: Reliable MLOps
      Hands-On Lab: Building and Troubleshooting an ML Pipeline
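To illustrate the kind of comparison behind the data-drift lab above: drift monitoring contrasts the distribution of live inference data against a training-time baseline. The sketch below computes the population stability index (PSI), a common drift statistic; it is an illustration of the concept, not the exact statistic SageMaker Model Monitor computes.

```python
import math

def psi(baseline: list, current: list) -> float:
    """Population Stability Index between two binned distributions
    (each a list of bin proportions summing to 1). A common rule of
    thumb treats values above roughly 0.2 as significant drift."""
    eps = 1e-6  # avoid log(0) for empty bins
    total = 0.0
    for b, c in zip(baseline, current):
        b, c = max(b, eps), max(c, eps)
        total += (c - b) * math.log(c / b)
    return total

baseline = [0.25, 0.25, 0.25, 0.25]
print(psi(baseline, baseline))                       # identical distributions -> 0.0
print(psi(baseline, [0.10, 0.20, 0.30, 0.40]))       # shifted distribution -> positive
```

In the managed setting, Model Monitor captures endpoint traffic, runs a scheduled comparison against the baseline, and emits violations that the remediation topics in Module 6 then act on.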