    Certified Tester AI Testing

    AI and Software Engineering
    The ISTQB® Certified Tester AI Testing (CT-AI) certification is a globally recognized credential for professionals involved in testing artificial intelligence (AI) systems. It provides foundational knowledge of AI concepts and technologies, along with the specific challenges associated with testing AI-driven applications. This course covers fundamental AI concepts, testing methodologies for AI-based systems, challenges unique to AI such as bias and explainability, key quality characteristics such as performance and security, and the tools and techniques used to assess AI-based systems effectively.

    Certified Tester AI Testing Objectives

    • Understand the current state and expected trends of AI
    • Experience the implementation and testing of an ML model and recognize where testers can best influence its quality
    • Understand the challenges associated with testing AI-based systems, such as their self-learning capabilities, bias, ethics, complexity, non-determinism, transparency, and explainability
    • Contribute to the test strategy for an AI-based system
    • Design and execute test cases for AI-based systems
    • Recognize the special requirements for the test infrastructure to support the testing of AI-based systems
    • Understand how AI can be used to support software testing


    Key Points of the Training Program

    We follow a structured, step-by-step process to deliver each training program effectively; the key points for this course are outlined below.
    • Certified Tester AI Testing Prerequisites

      The ISTQB Certified Tester AI Testing (CT-AI) certification requires candidates to have completed the ISTQB Certified Tester Foundation Level (CTFL) as a prerequisite. This ensures that candidates have a basic understanding of software testing principles before advancing to AI-specific testing concepts.

    • Certified Tester AI Testing Training Format

      In-Person

      Online

    • Certified Tester AI Testing Outline

      Module 1: Introduction to AI

      1.1 Definition of AI and AI Effect

      1.2 Narrow, General and Super AI

      1.3 AI-Based and Conventional Systems

      1.4 AI Technologies

      1.5 AI Development Frameworks

      1.6 Hardware for AI-Based Systems

      1.7 AI as a Service (AIaaS)

      1.7.1 Contracts for AI as a Service

      1.7.2 AIaaS Examples

      1.8 Pre-Trained Models

      1.8.1 Introduction to Pre-Trained Models

      1.8.2 Transfer Learning

      1.8.3 Risks of using Pre-Trained Models and Transfer Learning

      1.9 Standards, Regulations and AI

      Module 2: Quality Characteristics for AI-Based Systems

      2.1 Flexibility and Adaptability

      2.2 Autonomy

      2.3 Evolution

      2.4 Bias

      2.5 Ethics

      2.6 Side Effects and Reward Hacking

      2.7 Transparency, Interpretability and Explainability

      2.8 Safety and AI

      Module 3: Machine Learning (ML) – Overview

      3.1 Forms of ML

      3.1.1 Supervised Learning

      3.1.2 Unsupervised Learning

      3.1.3 Reinforcement Learning

      3.2 ML Workflow

      3.3 Selecting a Form of ML

      3.4 Factors Involved in ML Algorithm Selection

      3.5 Overfitting and Underfitting

      3.5.1 Overfitting

      3.5.2 Underfitting

      3.5.3 Hands-On Exercise: Demonstrate Overfitting and Underfitting

      Module 4: ML - Data

      4.1 Data Preparation as Part of the ML Workflow

      4.1.1 Challenges in Data Preparation

      4.1.2 Hands-On Exercise: Data Preparation for ML

      4.2 Training, Validation and Test Datasets in the ML Workflow

      4.2.1 Hands-On Exercise: Identify Training and Test Data and Create an ML Model

      4.3 Dataset Quality Issues

      4.4 Data Quality and its Effect on the ML Model

      4.5 Data Labelling for Supervised Learning

      4.5.1 Approaches to Data Labelling

      4.5.2 Mislabeled Data in Datasets

      Module 5: ML Functional Performance Metrics

      5.1 Confusion Matrix

      5.2 Additional ML Functional Performance Metrics for Classification, Regression and Clustering

      5.3 Limitations of ML Functional Performance Metrics

      5.4 Selecting ML Functional Performance Metrics

      5.4.1 Hands-On Exercise: Evaluate the Created ML Model

      5.5 Benchmark Suites for ML

      Module 6: ML - Neural Networks and Testing

      6.1 Neural Networks

      6.1.1 Hands-On Exercise: Implement a Simple Perceptron

      6.2 Coverage Measures for Neural Networks

      Module 7: Testing AI-Based Systems Overview

      7.1 Specification of AI-Based Systems

      7.2 Test Levels for AI-Based Systems

      7.2.1 Input Data Testing

      7.2.2 ML Model Testing

      7.2.3 Component Testing

      7.2.4 Component Integration Testing

      7.2.5 System Testing

      7.2.6 Acceptance Testing

      7.3 Test Data for Testing AI-based Systems

      7.4 Testing for Automation Bias in AI-Based Systems

      7.5 Documenting an AI Component

      7.6 Testing for Concept Drift

      7.7 Selecting a Test Approach for an ML System

      Module 8: Testing AI-Specific Quality Characteristics

      8.1 Challenges Testing Self-Learning Systems

      8.2 Testing Autonomous AI-Based Systems

      8.3 Testing for Algorithmic, Sample and Inappropriate Bias

      8.4 Challenges Testing Probabilistic and Non-Deterministic AI-Based Systems

      8.5 Challenges Testing Complex AI-Based Systems

      8.6 Testing the Transparency, Interpretability and Explainability of AI-Based Systems

      8.6.1 Hands-On Exercise: Model Explainability

      8.7 Test Oracles for AI-Based Systems

      8.8 Test Objectives and Acceptance Criteria

      Module 9: Methods and Techniques for the Testing of AI-Based Systems

      9.1 Adversarial Attacks and Data Poisoning

      9.1.1 Adversarial Attacks

      9.1.2 Data Poisoning

      9.2 Pairwise Testing

      9.2.1 Hands-On Exercise: Pairwise Testing

      9.3 Back-to-Back Testing

      9.4 A/B Testing

      9.5 Metamorphic Testing (MT)

      9.5.1 Hands-On Exercise: Metamorphic Testing

      9.6 Experience-Based Testing of AI-Based Systems

      9.6.1 Hands-On Exercise: Exploratory Testing and Exploratory Data Analysis (EDA)

      9.7 Selecting Test Techniques for AI-Based Systems

      Module 10: Test Environments for AI-Based Systems

      10.1 Test Environments for AI-Based Systems

      10.2 Virtual Test Environments for Testing AI-Based Systems

      Module 11: Using AI for Testing

      11.1 AI Technologies for Testing

      11.1.1 Hands-On Exercise: The Use of AI in Testing

      11.2 Using AI to Analyze Reported Defects

      11.3 Using AI for Test Case Generation

      11.4 Using AI for the Optimization of Regression Test Suites

      11.5 Using AI for Defect Prediction

      11.5.1 Hands-On Exercise: Build a Defect Prediction System

      11.6 Using AI for Testing User Interfaces

      11.6.1 Using AI to Test Through the Graphical User Interface (GUI)

      11.6.2 Using AI to Test the GUI