Custom and Distributed Training with TensorFlow
- Rating: 4.8
Approx. 24 hours to complete
Course Summary
Learn how to customize TensorFlow training and distribute it across multiple machines in this course. Explore the architecture of distributed training, use TensorFlow's High-Level API to train models, and scale training to multiple GPUs and machines.
Key Learning Points
- Understand distributed training with TensorFlow
- Learn how to customize TensorFlow training to your needs
- Scale training to multiple GPUs and machines
Learning Outcomes
- Understand the architecture of distributed training
- Train models using TensorFlow's High-Level API
- Scale training to multiple GPUs and machines
Prerequisites or good-to-have knowledge before taking this course
- Familiarity with machine learning concepts
- Basic knowledge of TensorFlow
Course Difficulty Level
Intermediate
Course Format
- Online
- Self-paced
Similar Courses
- Advanced Machine Learning with TensorFlow on Google Cloud Platform
- Applied Data Science with Python
Notable People in This Field
- Andrew Ng
- Ian Goodfellow
Description
In this course, you will:
- Work with tensors and compute gradients using GradientTape
- Build custom training loops, including losses, metrics, and validation
- Generate graph code from eager code with AutoGraph
- Scale training across multiple GPUs, machines, and TPUs with distribution strategies
Outline
- Differentiation and Gradients
- A conversation with Andrew Ng: Overview of course 2
- What is a tensor?
- Creating tensors in code
- Math operations with tensors
- Basic Tensors code walkthrough
- Broadcasting, operator overloading and Numpy compatibility
- Evaluating variables and changing data types
- Gradient Tape
- Gradient Descent using Gradient Tape
- Calculate gradients on higher order functions
- Persistent=true and higher order gradients
- Gradient Tape basics code walkthrough
- Connect with your mentors and fellow learners on Slack!
- Reference: CNN for visual recognition
- Tensors and Gradient Tape
- Custom Training
- Custom Training Loop steps
- Loss and gradient descent
- Define Training Loop and Validate Model
- Training Basics code walkthrough
- Training steps and data pipeline
- Define the training loop
- Gradients, metrics, and validation
- Fashion MNIST Custom Training Loop code walkthrough
- Reference: tf.keras.metrics
- Custom Training
- Graph Mode
- Benefits of graph mode
- Generating graph code
- AutoGraph Basics code walkthrough
- Control dependencies and flows
- Loops and tracing variables
- AutoGraph code walkthrough
- Reference: Fizz Buzz
- AutoGraph
- Distributed Training
- Intro to distribution strategies
- Types of distribution strategies
- Converting code to the Mirrored Strategy
- Mirrored Strategy code walkthrough
- Custom Training for Multiple GPU Mirrored Strategy
- Multi GPU Mirrored Strategy code walkthrough
- TPU Strategy
- TPU Strategy code walkthrough
- Other Distributed Strategies
- References used in Other Distributed Strategies
- References
- Acknowledgments
- Distributed Strategy
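Illustrative Code Sketches
The outline's first module centers on tensors and GradientTape. As a rough illustration of the kind of code those walkthroughs cover (a minimal sketch, not taken from the course materials), computing first- and second-order gradients with tf.GradientTape looks like this:
```python
import tensorflow as tf

# A trainable variable and a simple scalar function of it.
x = tf.Variable(3.0)

with tf.GradientTape() as tape:
    y = x ** 2 + 2.0 * x + 1.0    # y = (x + 1)^2

dy_dx = tape.gradient(y, x)       # dy/dx = 2x + 2 -> 8.0 at x = 3.0
print(dy_dx.numpy())

# Nested tapes give higher-order gradients, as in the
# "higher order gradients" lessons.
with tf.GradientTape() as outer:
    with tf.GradientTape() as inner:
        z = x ** 3
    dz_dx = inner.gradient(z, x)      # 3x^2 -> 27.0
d2z_dx2 = outer.gradient(dz_dx, x)    # 6x   -> 18.0
print(dz_dx.numpy(), d2z_dx2.numpy())
```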
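The Custom Training module builds a training loop by hand instead of calling model.fit. A minimal sketch under illustrative assumptions (a toy regression model and random data; the course itself works through Fashion MNIST):
```python
import tensorflow as tf

# Toy model and data, for illustration only.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(10,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1),
])
loss_fn = tf.keras.losses.MeanSquaredError()
optimizer = tf.keras.optimizers.Adam()
train_loss = tf.keras.metrics.Mean(name="train_loss")

x = tf.random.normal((256, 10))
y = tf.random.normal((256, 1))
dataset = tf.data.Dataset.from_tensor_slices((x, y)).batch(32)

for epoch in range(3):
    for x_batch, y_batch in dataset:
        # Forward pass under the tape, then apply gradients manually.
        with tf.GradientTape() as tape:
            preds = model(x_batch, training=True)
            loss = loss_fn(y_batch, preds)
        grads = tape.gradient(loss, model.trainable_variables)
        optimizer.apply_gradients(zip(grads, model.trainable_variables))
        train_loss.update_state(loss)
    print(f"epoch {epoch}: loss={train_loss.result():.4f}")
    train_loss.reset_state()
```
The same loop structure extends naturally to per-epoch validation and additional tf.keras.metrics objects, which is what the "Gradients, metrics, and validation" lessons cover.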
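The Graph Mode module covers tf.function and AutoGraph. A small sketch (the function and its logic are invented for illustration) showing Python control flow being traced into a graph, and how to inspect the code AutoGraph generates:
```python
import tensorflow as tf

@tf.function
def keep_if_positive_sum(x):
    # AutoGraph converts this Python `if` into graph control flow.
    if tf.reduce_sum(x) > 0:
        return x
    return tf.zeros_like(x)

# Calling the decorated function runs the traced graph.
print(keep_if_positive_sum(tf.constant([1.0, -2.0, 3.0])))

# Inspect the code AutoGraph generated from the original Python function.
print(tf.autograph.to_code(keep_if_positive_sum.python_function))
```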
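The Distributed Training module introduces tf.distribute strategies, starting with MirroredStrategy for single-host, multi-GPU training. A minimal sketch with a toy Keras model and random data (the same scope pattern runs, without speedup, on a machine with one or zero GPUs):
```python
import tensorflow as tf

strategy = tf.distribute.MirroredStrategy()
print("Replicas in sync:", strategy.num_replicas_in_sync)

# Variables (model weights and optimizer state) must be created inside
# the strategy scope so they are mirrored across all replicas.
with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(10,)),
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer="adam", loss="mse")

# Keras fit() distributes the batches across replicas automatically.
x = tf.random.normal((256, 10))
y = tf.random.normal((256, 1))
model.fit(x, y, epochs=2, batch_size=64, verbose=0)
```
TPUStrategy and the multi-worker strategies covered later in the module follow the same scope-based pattern, swapping in a different strategy object.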
Summary of User Reviews
This course on custom and distributed training with TensorFlow received high praise from many users for its comprehensive coverage and practical examples. Users appreciated the instructor's expertise and clear explanations.
Key Aspect Users Liked About This Course
The practical examples provided in the course were a standout feature for many users, allowing them to apply the concepts to real-world scenarios.
Pros from User Reviews
- Comprehensive coverage of custom distributed training with TensorFlow
- Practical examples that help users apply the concepts in real-world scenarios
- Expert instructor with clear explanations
- Engaging assignments that challenge users to hone their skills
- Great community support and resources
Cons from User Reviews
- Some users found the course content to be too advanced for beginners
- The course assumes a certain level of familiarity with TensorFlow
- Some users found the pace of the course to be too fast
- Lack of hands-on coding exercises in some sections
- Some users felt that the course could benefit from more interactive elements