What is a Batch?
The batch size is a hyperparameter that defines the number of samples to work through before updating the internal model parameters.
Think of a batch as a for-loop iterating over one or more samples and making predictions. At the end of the batch, the predictions are compared to the expected output variables and an error is calculated. From this error, the update algorithm improves the model, for example by moving it down along the error gradient.
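To make that loop concrete, here is a minimal NumPy sketch of one batch update for a simple linear model trained with squared error. The function name `batch_update`, the learning rate `lr`, and the linear model itself are illustrative assumptions, not something specified above; any model and update rule would follow the same pattern.

```python
import numpy as np

def batch_update(w, X_batch, y_batch, lr=0.01):
    """One batch: predict on every sample, compute the error against the
    expected outputs, then move the weights down the error gradient.
    (Illustrative linear model with mean squared error.)"""
    preds = X_batch @ w                       # predictions for the whole batch
    error = preds - y_batch                   # compare to expected output variables
    grad = X_batch.T @ error / len(y_batch)   # mean gradient of the squared error
    return w - lr * grad                      # one gradient-descent step
```

A full training loop just calls a step like this repeatedly, once per batch, until the dataset has been swept through.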
A training dataset can be divided into one or more batches.
The batch size determines which name the learning algorithm goes by:

- Batch gradient descent: the batch size equals the size of the training set, so all training samples form one batch.
- Stochastic gradient descent: the batch size is one sample, so the model is updated after every sample.
- Mini-batch gradient descent: the batch size is more than one sample and less than the size of the training dataset.
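The sketch below shows how a single `batch_size` parameter selects among these three variants. It builds on the same assumed linear model as the sketch above; `train_one_epoch` and the per-epoch shuffling are illustrative choices, not part of the definitions.

```python
import numpy as np

def train_one_epoch(w, X, y, batch_size, lr=0.01):
    """One pass over the training set, updating the weights after each batch.

    batch_size == len(X)     -> batch gradient descent (one update per epoch)
    batch_size == 1          -> stochastic gradient descent (update per sample)
    1 < batch_size < len(X)  -> mini-batch gradient descent
    """
    order = np.random.permutation(len(X))     # shuffle sample order each epoch
    for start in range(0, len(X), batch_size):
        idx = order[start:start + batch_size]  # indices of the current batch
        preds = X[idx] @ w                     # predict on the batch
        error = preds - y[idx]                 # compare to expected outputs
        grad = X[idx].T @ error / len(idx)     # mean error gradient w.r.t. w
        w = w - lr * grad                      # move down the error gradient
    return w
```

For example, with 200 training samples and a batch size of 5, this loop performs 40 updates per epoch; with a batch size of 200 it performs a single batch-gradient-descent update, and with a batch size of 1 it performs 200 stochastic updates.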