A detailed comparison of Google Colab's compute unit consumption across different compute instances. This article evaluates the cost-effectiveness of options like the Tesla T4 and A100 GPUs, as well as the CPU and TPU instances.
Colab consumption measurement table
Here are the values I measured. The measurements were made on October 1, 2024 on the Colab Pro plan. The Colab Pro+ plan differs only in the number of compute units included and in priority access to GPUs, not in the consumption rates, which should be the same. The free Colab tier does not consume compute units at all, so it is not included in this table.
| Name         | CPU cores | RAM    | Disk   | GPU  | GPU RAM | Compute units/h |
|--------------|-----------|--------|--------|------|---------|-----------------|
| CPU          | 2         | 13 GB  | 226 GB | –    | –       | 0.07            |
| CPU High-RAM | 8         | 51 GB  | 226 GB | –    | –       | 0.14            |
| TPU v2-8     | 96        | 335 GB | 226 GB | –    | –       | 1.76            |
| T4           | 2         | 13 GB  | 236 GB | T4   | 15 GB   | 1.58            |
| T4 High-RAM  | 8         | 51 GB  | 236 GB | T4   | 15 GB   | 1.67            |
| L4           | 12        | 53 GB  | 236 GB | L4   | 22 GB   | 3.00            |
| A100         | 12        | 84 GB  | 236 GB | A100 | 40 GB   | 10.59           |
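The hardware columns of the table can be verified from within a notebook. Below is a minimal sketch using only the Python standard library; it assumes a Linux runtime (which Colab provides) and a working `nvidia-smi` on GPU instances. Note that the compute unit consumption itself is not exposed programmatically; it is only shown in the Resources panel of the Colab UI, so those numbers were read off by hand.

```python
# Sanity-check the allocated Colab instance's hardware from a notebook cell.
import os
import shutil
import subprocess

# Number of CPU cores visible to the runtime
print("CPU cores:", os.cpu_count())

# Total RAM, read from /proc/meminfo (Colab runtimes are Linux VMs)
with open("/proc/meminfo") as f:
    mem_kb = int(f.readline().split()[1])
print(f"RAM: {mem_kb / 1024**2:.0f} GB")

# Size of the root filesystem
total, used, free = shutil.disk_usage("/")
print(f"Disk: {total / 1024**3:.0f} GB")

# GPU name and memory, if a GPU runtime is selected
try:
    out = subprocess.run(
        ["nvidia-smi", "--query-gpu=name,memory.total", "--format=csv,noheader"],
        capture_output=True, text=True, check=True,
    )
    print("GPU:", out.stdout.strip())
except (FileNotFoundError, subprocess.CalledProcessError):
    print("GPU: none (CPU or TPU runtime)")
```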
Interpretation of results
- Only CPU and T4 instances distinguish between High RAM and Low RAM. All other instances are always High RAM.
- Low RAM instances always have only 2 CPU cores, so they are slower for almost every task. Since the corresponding High RAM instance consumes only slightly more compute units per hour, the Low RAM variants are rarely worth it; apart from some simple experiments, I don't use them at all.
- The cheapest instance is CPU Low RAM.
- High RAM doesn't just mean more RAM; it also means more CPU cores. It doesn't affect disk space, though.
- Disk space is 236 GB for all GPU instances (T4, L4 or A100) and 226 GB for all non-GPU instances (CPU and TPU).
- The most expensive by a wide margin is the A100 instance. It is also the most powerful GPU available in Colab, suited to heavy AI workloads; the sketch after this list shows how quickly it burns through a monthly allotment of compute units.
- The TPU instance is surprisingly cheap and interesting; look at the parameters: 335 GB of RAM, 96 CPU cores, and a consumption of only 1.76 compute units per hour. Even if you have no direct use for the TPU itself, this is an extremely powerful instance at a low price, handy for example for processing large videos with FFmpeg or MoviePy.
- The older GPUs (V100 and P100) that we were used to before, and on which I trained e.g. *This beach does not exist*, are no longer available in Colab. The same applies to TPU v1.
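To put the consumption rates into perspective, here is a small sketch that converts them into hours of runtime per 100 compute units, which is, to my knowledge, the monthly allotment of the Colab Pro plan (Pro+ gets more). The rates are taken from the table above; the 100-unit budget is an assumption about current pricing and may change.

```python
# How long 100 compute units last on each instance type,
# using the consumption rates measured in the table above.
RATES = {            # compute units per hour
    "CPU": 0.07,
    "CPU High-RAM": 0.14,
    "TPU v2-8": 1.76,
    "T4": 1.58,
    "T4 High-RAM": 1.67,
    "L4": 3.00,
    "A100": 10.59,
}

BUDGET = 100  # assumed monthly compute units on Colab Pro

for name, rate in RATES.items():
    hours = BUDGET / rate
    print(f"{name:14s} {hours:7.1f} h  (~{hours / 24:.1f} days)")
```

With these numbers, the A100 exhausts the whole monthly budget in roughly nine and a half hours, the T4 in about two and a half days, while a plain CPU instance could in principle run for about two months.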