Install TensorFlow on Ubuntu 24.04 / 22.04 with GPU and CPU Support

TensorFlow is Google’s open-source machine learning framework used for deep learning, computer vision, natural language processing, and building production ML pipelines. It runs on CPUs, GPUs, and TPUs, making it one of the most widely adopted frameworks for training and deploying neural networks at scale.

Original content from computingforgeeks.com - post 5308

This guide walks through installing TensorFlow 2.21 on Ubuntu 24.04 LTS and Ubuntu 22.04 LTS with both CPU and GPU support. We cover pip installation in a virtual environment, NVIDIA GPU setup with CUDA, Docker-based workflows, and Jupyter Notebook integration. The official TensorFlow documentation at tensorflow.org has additional platform-specific notes.

Prerequisites

  • Ubuntu 24.04 LTS or Ubuntu 22.04 LTS (x86_64)
  • Python 3.10, 3.11, 3.12, or 3.13 (TensorFlow 2.21 dropped Python 3.9 support)
  • At least 4 GB RAM for CPU training, 8 GB+ recommended for GPU workloads
  • sudo or root access
  • For GPU support: NVIDIA GPU with CUDA Compute Capability 3.5 or higher, driver version 525.60.13 or newer

Step 1: Update System and Install Python

Start by updating packages and installing Python with the development headers and venv module.

sudo apt update && sudo apt upgrade -y

Install Python 3, pip, and the venv module. Ubuntu 24.04 ships Python 3.12 by default, and Ubuntu 22.04 ships Python 3.10 – both are supported by TensorFlow 2.21.

sudo apt install -y python3 python3-pip python3-venv python3-dev

Confirm your Python version is 3.10 or higher:

python3 --version

On Ubuntu 24.04, this returns Python 3.12.x. On Ubuntu 22.04, you get Python 3.10.x. Both work with TensorFlow 2.21.

Step 2: Create a Python Virtual Environment

Always install TensorFlow inside a virtual environment to avoid conflicts with system Python packages. This is the recommended approach from Google’s official install guide.

python3 -m venv ~/tensorflow-env

Activate the virtual environment:

source ~/tensorflow-env/bin/activate

Upgrade pip inside the virtual environment to avoid installation issues:

pip install --upgrade pip

Your shell prompt should now show (tensorflow-env) at the beginning, confirming the virtual environment is active. If you need a refresher on pip and Python package management on Ubuntu, check our dedicated guide.

Step 3: Install TensorFlow (CPU Only)

Since TensorFlow 2.0, CPU and GPU support are bundled in a single package. If you only need CPU support or your machine has no NVIDIA GPU, a simple pip install is all you need.

pip install tensorflow

This installs the latest stable release (2.21.0 at the time of writing) with CPU support. The download is roughly 600 MB including all dependencies like NumPy, Keras, and protobuf.

Step 4: Verify TensorFlow CPU Installation

Run a quick check to confirm TensorFlow loads and prints its version:

python3 -c "import tensorflow as tf; print(tf.__version__)"

The output should show the installed version:

2.21.0

Run a basic computation to confirm everything works:

python3 -c "import tensorflow as tf; print(tf.reduce_sum(tf.random.normal([1000, 1000])))"

You should see a tensor result with a random floating point value – this confirms TensorFlow is computing correctly on your CPU.

Step 5: Install TensorFlow with GPU Support

GPU acceleration dramatically speeds up model training – often 10-50x faster than CPU for deep learning workloads. TensorFlow 2.21 requires CUDA 12.5 and cuDNN 9.3 for GPU support on Linux.

Install NVIDIA GPU Drivers

First, check if your system already has NVIDIA drivers installed:

nvidia-smi

If the command returns your GPU model and driver version (525.60.13 or newer), skip to the CUDA section. If not, install the drivers:

sudo apt install -y nvidia-driver-560

Reboot after installing the driver:

sudo reboot

After reboot, verify the driver is loaded:

nvidia-smi

The output should display your GPU model, driver version, and CUDA version supported by the driver:

+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 560.35.03              Driver Version: 560.35.03      CUDA Version: 12.6     |
|-----------------------------------------+------------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id          Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |           Memory-Usage | GPU-Util  Compute M. |
|=========================================+========================+======================|
|   0  NVIDIA GeForce RTX 3080        Off | 00000000:01:00.0  Off |                  N/A |
|  30%   35C    P8              15W / 320W |      1MiB / 10240MiB  |      0%      Default |
+-----------------------------------------+------------------------+----------------------+

Install TensorFlow with CUDA Support

The easiest way to get GPU support is using the [and-cuda] extra dependency, which automatically installs compatible CUDA and cuDNN libraries inside your virtual environment. Make sure your virtual environment is activated, then run:

pip install 'tensorflow[and-cuda]'

This downloads and installs CUDA 12.5, cuDNN 9.3, and all required NVIDIA libraries alongside TensorFlow. The total download is around 4-5 GB, so it takes a few minutes depending on your connection speed.

Step 6: Verify GPU Detection in TensorFlow

After installation, confirm TensorFlow detects your GPU:

python3 -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"

If the GPU is detected, you see a list with your device:

[PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU')]

An empty list [] means TensorFlow does not see the GPU – check the troubleshooting section below if that happens.

For more detailed GPU information, run this check:

python3 -c "
import tensorflow as tf
print('TensorFlow version:', tf.__version__)
print('Built with CUDA:', tf.test.is_built_with_cuda())
print('GPU devices:', tf.config.list_physical_devices('GPU'))
"

Note that tf.test.is_built_with_cuda() only reports whether your TensorFlow build includes CUDA support – the device list is what confirms a GPU is actually visible.
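To see which device actually executes each operation, you can turn on device placement logging. A minimal sketch – it also works on CPU-only machines, where the log will show CPU:0 placements:

```python
import tensorflow as tf

# Print the device (CPU:0 or GPU:0) chosen for each operation
tf.debugging.set_log_device_placement(True)

a = tf.constant([[1.0, 2.0], [3.0, 4.0]])
b = tf.constant([[1.0, 0.0], [0.0, 1.0]])  # identity matrix
c = tf.matmul(a, b)

print(c.numpy())
```

If a GPU is present, the matmul placement message names GPU:0; otherwise it names CPU:0.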

Step 7: Run a Simple ML Model (MNIST Demo)

Test your TensorFlow installation with a practical example – training a digit classification model on the MNIST dataset. This standard benchmark takes a minute or two on CPU and only seconds per epoch on a GPU.

Create a Python script:

vi ~/mnist_test.py

Add the following code:

import tensorflow as tf

# Load MNIST dataset
mnist = tf.keras.datasets.mnist
(x_train, y_train), (x_test, y_test) = mnist.load_data()

# Normalize pixel values to 0-1
x_train, x_test = x_train / 255.0, x_test / 255.0

# Build a simple neural network
model = tf.keras.models.Sequential([
    tf.keras.layers.Input(shape=(28, 28)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(10)
])

# Compile and train
model.compile(
    optimizer='adam',
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=['accuracy']
)

model.fit(x_train, y_train, epochs=5, validation_split=0.1)

# Evaluate on test data
test_loss, test_acc = model.evaluate(x_test, y_test, verbose=2)
print(f"\nTest accuracy: {test_acc:.4f}")

Run the training script:

python3 ~/mnist_test.py

After 5 epochs, the model should reach around 97-98% test accuracy. On a GPU, each epoch takes a few seconds. On CPU, expect 10-20 seconds per epoch depending on your hardware.
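Once training finishes you will usually want to persist the model. A small sketch of the save/load round-trip – the filename demo_model.keras and the tiny model here are arbitrary stand-ins; you would save the trained MNIST model the same way:

```python
import tensorflow as tf

# A tiny throwaway model, just to demonstrate the round-trip
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(4,)),
    tf.keras.layers.Dense(2),
])

# .keras is the native serialization format in TensorFlow 2.x
model.save("demo_model.keras")
restored = tf.keras.models.load_model("demo_model.keras")

# Both models produce identical predictions for the same input
x = tf.ones((1, 4))
print(model(x).numpy(), restored(x).numpy())
```

The .keras format stores the architecture, weights, and compile configuration in a single file, so load_model returns a ready-to-use model.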

Step 8: Install TensorFlow in Docker

Docker is a clean way to run TensorFlow without modifying your system Python or installing CUDA libraries directly. The official TensorFlow Docker images come pre-configured with all dependencies. If you need Docker on your system, follow our guide on installing Docker and Docker Compose on Ubuntu.

CPU-only Docker Image

Pull and run the latest TensorFlow Docker image:

docker run -it --rm tensorflow/tensorflow:latest python3 -c "import tensorflow as tf; print(tf.__version__)"

GPU Docker Image

For GPU support in Docker, you need the NVIDIA Container Toolkit installed on the host. Install it first:

curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg
curl -s -L https://nvidia.github.io/libnvidia-container/stable/deb/nvidia-container-toolkit.list | \
  sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' | \
  sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list
sudo apt update
sudo apt install -y nvidia-container-toolkit
sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker

Then run TensorFlow with GPU access:

docker run --gpus all -it --rm tensorflow/tensorflow:latest-gpu python3 -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"

The --gpus all flag passes all available GPUs into the container.

Run TensorFlow Interactively in Docker

To get a shell inside the TensorFlow container with your project files mounted:

docker run --gpus all -it --rm -v ~/projects:/workspace -w /workspace tensorflow/tensorflow:latest-gpu bash

This mounts your local ~/projects directory into /workspace inside the container, so your scripts and data are accessible.

Step 9: Install Jupyter Notebook with TensorFlow

Jupyter Notebooks are the standard interactive environment for ML development. Install Jupyter inside your TensorFlow virtual environment:

source ~/tensorflow-env/bin/activate
pip install jupyter

Launch the notebook server:

jupyter notebook --ip=0.0.0.0 --port=8888

Open the URL printed in the terminal (usually http://localhost:8888/?token=...) in your browser. Create a new Python 3 notebook and test TensorFlow:

import tensorflow as tf
print(tf.__version__)
print("GPU available:", len(tf.config.list_physical_devices('GPU')) > 0)

If you are running Jupyter on a remote server, allow port 8888 through the firewall:

sudo ufw allow 8888/tcp

TensorFlow also ships an official Docker image with Jupyter pre-installed:

docker run -it --rm -p 8888:8888 tensorflow/tensorflow:latest-jupyter

Common TensorFlow Errors and Fixes

These are the most frequent issues when installing TensorFlow on Ubuntu and how to fix them.

GPU Not Detected (Empty Device List)

If tf.config.list_physical_devices('GPU') returns an empty list:

  • Verify NVIDIA drivers are installed and working with nvidia-smi
  • Make sure you installed with pip install 'tensorflow[and-cuda]' not just pip install tensorflow
  • Check that your GPU has CUDA Compute Capability 3.5 or higher
  • Restart your Python session after installation – TensorFlow caches device detection

CUDA Version Mismatch

If you see errors about CUDA library versions not matching:

  • The [and-cuda] pip extra installs its own CUDA libraries in the virtual environment, so system-wide CUDA is not required
  • If you have a system-wide CUDA installation that conflicts, set the library path to prefer the pip-installed version:
export LD_LIBRARY_PATH=$VIRTUAL_ENV/lib/python3.12/site-packages/nvidia/cudnn/lib:$LD_LIBRARY_PATH

Out of Memory Errors During Training

If TensorFlow crashes with OOM errors on GPU, enable memory growth so it does not allocate all GPU RAM at startup:

import tensorflow as tf
gpus = tf.config.list_physical_devices('GPU')
for gpu in gpus:
    tf.config.experimental.set_memory_growth(gpu, True)

Add this at the top of your training scripts before any other TensorFlow operations.
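If you would rather cap GPU memory at a fixed amount instead of letting it grow, TensorFlow can create a logical device with a hard limit. A sketch, assuming a 2 GB cap on the first GPU – the snippet is a no-op on CPU-only machines:

```python
import tensorflow as tf

gpus = tf.config.list_physical_devices('GPU')
if gpus:
    # Limit TensorFlow to 2048 MB on the first GPU; must run
    # before any operation initializes the device
    tf.config.set_logical_device_configuration(
        gpus[0],
        [tf.config.LogicalDeviceConfiguration(memory_limit=2048)],
    )
print('GPUs found:', len(gpus))
```

A hard limit is useful when several processes share one GPU and each needs a guaranteed slice.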

ImportError or ModuleNotFoundError

This usually means you are running Python outside the virtual environment. Verify your environment is active:

which python3

The output should point to your virtual environment path, not the system Python:

/home/username/tensorflow-env/bin/python3

If it shows /usr/bin/python3, activate the environment first with source ~/tensorflow-env/bin/activate.

TensorFlow vs PyTorch Comparison

Both TensorFlow and PyTorch are production-ready ML frameworks. Here is a quick comparison to help choose the right one for your project.

| Feature | TensorFlow | PyTorch |
|---|---|---|
| Developer | Google | Meta (Facebook) |
| Execution mode | Eager + Graph (tf.function) | Eager by default |
| Production deployment | TF Serving, TF Lite, TF.js | TorchServe, ONNX |
| Mobile/Edge | TF Lite (strong) | PyTorch Mobile |
| High-level API | Keras (built-in) | torch.nn |
| Research popularity | Strong in industry | Dominant in academia |
| Visualization | TensorBoard (built-in) | TensorBoard (via plugin) |
| Installation | pip install tensorflow | pip install torch |

TensorFlow has an edge in production deployment with its ecosystem of serving tools, mobile support, and browser-based inference through TensorFlow.js. PyTorch is often preferred for research and rapid prototyping due to its Pythonic design. Many teams use both – PyTorch for experimentation and TensorFlow for production serving.

Conclusion

TensorFlow 2.21 is now installed on your Ubuntu system with CPU or GPU support. The [and-cuda] pip extra makes GPU setup much simpler than it was in earlier TensorFlow versions – no manual CUDA installation needed.

For production ML workloads, consider running TensorFlow inside Docker containers for reproducible environments, setting up TensorBoard for training visualization, and enabling mixed precision training with tf.keras.mixed_precision to speed up GPU training by 2-3x on modern NVIDIA GPUs.
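Enabling mixed precision is a one-line policy change. A minimal sketch – compute runs in float16 while variables stay float32, and the speedup only materializes on GPUs with Tensor Cores:

```python
import tensorflow as tf
from tensorflow.keras import mixed_precision

# Compute in float16, keep variables in float32 for numerical stability
mixed_precision.set_global_policy('mixed_float16')
print(mixed_precision.global_policy().name)
```

Set the policy before building your model; layers created afterwards pick it up automatically.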
