
PyTorch
Distributed Training: The torch.distributed backend enables scalable distributed training and performance optimization in research and production.
Get Started - PyTorch
Example (CUDA 12.6 wheel index): pip3 install torch torchvision --index-url https://download.pytorch.org/whl/cu126
PyTorch 2.x
Learn about PyTorch 2.x: faster performance, dynamic shapes, distributed training, and torch.compile.
PyTorch documentation — PyTorch 2.9 documentation
Extending PyTorch · Extending torch.func with autograd.Function · Frequently Asked Questions · Getting Started on Intel GPU · Gradcheck mechanics · HIP (ROCm) semantics · Features for large …
Previous PyTorch Versions
macOS: currently not supported in LTS. Linux and Windows (CUDA 10.2): pip3 install torch==1.8.2 torchvision==0.9.2 torchaudio==0.8.2 --extra-index-url …
Learning PyTorch with Examples
In PyTorch we can easily define our own autograd operator by defining a subclass of torch.autograd.Function and implementing the forward and backward functions. We can then …
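A minimal sketch of such a custom autograd operator, assuming a clamped-ReLU as the illustrative op (the class name and values are hypothetical, not from the tutorial itself):

```python
import torch

class MyReLU(torch.autograd.Function):
    """Custom autograd operator: ReLU defined via forward/backward."""

    @staticmethod
    def forward(ctx, input):
        ctx.save_for_backward(input)   # stash tensors needed in backward
        return input.clamp(min=0)

    @staticmethod
    def backward(ctx, grad_output):
        (input,) = ctx.saved_tensors
        grad_input = grad_output.clone()
        grad_input[input < 0] = 0      # gradient is zero where input was negative
        return grad_input

x = torch.tensor([-1.0, 2.0], requires_grad=True)
y = MyReLU.apply(x).sum()              # custom Functions are invoked via .apply
y.backward()
print(x.grad)                          # 0 for the negative entry, 1 for the positive one
```

Note that the function is called through `MyReLU.apply(...)` rather than instantiated directly; this is how `torch.autograd.Function` subclasses hook into the autograd graph.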
Domains - PyTorch
PyTorch on XLA Devices PyTorch runs on XLA devices, like TPUs, with the torch_xla package. This document describes how to run your models on these devices.
PyTorch 1.9 Release, including torch.linalg and Mobile Interpreter
Jun 15, 2021 · PyTorch 1.9 extends support for the new torch.profiler API to more builds, including Windows and Mac, and it is recommended in most cases instead of the previous …
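A minimal CPU-only sketch of the torch.profiler API mentioned above (the tensor sizes are illustrative assumptions):

```python
import torch
from torch.profiler import profile, ProfilerActivity

# Record CPU-side operator events for a small workload.
with profile(activities=[ProfilerActivity.CPU]) as prof:
    a = torch.randn(64, 64)
    b = a @ a  # the matmul is recorded as an aten-level event

# Summarize recorded ops, sorted by total CPU time.
print(prof.key_averages().table(sort_by="cpu_time_total", row_limit=5))
```

On GPU builds, adding `ProfilerActivity.CUDA` to `activities` records kernel times as well.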
PyTorch 2.7 Release
Apr 23, 2025 · Enable torch.compile on Windows 11 for Intel GPUs, delivering the same performance advantages over eager mode as on Linux. Optimize the performance of PyTorch 2 Export Post …
Build the Neural Network — PyTorch Tutorials 2.9.0+cu128 …
The torch.nn namespace provides all the building blocks you need to build your own neural network. Every module in PyTorch subclasses nn.Module. A neural network is a module …
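A minimal sketch of building a network from torch.nn blocks by subclassing nn.Module (the class name and layer sizes are illustrative assumptions, not the tutorial's exact model):

```python
import torch
from torch import nn

class TinyNet(nn.Module):
    """A small feed-forward network composed of nn building blocks."""

    def __init__(self):
        super().__init__()
        self.layers = nn.Sequential(
            nn.Linear(4, 8),   # 4 input features -> 8 hidden units
            nn.ReLU(),
            nn.Linear(8, 2),   # 8 hidden units -> 2 outputs
        )

    def forward(self, x):
        return self.layers(x)

net = TinyNet()
out = net(torch.randn(3, 4))   # a batch of 3 samples with 4 features each
print(out.shape)               # torch.Size([3, 2])
```

Because TinyNet is itself an nn.Module, it can be nested inside larger modules, moved between devices with `.to(...)`, and iterated over with `.parameters()` for training.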