Transformer-based language models have achieved state-of-the-art performance on NLP classification tasks, but fully fine-tuning all model parameters is resource-intensive: it demands substantial compute and memory, and it produces a separate full-size copy of the weights for every downstream task. This article surveys efficient alternatives to full fine-tuning for smaller transformer models (e.g., BERT-base or similar) in classification settings.
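To make the contrast concrete, the sketch below shows one of the simplest parameter-efficient alternatives: freezing the pretrained encoder and training only the classification head. It assumes the Hugging Face `transformers` library; the checkpoint name `bert-base-uncased` and the two-label setup are illustrative choices, not requirements of any particular method surveyed here.

```python
# Minimal sketch (assumes the Hugging Face `transformers` library is installed).
# Freezes the BERT-base encoder and leaves only the classification head trainable,
# then reports how few parameters remain trainable compared with full fine-tuning.
from transformers import AutoModelForSequenceClassification

# Illustrative checkpoint and label count; swap in your own task's values.
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2
)

# Freeze every parameter except the classification head
# (BERT's head parameters are named "classifier.*").
for name, param in model.named_parameters():
    param.requires_grad = name.startswith("classifier")

trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
total = sum(p.numel() for p in model.parameters())
print(f"Trainable parameters: {trainable:,} of {total:,} "
      f"({100 * trainable / total:.2f}%)")
```

On BERT-base this leaves well under one percent of the roughly 110 million parameters trainable, which is the kind of reduction the methods surveyed in the following sections aim for while recovering most of the accuracy of full fine-tuning.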