Adapting pre-trained LLMs for specific downstream tasks.
Adapting a pre-trained model to a specific labeled dataset.
Parameter-efficient methods like LoRA for fine-tuning LLMs by training only a small number of additional parameters.
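A minimal numpy-only sketch of the LoRA idea, under assumed dimensions: the frozen weight W is adapted through a low-rank update B @ A, so only r * (d_in + d_out) parameters are trained instead of d_in * d_out.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, r, alpha = 64, 64, 4, 8  # assumed sizes for illustration

W = rng.normal(size=(d_out, d_in))     # frozen pre-trained weight
A = rng.normal(size=(r, d_in)) * 0.01  # trainable, small random init
B = np.zeros((d_out, r))               # trainable, zero init: no change at start

def lora_forward(x):
    # Effective weight is W + (alpha / r) * B @ A, computed without
    # ever materializing the full-rank update.
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.normal(size=(d_in,))
# With B zeroed, the adapted model reproduces the frozen model exactly.
assert np.allclose(lora_forward(x), W @ x)
```

Note the parameter saving: here A and B together hold 4 * (64 + 64) = 512 trainable values versus 4096 in W.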
Fine-tuning on a collection of tasks described by natural language instructions.
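A sketch of what one instruction-tuning record might look like; the field names and prompt layout below are illustrative assumptions, not a fixed standard.

```python
# Hypothetical instruction-tuning record: a natural-language instruction
# (plus optional input) paired with the target output.
record = {
    "instruction": "Translate the sentence to French.",
    "input": "The weather is nice today.",
    "output": "Il fait beau aujourd'hui.",
}

def format_record(r):
    # One common layout: instruction, then input, then the response the
    # model is trained to produce.
    return (
        f"### Instruction:\n{r['instruction']}\n\n"
        f"### Input:\n{r['input']}\n\n"
        f"### Response:\n{r['output']}"
    )

print(format_record(record))
```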
Designing effective prompts to guide LLM behavior, including Chain-of-Thought prompting.
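A sketch of a few-shot Chain-of-Thought prompt template: the worked example demonstrates step-by-step reasoning before the final answer, which encourages the model to do the same. The question text is illustrative.

```python
# Hypothetical CoT prompt template with one worked example.
cot_prompt = (
    "Q: A cafeteria had 23 apples. They used 20 and bought 6 more. "
    "How many apples do they have?\n"
    "A: They started with 23. 23 - 20 = 3. 3 + 6 = 9. The answer is 9.\n"
    "Q: {question}\n"
    "A:"
)

filled = cot_prompt.format(
    question="If I have 5 pens and buy 7 more, how many do I have?"
)
print(filled)
```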