Grounded Compositional Outputs for Adaptive Language Modeling
Language models have emerged as a central component across NLP, and a great deal of progress depends on the ability to cheaply adapt them (e.g., through finetuning) to new domains and tasks. A language model’s vocabulary—typically selected before training and permanently fixed later—affects its size and is part of what makes it resistant to such adaptation. Prior work has used compositional input embeddings based on surface forms to ameliorate this issue. In this work, we go one step beyond and […]
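For context on the kind of compositional input embedding the abstract refers to, here is a minimal sketch (assuming PyTorch) in the spirit of fastText-style subword composition: a word's input vector is built from hashed character n-grams of its surface form, so the embedding layer's size does not depend on a fixed training vocabulary. The class name, dimensions, and hashing scheme below are illustrative assumptions, not the method introduced in this paper.

# A minimal sketch of a compositional input embedding: a word's vector is
# composed from hashed character n-grams of its surface form, so the
# parameter count does not grow with the word vocabulary.
# All names, sizes, and the hashing scheme are illustrative assumptions.
import torch
import torch.nn as nn

class CharNgramEmbedding(nn.Module):
    def __init__(self, dim=128, num_buckets=100_000, n_min=3, n_max=5):
        super().__init__()
        # Fixed-size table of n-gram vectors, shared across all words.
        self.table = nn.Embedding(num_buckets, dim)
        self.num_buckets = num_buckets
        self.n_min, self.n_max = n_min, n_max

    def _ngram_ids(self, word):
        # Hash each character n-gram of the boundary-padded surface form
        # into a fixed bucket space (note: Python's str hash is per-process).
        padded = f"<{word}>"
        grams = [padded[i:i + n]
                 for n in range(self.n_min, self.n_max + 1)
                 for i in range(len(padded) - n + 1)]
        ids = [hash(g) % self.num_buckets for g in grams] or [0]
        return torch.tensor(ids, dtype=torch.long)

    def forward(self, word):
        # Average the n-gram vectors to obtain the word's input embedding.
        return self.table(self._ngram_ids(word)).mean(dim=0)

emb = CharNgramEmbedding()
vec = emb("finetuning")   # works for any surface form, including unseen words
print(vec.shape)          # torch.Size([128])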