A PyTorch implementation of a recursion that turns out to be useful for RNN-T.
This project implements a method for faster and more memory-efficient RNN-T loss computation, called pruned RNN-T.

Note: there is also a fast RNN-T loss implementation in the k2 project, which shares the same code as this project. We make fast_rnnt a stand-alone project in case someone wants only this RNN-T loss.

How does the pruned RNN-T work?

We first obtain pruning bounds for the RNN-T recursion using a simple joiner network that is just an addition of the encoder and decoder outputs, then we […]
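The "simple" additive joiner mentioned above can be illustrated with a small shape sketch. This is not the project's actual API; the array names and sizes below are made up for illustration, and NumPy stands in for torch tensors. The point is that the trivial joiner is a broadcast sum rather than a learned network:

```python
import numpy as np

B, T, S, C = 2, 5, 3, 7                       # batch, frames, symbols, vocab size
encoder_out = np.random.randn(B, T, C)        # acoustic (encoder) output
decoder_out = np.random.randn(B, S + 1, C)    # label (decoder) output

# Trivial additive joiner:
# logits[b, t, s, :] = encoder_out[b, t, :] + decoder_out[b, s, :]
logits = encoder_out[:, :, None, :] + decoder_out[:, None, :, :]
assert logits.shape == (B, T, S + 1, C)
```

Because the joiner here is a pure addition with no nonlinearity, the recursion over the (T, S) lattice can be computed cheaply, which is what makes it suitable for obtaining pruning bounds before running the full joiner only on the pruned region.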