Taking a leaf from recent advances in LLMs, it is important to have a pretext task for building a base model that can be trained unsupervised, or with minimal supervision, on large datasets. On top of this base model, task-specific DMS models can then be trained on significantly smaller datasets with high-quality labels. The typical unsupervised approach is to train an autoencoder neural network (or similar). While this works well and performs better than omitting the pretraining step, in this work we propose a better unsupervised training algorithm for the base model neural network in DMS tasks. Beyond improved performance over both the vanilla autoencoder approach and training from scratch, we show that the new method also yields more interpretable models.
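To make the two-stage scheme concrete, the sketch below pre-trains a base encoder with an autoencoder reconstruction objective on unlabeled data, then fits a small task-specific head on a labeled dataset. This is a minimal illustration assuming PyTorch-style models and data loaders, not the method proposed in this work; all module and function names (Encoder, Decoder, pretrain_autoencoder, finetune_task_head) are hypothetical.

    # Hypothetical sketch: stage 1 pre-trains a base encoder as an autoencoder
    # on unlabeled data; stage 2 trains a small task head on labeled data.
    # Names and hyperparameters are illustrative only.
    import torch
    import torch.nn as nn

    class Encoder(nn.Module):
        def __init__(self, in_dim, latent_dim):
            super().__init__()
            self.net = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(),
                                     nn.Linear(256, latent_dim))
        def forward(self, x):
            return self.net(x)

    class Decoder(nn.Module):
        def __init__(self, latent_dim, out_dim):
            super().__init__()
            self.net = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                                     nn.Linear(256, out_dim))
        def forward(self, z):
            return self.net(z)

    def pretrain_autoencoder(encoder, decoder, unlabeled_loader, epochs=10):
        # Stage 1: the pretext task is reconstruction of unlabeled inputs.
        params = list(encoder.parameters()) + list(decoder.parameters())
        opt = torch.optim.Adam(params, lr=1e-3)
        loss_fn = nn.MSELoss()
        for _ in range(epochs):
            for x in unlabeled_loader:
                opt.zero_grad()
                loss = loss_fn(decoder(encoder(x)), x)
                loss.backward()
                opt.step()

    def finetune_task_head(encoder, head, labeled_loader, epochs=10):
        # Stage 2: fit a small task-specific head on top of the pretrained base.
        opt = torch.optim.Adam(head.parameters(), lr=1e-3)
        loss_fn = nn.MSELoss()
        for _ in range(epochs):
            for x, y in labeled_loader:
                with torch.no_grad():   # base model kept frozen in this sketch
                    z = encoder(x)
                opt.zero_grad()
                loss = loss_fn(head(z), y)
                loss.backward()
                opt.step()

In this sketch the base encoder is frozen during fine-tuning; in practice one could also unfreeze it and update it with a smaller learning rate, trading a larger labeled-data requirement for potentially better task performance.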