Date Awarded

2024

Document Type

Thesis

Degree Name

Master of Science (M.Sc.)

Department

Computer Science

Advisor

Denys Poshyvanyk

Committee Member

Oscar Chaparro

Committee Member

Adwait Nadkarni

Abstract

Considerable research has explored text-to-text machine learning methods for various software engineering tasks. At the same time, contrastive learning has shown promise in other modalities, such as computer vision, and has been explored for a limited set of software engineering tasks. We demonstrate that contrastive loss, on its own, is insufficient to surpass current baselines for these tasks; however, we observe a high degree of orthogonality between the results of existing and contrastive models. We show that when our contrastive method is used as an additional transfer learning step in the training process, the resulting model captures a large portion of the overlap between the distinct models while also producing new positive results, effectively capturing the majority of the results from the distinct models and increasing overall model accuracy. By employing this method, we exceed the baseline accuracy of four software engineering tasks by varying margins, ranging from marginal (<1%) to 262% in single-beam tests, with minor improvements at selected other beam sizes, in both single- and multi-task training strategies.

DOI

https://dx.doi.org/10.21220/s2-r118-1j93

Rights

© The Author
