Huang, Jimmy Xiangji; Jahan, Israt
2024-10-28; 2024-07-26
https://hdl.handle.net/10315/42384

Studying The Effectiveness Of Large Language Models In Benchmark Biomedical Tasks
Electronic Thesis or Dissertation

Abstract: Recently, Large Language Models (LLMs) have demonstrated an impressive capability to solve a wide range of tasks. However, despite this success, no prior work had systematically investigated their capability in the biomedical domain. To this end, this thesis evaluates the performance of LLMs on benchmark biomedical tasks, conducting a comprehensive evaluation of 4 popular LLMs on 6 diverse biomedical tasks across 26 datasets. Interestingly, the evaluation shows that on biomedical datasets with smaller training sets, zero-shot LLMs can even outperform current state-of-the-art models that were fine-tuned only on the training sets of those datasets. This suggests that pretraining on large text corpora can make LLMs effective even in specialized domains such as biomedicine. The findings also show that no single LLM outperforms the others on all tasks; the performance of different LLMs varies depending on the task. While their performance remains well below that of biomedical models fine-tuned on large training sets, this study demonstrates that LLMs have the potential to be a valuable tool for biomedical tasks that lack large annotated data.

Rights: Author owns copyright, except where explicitly noted. Please contact the author directly with licensing requests.

Subjects: Biology; Artificial intelligence; Bioinformatics
Keywords: Bioinformatics; Large language models; LLMs in biology; ChatGPT; Biomedical text processing tasks
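The zero-shot setting the abstract contrasts with fine-tuning can be sketched as follows. This is a minimal illustration, not the thesis's actual pipeline: the prompt template, the sentiment-style labels, the toy dataset, and the stand-in `fake_llm` function are all hypothetical; a real evaluation would query an actual LLM in place of the stub.

```python
def build_zero_shot_prompt(text: str, labels: list[str]) -> str:
    """Compose a zero-shot prompt: task instruction plus input, no training examples."""
    return (
        "Classify the following biomedical sentence as one of: "
        + ", ".join(labels)
        + ".\nSentence: " + text + "\nAnswer:"
    )

def evaluate_zero_shot(model, dataset, labels):
    """Accuracy of a model queried with zero-shot prompts (no fine-tuning involved)."""
    correct = 0
    for text, gold in dataset:
        prediction = model(build_zero_shot_prompt(text, labels)).strip().lower()
        correct += prediction == gold
    return correct / len(dataset)

# Hypothetical stand-in for an LLM API call: always answers "positive".
def fake_llm(prompt: str) -> str:
    return "positive"

# Toy illustrative dataset of (sentence, gold label) pairs.
dataset = [
    ("The drug reduced tumour size significantly.", "positive"),
    ("No adverse effects were observed.", "positive"),
    ("The treatment failed to improve survival.", "negative"),
]
accuracy = evaluate_zero_shot(fake_llm, dataset, ["positive", "negative"])
print(f"zero-shot accuracy: {accuracy:.2f}")  # prints "zero-shot accuracy: 0.67"
```

The point of the sketch is that the model sees only an instruction and the input, never the dataset's training split, which is the sense in which the abstract compares zero-shot LLMs against models fine-tuned on each dataset's training set.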