This research group consists of interdisciplinary researchers who are interested in exploring various research topics related to large language models and vision language models.

There are four subgroups focusing on different aspects of research on large language models (LLMs) and vision language models (VLMs):
 
  1. Foundation models advancement (FMA)
    The FMA subgroup focuses on addressing the limitations of existing LLMs/VLMs, providing justifications for LLM results, and improving the security and privacy of LLMs/VLMs.

    Subgroup Leader: Jeff Heflin, Computer Science and Engineering, RCEAS
     
  2. Human-centric LLM (HCL)
    The HCL subgroup focuses on understanding how well LLMs/VLMs mimic human behaviors, how to address potential biases that arise when they are used to mimic human behaviors, and how to mitigate misinformation generated by LLMs/VLMs.
     
    Subgroup Leader: Rebecca Wang, Marketing, College of Business
     
  3. LLM for CPS / Robotics (LCPS)
    The LCPS subgroup focuses on how to apply LLMs/VLMs to cyber-physical systems and robots with formal guarantees.

    Subgroup Leader: Parv Venkitasubramaniam, Electrical and Computer Engineering, RCEAS
     
  4. LLM for healthcare (LLH)
    The LLH subgroup focuses on how to responsibly apply LLMs/VLMs in healthcare domains, which includes ensuring that such models are fair, scalable, and accurate.

    Subgroup Leader: Mooi Choo Chuah, Computer Science and Engineering, RCEAS