At Swinburne, our honours program emphasises independent research, with your individual research project accounting for 75% of your time, effort, and grade.

Browse honours projects and supervisors

Browse the available projects and explore the areas that align with your interests to identify the right research project for you. Once you have picked a project, then email your future supervisor to discuss next steps.

Artificial intelligence applications

Supervisor Dr Armita Zarnegar
Contact azarnegar@swinburne.edu.au
Description

This project has the potential to be expanded into a comprehensive PhD study. However, it has been scoped in a manageable way to be completed within a one-year Honours program. The initial Honours project focuses on the development of a lightweight AI agent that assists students—particularly those with ADHD—in managing their academic workload through timely reminders.

The core functionality of the Honours-level implementation will include: (1) Using one or more prompt-based interactions with a large language model (LLM) to identify upcoming assignment deadlines (2) Automatically generating and sending reminders (3) Integration with a Learning Management System (LMS) such as Canvas to extract relevant due dates.

This prototype will serve as a foundational step toward a more sophisticated AI-powered academic assistant. For students who wish to continue into a Masters or PhD program, the project can be significantly extended. At that level, the AI assistant could evolve into a multimodal, intelligent agent capable of: (4) Parsing and understanding assignment descriptions to break them into manageable subtasks (5) Summarizing lecture content and readings in accessible formats (6) Suggesting first steps and timelines based on the student’s current progress, schedule, and preferred learning style.

The extended project could also explore personalization, neurodiversity-aware UX design, and adaptive interaction models to support sustained engagement and task initiation in students with ADHD.

Further reading

Empowering Student Learning Through Artificial Intelligence: A Bibliometric Analysis

A qualitative systematic review on AI empowered self-regulated learning in higher education

Supervisor Dr Tuan Dung Lai
Contact tuandunglai@swinburne.edu.au
Description

This project involves building an intelligent virtual receptionist system capable of interacting with users in natural language and recognizing returning visitors using facial recognition. Students will integrate a large language model (such as GPT-4 or similar) into a front-end kiosk interface to handle common receptionist tasks such as greeting visitors, scheduling appointments, answering FAQs, and providing directions. The system will also use computer vision and facial recognition (e.g., via OpenCV or FaceNet) to identify known individuals and personalize the interaction.

The project covers a full-stack development pipeline: building a user-friendly interface, integrating backend logic using Python/Flask or Node.js, handling data storage with MongoDB or Firebase, and deploying the solution to a local device (e.g. Windows kiosk). The application will be tailored in real-world use cases and workflows (for example, in a nail store, hotel, or healthcare clinic). It offers students practical experience in machine learning, computer vision, prompt engineering, natural language processing, and system deployment, all highly relevant to real-world AI applications. Students will have an opportunity to communicate with real clients and discover their business needs and workflow to implement and design the receptionist.

Further reading

Brown, T. et al. (2020). Language Models are Few-Shot Learners. arXiv preprint arXiv:2005.14165.

Parkhi, O. M. et al. (2015). Deep Face Recognition [PDF 535KB]. British Machine Vision Conference (BMVC).

Supervisor Dr Matthew Mitchell
Contact mmitchell@swinburne.edu.au
Description Work with a team of researchers and surgeons on an app that collects heart rate and other data from surgeons during surgery and provides a dashboard for performance feedback and analysis. The project could contribute towards either data collection or the app design and construction (for iOS).

Language models and natural language processing

Supervisor Dr Armita Zarnegar
Contact azarnegar@swinburne.edu.au
Description

This honours project aims to evaluate and compare the effectiveness of various large language models (LLMs) combined with NLP methods in identifying topics within textual data. The project will use a mixed-methods approach, combining qualitative thematic analysis with quantitative evaluation metrics, to assess the coherence, granularity, and relevance of the topics generated by different models and methodologies. Students will explore and compare methods such as: (1) Prompt-based topic extraction using GPT-style models (e.g., GPT-4, Claude, Gemini). (2) Topic modeling approaches (e.g., LDA, BERTopic) combined with Ontologies and embedding-based techniques (e.g., sentence transformers + K-means). (3) Human-coded thematic analysis as a benchmark of research questions. (4) How do different LLMs perform in identifying key themes or topics in comparison to traditional topic modeling techniques. (5) What are the strengths and weaknesses of LLM-based topic extraction methods in various text domains (e.g., social media, academic papers, interviews). (6) How does human-coded thematic analysis compare with AI-driven topic identification? How can we effectively combine the two.

The student will begin with an existing dataset and must identify one or more additional datasets that have either been manually analyzed—such as Reddit threads, customer reviews, student feedback, or transcripts—or are suitable for manual analysis by the student. The project involves testing various large language models (LLMs) and topic modeling techniques to explore thematic topic identification. Evaluation will be conducted using coherence scores, diversity metrics, precision and recall compared to human-identified themes, and qualitative assessments of topic interpretability. The expected outcomes include a critical understanding of the effectiveness of LLMs in identifying thematic topics and recommendations for best practices in combining human and AI-driven analysis. In addition, insights into the strengths of mixed-methods research in NLP should result in quality publication.

Further reading

A survey on neural topic models: methods, applications, and challenges

Thematic-LM: A LLM-based Multi-agent System for Large-scale Thematic Analysis

Supervisor

Viet Vo

Jun Zhang

Contact vvo@swinburne.edu.au
Description

Large Language Models (LLMs), such as GPT and DeepSeek, face practical challenges including hallucinations and outdated knowledge. Retrieval-Augmented Generation (RAG) has emerged as a state-of-the-art approach to mitigate these issues. RAG constructs knowledge bases using either document databases or graph-based structures, enabling LLMs to extend their knowledge and improve response quality. However, RAG is susceptible to poisoning attacks, where malicious contributors inject targeted or untargeted documents to mislead the model’s output.

This project will investigate the practicality of poisoning attacks, specifically evaluating the required amount of poisoned text and the time/effort needed to affect various model architectures. Secondly, it will explore governance algorithms for detecting maliciously injected documents using Graph Neural Networks (GNNs). Finally, the project will design mechanisms to ensure provable and authentic responses from LLMs using cryptographic techniques and watermarking methods to protect against tampering.

Further reading

Practical Poisoning Attacks against Retrieval-Augmented Generation

Query Provenance Analysis: Efficient and Robust Defense against Query-based Black-box Attacks (IEEE S&P'25)

Security and privacy in AI systems

Supervisor Professor Vincent S Wen
Contact swen@swinburne.edu.au
Description

The rapid advancement of artificial intelligence has led to the widespread adoption of AI development platforms such as TensorFlow, PyTorch, and Hugging Face Transformers. While these platforms have significantly accelerated the development and deployment of AI models, they also introduce new and often overlooked security risks. This project is focused on identifying, analysing, and mitigating vulnerabilities that may exist within these platforms, which could be exploited to compromise model integrity, leak sensitive data, or disrupt AI-based services. The research will adopt a multi-pronged approach.

First, it will perform a comprehensive audit of the platforms’ core components and third-party dependencies using static and dynamic code analysis techniques. Second, advanced fuzz testing and symbolic execution will be applied to uncover logic flaws, unsafe memory operations, and vulnerabilities in serialization/deserialization processes, plugin mechanisms, and runtime environments. Third, the project will simulate realistic threat scenarios to assess how these vulnerabilities might be leveraged in adversarial settings, including model poisoning, privilege escalation, and remote code execution.

A key deliverable of the project will be a vulnerability taxonomy tailored to AI platforms, along with a set of best practices and security hardening guidelines for AI developers and platform maintainers. The project aims to enhance the security posture of AI ecosystems, ensuring safer adoption in high-stakes domains such as healthcare, finance, defence, and critical infrastructure.

Further reading Yinlin Deng, Chenyuan Yang, Anjiang Wei, and Lingming Zhang. 2022. Fuzzing deep-learning libraries via automated relational API inference. In Proceedings of the 30th ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering (ESEC/FSE 2022). Association for Computing Machinery, New York, NY, USA, 44–56.
Supervisor

Dr Viet Vo

Dr Jun Zhang

Contact vvo@swinburne.edu.au
Description

Graph Neural Networks (GNNs) leverage node embeddings to represent node attributes and link relationships, enabling powerful applications in node classification and link prediction. As a result, GNNs are widely adopted in graph-topology-based applications and systems, such as intrusion detection, electronic commerce, drug discovery, and social networks (particularly in contexts like COVID-19 treatment). However, the privacy of nodes—including their attributes and connections—becomes a sensitive issue during the training of GNNs, especially in the presence of external attackers and adversarial (semi-honest or malicious) insiders with centralized GNN setting. Prior studies have proposed various cryptographic schemes and differential privacy algorithms to prevent attackers from inferring such private information. Despite this, secure aggregation of GNNs in distributed settings remains largely unexplored.

Therefore, this project will first investigate the extent to which attackers can infer node information and model weights during GNN training. It then designs and evaluates efficient schemes leveraging cryptographic techniques and differential privacy to protect both node attributes and model parameters from adversaries. Finally, it will implement and evaluate the proposed solutions against existing privacy-preserving methods in GNNs.

Further reading

OblivGNN: Oblivious Inference on Transductive and Inductive Graph Neural Network (Usenix Security 24)

GAP: Differentially Private Graph Neural Networks with Aggregation Perturbation

Supervisor

Viet Vo

Jun Zhang

Contact vvo@swinburne.edu.au
Description

Reinforcement Learning (RL) is widely applied in safety-critical decision-making scenarios such as autonomous driving, agent-based movement simulations, and pilot or flight training navigation. However, RL is vulnerable to backdoor attacks, where adversaries can inject malicious triggers during training or manipulate the learned model weights to induce abnormal decision-making behavior—potentially causing vehicle collisions or drone crashes.

Although several studies have explored such attacks by exploiting the memory buffer and reward functions, these approaches may be less effective in practice due to simple defensive strategies that constantly monitor memory access and reward signals.

This project will first investigate the extent to which the memory buffer in practical RL systems can be compromised. It then designs more practical and adaptive backdoor attacks that leverage contextual information to bypass existing defenses. Lastly, it will implement and evaluate both the proposed attacks and potential defenses in simulation environments and on prototype robotic hardware.

Further reading

SleeperNets: Universal Backdoor Poisoning Attacks Against Reinforcement Learning Agents (NIPS'24)

Adversarial Inception for Bounded Backdoor Poisoning in Deep Reinforcement Learning (ICLR'25)

UNIDOOR: A Universal Framework for Action-Level Backdoor Attacks in Deep Reinforcement Learning

Supervisor research interests

Supervisor Associate Professor Ali Yavari
Contact ayavari@swinburne.edu.au
Supervisor Dr Ati Kia
Contact akia@swinburne.edu.au
Supervisor Dr Huai Liu
Contact hliu@swinburne.edu.au
Supervisor

Dr Man Lau

Contact elau@swinburne.edu.au
Supervisor Professor Prem Prakash Jayaraman
Contact pjayaraman@swinburne.edu.au
Supervisor Dr Rui Zhou
Contact rzhou@swinburne.edu.au
Supervisor Dr Sheng Wen
Contact swen@swinburne.edu.au
Supervisor

Dr Viet Vo

Contact vvo@swinburne.edu.au

General enquiries about honours

Please contact Dr Edmonds Lau, Honours Coordinator elau@swinburne.edu.au.

Interested in the Bachelor of Computer Science (Honours)?

From state-of-the-art facilities to opportunities to engage with industry – this course is designed with your future in mind. Let's get started.

Visit course page