The BUddy Post – HKBU Alumni Affairs Office

December 2025

People・Discovery

HKBU has nurtured over 178,000 graduates since its inception in 1956, and numerous alumni have made notable achievements in diverse areas and professions. In this connection, we are pleased to introduce alumni from different professions and invite them to share their knowledge discovery with us.

Sharing by Alumnus Dr Pei Kexin

Building Trustworthy AI for a Safer Digital World

Alumnus Dr Pei Kexin

Alumnus Dr Pei Kexin (Computing Studies) currently serves as a Neubauer Family Assistant Professor of Computer Science at the University of Chicago. He discovered his passion for research during his studies at HKBU. After graduating from HKBU, he completed his PhD at Columbia University in New York City, the United States.

Today, his research sits at the exciting intersection of security, software engineering, and machine learning. His work focuses on data-driven program analysis and developing machine learning models that can reason about program structure and behaviour, enabling precise and efficient detection and fixing of software bugs and vulnerabilities.

His work has earned him numerous accolades, including the Best Paper Award at the Symposium on Operating Systems Principles (SOSP), a Distinguished Artifact Award, recognition as a Communications of the Association for Computing Machinery (CACM) Research Highlight, and Runner-Up in the Cyber Security Awareness Week (CSAW) Applied Research Competition. He has also collaborated with the Learning for Code team at Google DeepMind, building cutting-edge program analysis tools powered by large language models.

1. Looking back at your time at HKBU, what experiences or mentors inspired you to pursue research in computer science? Was there a defining moment or experience that led you to focus on it? What advice would you give to students and early-career professionals aspiring to pursue a career in research or academia?

I was very fortunate to join the summer research programme working with Professor Haibo Hu, Professor Jianliang Xu, and Professor Xiaowen Chu. While working in the database group led by Prof Hu and Prof Xu, I collaborated closely with the PhD and MPhil students in the group, and also received close supervision from the faculty, particularly Prof Haibo Hu.

The defining moment or experience for me was when I independently developed a new algorithm that improved the performance of the system we implemented by orders of magnitude. It was an extremely rewarding experience that made me realise I am truly passionate about research.

The research resources at HKBU are abundant, and transdisciplinary learning is more than a trend. In fact, research opportunities nowadays often arise at the intersection of multiple disciplines. Fresh perspectives from other disciplines can pull a mindset out of the “echo chamber” that forms when one is too accustomed to a single field. I have found it extremely effective to bring insights from one field into another, where one discipline often acts as a “disruptive technology” for the other. In particular, my research has always sat at the intersection of two camps in AI, the neural and the symbolic, which have complementary attributes critical to building trustworthy systems and which historically flourished in different fields. Bridging these two philosophies across disciplines has led to impactful findings and results.

I advise students to keep exploring research opportunities with various faculty members, remain curious about solving hard problems, stay open-minded about broad disciplines, and explore opportunities at their intersections.

After graduating from HKBU, Dr Pei continued his pursuit of knowledge and went on to earn a PhD at Columbia University in New York City, the United States.

2. Your research bridges security, software engineering, and machine learning. What inspired you to focus on this intersection? Can you explain, in simple terms, how machine learning can “reason” about program structure and behavior? And what real-world problems are you hoping to solve?

Modern software systems have grown so complex that it’s very challenging to fully understand and ensure their security and reliability, especially as they scale. Traditionally, computer scientists have used strict, rule-based methods to analyse software, but these approaches can struggle with things like natural language (for example, comments written by developers) and often don’t handle large, complicated programs well.

Machine learning offers a different way to tackle these challenges. While it can be less precise or harder to explain compared to traditional methods, it can handle ambiguity and scale much better. My research is about finding ways to combine both approaches—using the strengths of machine learning and traditional reasoning methods—to solve important problems in areas like autonomous vehicles, medical diagnosis, robotics, and systems that connect computers with the real world.

To put it simply, machine learning helps us “reason” about software in a way that’s similar to how humans do. For instance, when people read a piece of code, they don’t always follow every detail; instead, they use intuition, look at key operations, and read comments to get a general sense of what the code does. Machine learning can imitate this process, analysing both the structure of the program and any natural language hints to make guesses about its behaviour.

The real challenge is finding the right balance between precise, “rule-based reasoning” and the more flexible, “intuitive reasoning” that machine learning provides. By integrating both reasoning approaches, we hope to make software safer and more reliable, ultimately helping solve real-world problems where security and accuracy are critical.

With strong code reasoning capabilities, machine learning models can quickly identify vulnerabilities in large-scale software, analyse the malicious behaviours of malware, and even fix flaws without expensive human expert intervention. This can greatly improve the security and reliability of the software that now permeates our daily lives.
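The neuro-symbolic combination described above can be illustrated with a deliberately simplified sketch (not Dr Pei's actual tools): a precise, rule-based check walks a program's syntax tree to flag known-dangerous calls, while a heuristic score stands in for the "intuitive" signal a trained model would provide, here just keyword matching on comments. The function names, risky-call list, and keyword hints are all illustrative assumptions.

```python
import ast

# Symbolic side: a precise rule, here a fixed list of dangerous calls.
RISKY_CALLS = {"eval", "exec"}

def rule_based_flags(source: str) -> list:
    """Walk the AST and flag calls to known-risky functions."""
    flags = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
            if node.func.id in RISKY_CALLS:
                flags.append(node.func.id)
    return flags

def heuristic_score(source: str) -> float:
    """'Intuitive' side: comments hinting at untrusted data raise suspicion.
    A real system would use a learned model instead of keyword matching."""
    hints = ("user input", "untrusted", "unsanitized")
    return sum(h in source.lower() for h in hints) / len(hints)

def assess(source: str) -> str:
    """Combine both signals: precise rule hits plus heuristic suspicion."""
    flagged = rule_based_flags(source)
    score = heuristic_score(source)
    if flagged and score > 0:
        return "high risk"
    if flagged or score > 0:
        return "review"
    return "low risk"

snippet = """
# takes unsanitized user input
result = eval(expr)
"""
print(assess(snippet))  # -> high risk
```

The point of the toy is the division of labour: the AST walk is exact but narrow, the heuristic is broad but fuzzy, and only their combination captures both the structure of the program and the natural-language hints around it.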


During the interview, alumnus Pei expresses his gratitude for HKBU’s “whole-person education” philosophy, which encouraged him to pursue activities beneficial to both body and mind, such as playing basketball and the piano, in addition to his academic studies.

3. AI is becoming increasingly integrated into our lives. If you could imagine the “AI of the future,” what would it look like? What role do you see human creativity and reasoning playing alongside AI-driven tools in the future of coding and research? Finally, what excites you most — and what concerns you most — about the direction AI research is heading?

I imagine that AI will significantly increase human productivity in problem-solving and in creating new things. The difficulty of many tasks will be redefined, and the demand for many services will grow because of increasing accessibility, rather than being replaced or diminished. Research and coding will be significantly accelerated, but the ability to vet and validate AI-generated outputs will become even more critical.

What excites me most is the opportunity to build trustworthy AI systems that can make safe decisions, ensuring that the entire digital infrastructure empowering our world is reliable and secure. This is also what concerns me most: how can we ensure AI systems are trustworthy, responsible, secure, and reliable?

In the future, I believe that researchers and developers will be most valuable not for writing every line of code themselves, but for knowing how to think creatively with AI, clearly explain what they want systems to do, and make good decisions about design, safety, and real-world impact. As AI takes over routine tasks, human strengths like problem-solving, communication, critical thinking, and understanding how complex systems fit together become even more important.

Students can prepare by learning the basics of coding and AI tools, practising how to collaborate with AI rather than rely on it, and building strong foundations in reasoning, ethics, and domain knowledge. In short: the future belongs to people who can work with AI thoughtfully, not just use it.