Gathering of Talents
Since its establishment in 1956, HKBU has nurtured over 178,000 graduates, many of whom have achieved remarkable success in a wide range of fields. To broaden our alumni's horizons, we seek out alumni from different professional fields and invite them to share their expertise.
Sharing by alumnus Dr Kexin Pei
Building Trustworthy AI for a Safer Digital World
Alumnus Dr Kexin Pei
Alumnus Dr Kexin Pei (Computer Science) is currently the Neubauer Family Assistant Professor in the Department of Computer Science at the University of Chicago. He discovered his passion for research during his studies at HKBU. After graduation, he completed his PhD at Columbia University in New York City, USA.
His research focuses on security, software engineering, and machine learning. His work centres on data-driven program analysis and on developing machine learning models that understand program structure and behaviour, enabling accurate and efficient detection and repair of software defects and vulnerabilities.
His research has won numerous honours, including a Best Paper Award at the Symposium on Operating Systems Principles, a Distinguished Artifact Award, a Research Highlight in Communications of the Association for Computing Machinery, and runner-up in the Cyber Security Awareness Week applied research competition. He has also collaborated with Google DeepMind's Learning for Code team to develop cutting-edge program analysis tools powered by large language models.
(The interview is available in English only)
1. Looking back at your time at HKBU, what experiences or mentors inspired you to pursue research in computer science? Was there a defining moment or experience that led you to focus on it? What advice would you give to students and early-career professionals aspiring to pursue a career in research or academia?
I was very fortunate to join the summer research programme working with Professor Haibo Hu, Professor Jianliang Xu, and Professor Xiaowen Chu. While working in the database group led by Prof Hu and Prof Xu, I collaborated closely with the PhD and MPhil students in the group, and also received close supervision from the faculty, particularly Prof Haibo Hu.
The defining moment or experience for me was when I independently developed a new algorithm that improved the performance of the system we implemented by orders of magnitude. It was an extremely rewarding experience that made me realise I am truly passionate about research.
The research resources at HKBU are abundant, and transdisciplinary learning is more than a trend. In fact, research opportunities nowadays often arise at the intersection of multiple disciplines. Fresh perspectives from other disciplines can pull researchers out of a mindset that has grown too accustomed to a single field and risks becoming an "echo chamber". I have found it extremely effective to bring insights from one field into another, where one discipline often becomes a "disruptive technology" for the other. In particular, my research has always sat at the intersection of two camps in AI, neural and symbolic, both of which have complementary attributes critical to building a trustworthy system. These two camps historically flourished in different fields. Bridging these two philosophies across disciplines has led my research to impactful findings and results.
I advise students to keep exploring research opportunities with various faculty members, remain curious about solving hard problems, stay open-minded about broad disciplines, and explore opportunities at their intersections.
After graduating from HKBU, Dr Pei continued his studies and completed his PhD at Columbia University in New York City, USA.
2. Your research bridges security, software engineering, and machine learning. What inspired you to focus on this intersection? Can you explain, in simple terms, how machine learning can “reason” about program structure and behavior? And what real-world problems are you hoping to solve?
Modern software systems have grown so complex that it’s very challenging to fully understand and ensure their security and reliability, especially as they scale. Traditionally, computer scientists have used strict, rule-based methods to analyse software, but these approaches can struggle with things like natural language (for example, comments written by developers) and often don’t handle large, complicated programs well.
Machine learning offers a different way to tackle these challenges. While it can be less precise or harder to explain compared to traditional methods, it can handle ambiguity and scale much better. My research is about finding ways to combine both approaches—using the strengths of machine learning and traditional reasoning methods—to solve important problems in areas like autonomous vehicles, medical diagnosis, robotics, and systems that connect computers with the real world.
To put it simply, machine learning helps us “reason” about software in a way that’s similar to how humans do. For instance, when people read a piece of code, they don’t always follow every detail; instead, they use intuition, look at key operations, and read comments to get a general sense of what the code does. Machine learning can imitate this process, analysing both the structure of the program and any natural language hints to make guesses about its behaviour.
The real challenge is finding the right balance between precise, “rule-based reasoning” and the more flexible, “intuitive reasoning” that machine learning provides. By integrating both reasoning approaches, we hope to make software safer and more reliable, ultimately helping solve real-world problems where security and accuracy are critical.
With strong code reasoning capabilities, machine learning models can quickly identify vulnerabilities in large-scale software, analyse the malicious behaviour of malware, and even fix issues without expensive human expert intervention. This can greatly improve the security and reliability of the software that now permeates our daily lives.
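The interplay described above can be made concrete with a deliberately simplified sketch. Everything below is invented for illustration (the rule set, the function names, and the scoring are not from Dr Pei's work): a precise, symbolic pass walks a Python AST to flag calls that violate a fixed rule, while a crude heuristic score stands in for a learned model weighing fuzzy natural-language hints, such as developer comments, that strict rules would ignore.

```python
import ast

# Hypothetical sketch of a hybrid "rule-based + intuitive" bug finder.
# The rule below (flagging eval/exec) and the comment-based heuristic are
# illustrative assumptions, not a real analysis tool.

RISKY_CALLS = {"eval", "exec"}  # assumed rule: dynamic code execution is suspect

def symbolic_findings(source: str) -> list:
    """Precise, rule-based pass: return line numbers calling eval/exec."""
    tree = ast.parse(source)
    lines = []
    for node in ast.walk(tree):
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Name)
                and node.func.id in RISKY_CALLS):
            lines.append(node.lineno)
    return lines

def intuition_score(source: str) -> float:
    """Fuzzy pass: a toy stand-in for a learned model, counting comments
    with hints like 'TODO' or 'hack' and mapping them to a [0, 1] score."""
    hints = sum(1 for line in source.splitlines()
                if "#" in line
                and any(w in line.lower() for w in ("todo", "hack")))
    return min(1.0, hints / 5.0)

def analyse(source: str) -> dict:
    """Combine both passes: exact findings plus a soft overall risk score."""
    return {"risky_lines": symbolic_findings(source),
            "heuristic_risk": intuition_score(source)}

snippet = '''
# TODO: hack, sanitise user_input before this
result = eval(user_input)
'''
report = analyse(snippet)
```

A real neuro-symbolic analyser would replace `intuition_score` with a trained model over code and comments, but the division of labour is the same: the symbolic pass contributes precision, the learned pass contributes tolerance for ambiguity and scale.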
3. AI is becoming increasingly integrated into our lives. If you could imagine the “AI of the future,” what would it look like? What role do you see human creativity and reasoning playing alongside AI-driven tools in the future of coding and research? Finally, what excites you most — and what concerns you most — about the direction AI research is heading?
I imagine that AI will significantly increase human productivity in solving problems and creating new things. The difficulty of many tasks will be redefined, and the demand for many services will grow because of increased accessibility, rather than being replaced or diminished. Research and coding will be significantly accelerated, but the ability to vet and validate AI-generated outputs will become even more critical.
What excites me most is the opportunity to build trustworthy AI systems that can make safe decisions, ensuring that the entire digital infrastructure empowering our world is reliable and secure. This is also what concerns me most: how can we ensure AI systems are trustworthy, responsible, secure, and reliable?
In the future, I believe that researchers and developers will be most valuable not for writing every line of code themselves, but for knowing how to think creatively with AI, clearly explain what they want systems to do, and make good decisions about design, safety, and real-world impact. As AI takes over routine tasks, human strengths like problem-solving, communication, critical thinking, and understanding how complex systems fit together become even more important.
Students can prepare by learning the basics of coding and AI tools, practising how to collaborate with AI rather than relying on it, and building strong foundations in reasoning, ethics, and domain knowledge. In short: the future belongs to people who can work with AI thoughtfully, not just use it.




