Introduction
The TAILab is a pioneering research center dedicated to advancing the frontiers of artificial intelligence in a manner that is ethically responsible, transparent, and aligned with societal values. At the heart of the lab's mission is the recognition that AI, while immensely powerful, must be developed and deployed in ways that earn and maintain public trust. This commitment to "Trustworthy AI" is twofold, encompassing both ethical AI and explainable AI.
Through our dedicated research in natural language processing, causality AI, and neural-symbolic AI, our lab is committed to advancing the field of AI toward a future where machines are not just intelligent but also trustworthy and comprehensible to their human counterparts. Our work symbolizes a significant step towards realizing AI systems that are ethically sound, transparent, and explainable, thereby aligning with the broader goal of responsible AI development.
Natural Language Processing
Natural Language Processing (NLP), a central research focus, empowers computers to effectively comprehend, interpret, and interact with human language. Our NLP research has two main thrusts. The first is creating AI models that discern and understand factual information in textual data; this capability is essential for tasks such as fact-checking, information extraction, knowledge base population, and knowledge base completion, where accurately identifying and interpreting factual content is crucial. The second is developing AI systems that go beyond surface-level language processing to grasp nuance, context, and the complexities of human language, including idioms, sarcasm, cultural references, and commonsense reasoning.
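To illustrate the knowledge-base-population idea in miniature, the sketch below uses a single hand-written pattern to pull (subject, relation, object) triples out of free text. This is a hypothetical toy, not one of our actual models: real systems learn such extractors from data, but the input/output shape is the same.

```python
import re

# Toy pattern-based fact extractor: pulls (subject, relation, object)
# triples from "X is the capital of Y"-style sentences. Learned models
# replace the regex in practice; only the KB-population idea is shown.
PATTERN = re.compile(r"(\w[\w ]*?) is the capital of (\w[\w ]*)\.")

def extract_capital_facts(text: str) -> list[tuple[str, str, str]]:
    """Return (subject, relation, object) triples found in the text."""
    return [(city.strip(), "capital_of", country.strip())
            for city, country in PATTERN.findall(text)]

facts = extract_capital_facts(
    "Paris is the capital of France. Seoul is the capital of South Korea."
)
```

Each extracted triple can then be checked against, or inserted into, a knowledge base, which is exactly the fact-checking and KB-completion setting described above.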
Causality Artificial Intelligence
In the realm of Causality AI, our lab studies how AI can mimic human-like reasoning by identifying cause-and-effect relationships within data. This approach transcends traditional correlation-based AI models, offering a deeper understanding of how variables interact in complex systems. By integrating causal inference into AI, we aim to:
- Enhance Decision-Making: Equip AI systems with the ability to make decisions based on causal relationships, leading to more robust and reliable outcomes.
- Improve Transparency: Facilitate a clearer understanding of AI decision pathways, thus demystifying AI operations for end-users.
- Address Bias and Fairness: Identify and mitigate biases in AI systems by understanding the underlying causal mechanisms.
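A concrete example of why causal reasoning beats raw correlation is the classic kidney-stone dataset (Charig et al., 1986), where stone size confounds both treatment choice and recovery. The sketch below (an illustrative example, not our research code) contrasts the naive pooled recovery rate with the backdoor-adjusted estimate P(Y=1 | do(T=t)) = Σ_z P(Y=1 | T=t, Z=z) P(Z=z); adjusting for the confounder reverses which treatment looks better.

```python
# Kidney-stone data: stone size Z confounds treatment T and recovery Y.
# counts[z][t] = (recoveries, patients)
counts = {
    "small": {"A": (81, 87),   "B": (234, 270)},
    "large": {"A": (192, 263), "B": (55, 80)},
}

def naive_recovery(t: str) -> float:
    """Pooled (correlational) recovery rate P(Y=1 | T=t)."""
    rec = sum(counts[z][t][0] for z in counts)
    n = sum(counts[z][t][1] for z in counts)
    return rec / n

def adjusted_recovery(t: str) -> float:
    """Backdoor adjustment: P(Y=1 | do(T=t)) = sum_z P(Y=1|T=t,Z=z) P(Z=z)."""
    total = sum(counts[z][u][1] for z in counts for u in counts[z])
    result = 0.0
    for z in counts:
        p_z = sum(counts[z][u][1] for u in counts[z]) / total
        rec, n = counts[z][t]
        result += (rec / n) * p_z
    return result
```

Here the naive rates favor treatment B, while the causally adjusted rates favor treatment A in both strata: a correlation-only model would recommend the worse treatment.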
Neural-Symbolic Artificial Intelligence
Neural-Symbolic AI, another key area of our research, seeks to combine the learning capabilities of neural networks with the interpretability of symbolic AI. This hybrid approach aims to leverage the strengths of both neural networks (for pattern recognition and data handling) and symbolic AI (for logical reasoning and rule-based processing). Our objectives in this area include:
- Bridging Deep Learning and Symbolic Reasoning: Creating AI models that not only learn from data but also understand and manipulate symbolic representations.
- Enhancing Explainability: By integrating symbolic reasoning, we make AI's decision-making process more understandable to humans.
- Facilitating Complex Problem-Solving: Enabling AI to handle more complex, abstract tasks that require a blend of data-driven insights and logical reasoning.
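The hybrid pattern above can be sketched in a few lines: a "neural" component emits soft attribute confidences, and a symbolic forward-chaining rule engine derives new facts from the thresholded predictions. All names and rules here are hypothetical stand-ins chosen for illustration, not our actual architecture.

```python
# Minimal neural-symbolic sketch: soft neural scores -> symbolic inference.

def neural_scores(image_id: str) -> dict[str, float]:
    # Stand-in for a trained network; returns attribute confidences.
    return {"has_four_equal_sides": 0.95, "has_right_angles": 0.91}

# (premises, conclusion) Horn rules applied by forward chaining.
RULES = [
    ({"has_four_equal_sides", "has_right_angles"}, "square"),
    ({"square"}, "rectangle"),
]

def infer(scores: dict[str, float], threshold: float = 0.5) -> set[str]:
    """Threshold neural outputs into facts, then close them under RULES."""
    facts = {attr for attr, p in scores.items() if p >= threshold}
    changed = True
    while changed:
        changed = False
        for premises, conclusion in RULES:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

derived = infer(neural_scores("img_001"))
```

Because every derived fact traces back to an explicit rule and a thresholded prediction, the chain of reasoning (square, therefore rectangle) is directly inspectable, which is the explainability benefit the symbolic layer provides.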
Latest News
2023/12/01 Opening Ceremony of the TAILab
Trustworthy AI Lab
Tel : +82-10-7506-5277
E-mail : inow3555@knu.ac.kr
Address : 80 Daehak-ro, Buk-gu, Daegu, 41566, Republic of Korea