How, you may wonder, could wire-twined machines act as mathematicians? I am not referring to those precise mechanical robots commanded by people, but to the field of Machine Learning, where humans train a machine to discover on its own how to perform a variety of complex tasks without explicit programming. The more general umbrella term is Artificial Intelligence, a subject under heated discussion in the media these days, one that mathematical researchers have so far been relatively uninterested in eavesdropping on.
Early A.I. researchers aspired to computers with human-level intelligence in computation and logical reasoning. Perhaps the most direct line of research toward making machines think like a mathematician is the school of symbolic A.I., in which knowledge is formally encoded as symbols and rules for logical deduction or induction. Scientists endeavor to design symbolic models capable of automated reasoning, which could prove theorems and solve problems even beyond what mathematicians can do. Most importantly, symbolic models are logically transparent and apt for validation: they are built on logic and reasoning, the quintessence of mathematics.
Undoubtedly, A.I. has seen tremendous success in practice in recent years. It has become the state-of-the-art technology for a wide variety of applications such as pattern recognition and game intelligence; not to mention the popular chatbot ChatGPT, which reached one hundred million users just two months after launching, and AlphaGo, the first program to defeat a professional human world champion at the game of Go. But very few models in these success stories, if any, come with a logical "why"—evidently the models have learned mathematical knowledge, but the logical rules are hidden inside a black box. How to unveil the logical side of A.I. is the next prominent question, and it presents an opportunity for mathematicians to make a difference in this latest transformative force.