DeepMind's newest AI can tackle geometry problems.

DeepMind, Google’s AI R&D lab, believes the key to more capable AI systems may lie in finding new ways to solve challenging geometry problems.

Today, DeepMind introduced AlphaGeometry, a system the lab says can solve as many geometry problems as a gold medalist at the International Mathematical Olympiad. AlphaGeometry, whose code was open-sourced this morning, solves 25 Olympiad geometry problems within the standard time limit, beating the previous state-of-the-art system’s 10.

“Solving Olympiad-level geometry problems is an important milestone in developing deep mathematical reasoning on the path toward more advanced and general AI systems,” Google AI researchers Trieu Trinh and Thang Luong stated this morning in a blog post. “We hope AlphaGeometry expands possibilities in mathematics, science, and AI.”

Why the emphasis on geometry? DeepMind argues that proving mathematical theorems, such as the Pythagorean theorem, requires both reasoning and the ability to choose from a range of possible steps toward a solution. If DeepMind is right, this approach to problem-solving could prove useful in general-purpose AI systems.

“Demonstrating that a particular conjecture is true or false stretches the abilities of even the most advanced AI systems today,” DeepMind told Eltrys. Proving mathematical theorems, the lab added, is a significant milestone because it demonstrates both logical reasoning and the ability to discover new knowledge.

Training an AI system to solve geometry problems is difficult, however.

Geometry training data is scarce because converting proofs into a machine-readable format is difficult. And while many cutting-edge generative AI models are good at recognizing patterns and relationships in data, they cannot reason logically through theorems.

DeepMind's solution was twofold.

In building AlphaGeometry, the lab combined a ChatGPT-like “neural language” model with a “symbolic deduction engine,” which uses rules (for example, mathematical rules) to deduce solutions to problems. Symbolic engines can be rigid and slow, especially when handling large or complex inputs. DeepMind sidestepped these challenges by having the neural model “guide” the deduction engine toward likely solutions to geometry problems.

In place of conventional training data, DeepMind generated its own: 100 million “synthetic theorems” and proofs of varying difficulty. The group then trained AlphaGeometry from scratch on this synthetic data and evaluated it on Olympiad geometry problems.
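To make the idea of synthetic theorem generation concrete, here is a minimal, hypothetical sketch in Python. It is a toy stand-in, not DeepMind's actual geometry engine: it samples random premises, forward-deduces everything that follows from a couple of simple rules, and records each derived statement together with its premises and deduction trace as one synthetic (problem, proof) example. The fact encoding and rules are invented for illustration.

```python
# Toy sketch of synthetic-data generation: sample premises, deduce to closure,
# and treat each derived fact as a "synthetic theorem" with a proof trace.
import random
from itertools import combinations

POINTS = list("ABCDEF")  # a tiny universe of labeled points

def sample_premises(n=4):
    """Randomly sample n toy premises such as ('eq', 'AB', 'CD')."""
    premises = set()
    while len(premises) < n:
        p, q, r, s = random.sample(POINTS, 4)
        premises.add(("eq", p + q, r + s))
    return premises

def deduce_step(facts):
    """One forward-deduction pass: equality is symmetric and transitive."""
    new = set()
    for kind, a, b in list(facts):
        if kind == "eq":
            new.add(("eq", b, a))                  # symmetry
    for (k1, a, b), (k2, c, d) in combinations(facts, 2):
        if k1 == k2 == "eq" and b == c:
            new.add(("eq", a, d))                  # transitivity
    return new - facts

def generate_example():
    """Return (premises, derived_theorem, proof_trace), or None if nothing new."""
    premises = sample_premises()
    facts, trace = set(premises), []
    while True:
        new = deduce_step(facts)
        if not new:
            break
        trace.append(sorted(new))  # record each deduction round as a proof step
        facts |= new
    derived = facts - premises
    if not derived:
        return None
    return premises, random.choice(sorted(derived)), trace

if __name__ == "__main__":
    random.seed(0)
    example = generate_example()
    if example:
        premises, theorem, trace = example
        print("premises:", premises)
        print("synthetic theorem:", theorem)
        print("proof steps:", len(trace))
```

Run at scale with a real geometry rule set, this kind of generate-then-record loop is how one can build a large corpus of machine-readable proofs without relying on scarce human-written ones.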

Solving Olympiad geometry problems requires adding “constructs” such as points, lines, and circles to diagrams. AlphaGeometry’s neural model predicts which constructs might be useful to add, and its symbolic engine uses those predictions to make deductions about the diagrams and work toward solutions.
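The interplay can be pictured as a simple loop: the symbolic engine deduces everything it can, and when it stalls short of the goal, the neural model proposes an auxiliary construct and deduction resumes. The Python sketch below is a hypothetical illustration of that loop under toy assumptions; the `propose_construct` stub stands in for a real language model, and the rules and fact encoding are invented for the example.

```python
# Toy sketch of neural-guided symbolic deduction: deduce to closure, and when
# stuck, ask a (stubbed) model for an auxiliary construct, then deduce again.
from typing import Callable, Optional, Set, Tuple

Fact = Tuple[str, ...]

def symbolic_closure(facts: Set[Fact], rules) -> Set[Fact]:
    """Forward-chain the deduction rules until no new facts appear."""
    facts = set(facts)
    while True:
        new = set()
        for rule in rules:
            new |= rule(facts)
        if new <= facts:
            return facts
        facts |= new

def solve(premises: Set[Fact], goal: Fact, rules,
          propose_construct: Callable[[Set[Fact]], Optional[Fact]],
          max_constructs: int = 10) -> bool:
    """Alternate symbolic deduction with model-proposed auxiliary constructs."""
    state = set(premises)
    for _ in range(max_constructs + 1):
        state = symbolic_closure(state, rules)
        if goal in state:
            return True                        # proof found
        construct = propose_construct(state)   # e.g. "add this auxiliary fact"
        if construct is None or construct in state:
            return False                       # nothing new to try
        state.add(construct)
    return False

if __name__ == "__main__":
    # Demo: equality is transitive; the "model" (a stub, not a real network)
    # supplies the missing auxiliary fact that unlocks the proof.
    def transitivity(facts):
        return {("eq", a, d)
                for (k1, a, b) in facts for (k2, c, d) in facts
                if k1 == "eq" and k2 == "eq" and b == c}

    premises = {("eq", "AB", "CD")}
    goal = ("eq", "AB", "EF")
    stub_model = lambda state: ("eq", "CD", "EF")   # placeholder proposal
    print(solve(premises, goal, [transitivity], stub_model))  # prints True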

Because it was trained on so many proofs, Trinh and Luong say, AlphaGeometry’s language model can propose useful new constructs for Olympiad geometry problems. “One system generates quick, ‘intuitive’ ideas, while the other makes rational decisions.”

AlphaGeometry’s problem-solving results, published in Nature this week, may fuel the long-running debate over whether AI systems should be built on symbol manipulation (using rules to manipulate symbols that represent knowledge) or brain-like neural networks.

Proponents of neural networks argue that intelligent behavior, from speech recognition to image generation, can emerge from enormous amounts of data and compute. Unlike symbolic systems, which solve tasks by defining sets of symbol-manipulating rules dedicated to particular jobs (such as editing a line of text in word-processing software), neural networks attempt to solve tasks through statistical approximation and learning from examples.

Neural networks underpin powerful AI systems like OpenAI’s DALL-E 3 and GPT-4. But advocates of symbolic AI argue that symbolic systems may be better equipped to efficiently encode the world’s knowledge, reason through difficult scenarios, and “explain” how they arrive at their answers.

AlphaGeometry, a hybrid symbolic-neural network system in the vein of DeepMind’s AlphaFold 2 and AlphaGo, may show that combining symbol manipulation and neural networks is one of the most promising paths in the search for generalizable AI. Perhaps.

“Our long-term goal remains to build AI systems that can generalize across mathematical fields, developing the sophisticated problem-solving and reasoning that general AI systems will depend on, all the while extending human knowledge,” Trinh and Luong write. “This approach could shape how future AI systems discover math and other knowledge.”

Author: Eltrys Team
