AI doesn’t fail at randomness. It fails at complexity.
![A screenshot from the paper: The Illusion of Thinking: Understanding the Strengths and Limitations of Reasoning Models via the Lens of Problem Complexity
It shows a composite figure illustrating how reasoning models solve the Tower of Hanoi problem and how their performance varies with problem complexity.
Top Section – LLM Response Workflow:
On the left is a code-like LLM response with a reasoning section containing a list of disk moves (e.g., [1, 0, 2], [2, 0, 1], etc.) and a final-answer section referencing the completed moves list. Arrows indicate:
Intermediate moves are extracted from the reasoning section for analysis.
The final answer is extracted from the final-answer section for measuring accuracy.
To the right, a sequence of three Tower of Hanoi diagrams represents:
Initial State: All disks stacked on peg 0.
Middle State: Disks distributed across pegs.
Target State: All disks correctly stacked on peg 2.
Each disk is color-coded and numbered for clarity.
Bottom Row – Three Line Graphs:
Left Graph: Accuracy vs. Complexity
Y-axis: Accuracy (%)
X-axis: Problem complexity (number of disks, from 1 to 20)
Two lines: Claude 3.7 (red circles) and Claude 3.7 with “thinking” mode (blue triangles).
Accuracy drops sharply for both as the number of disks increases, with “thinking” mode performing slightly better up to 8 disks.
Middle Graph: Response Length vs. Complexity
Y-axis: Token count
X-axis: Number of disks
“Thinking” responses grow rapidly in length with complexity, peaking near 8 disks.
Right Graph: Position of Error in Thought Process
Y-axis: Normalized position in the LLM’s reasoning (0 to 1)
X-axis: Complexity (1 to 15 disks)
Shows where correct vs. incorrect reasoning paths diverge; incorrect solutions typically go wrong earlier in the reasoning trace.
Background colors across all graphs denote complexity bands: yellow (easy), blue (moderate), red (hard).](https://www.ministryoftesting.com/rails/active_storage/blobs/redirect/eyJfcmFpbHMiOnsibWVzc2FnZSI6IkJBaHBBeGRrQVE9PSIsImV4cCI6bnVsbCwicHVyIjoiYmxvYl9pZCJ9fQ==--fedc637a41c8b2a7ac496d64cff3f1fb1ff08050/Screenshot%202025-06-09%20at%2016.07.56.png)
Apple just tested the smartest "reasoning" AI models out there: Claude 3.7 Sonnet, DeepSeek-R1, and OpenAI’s o1/o3.
The verdict?
They didn’t just underperform.
They **collapsed** when things got too complex.
Even when you gave them the algorithm, they couldn’t follow it.
Worse, when tasks got harder, they **reasoned less**, not more.
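For context, "the algorithm" here is the standard recursive Tower of Hanoi procedure, which fits in a few lines of code. Here is a minimal Python sketch (my own illustration, not code from the paper; the [disk, from-peg, to-peg] move format mirrors the lists visible in the screenshot above):

```python
def hanoi(n, source, target, spare, moves):
    """Append the optimal move sequence for n disks to `moves`."""
    if n == 0:
        return
    hanoi(n - 1, source, spare, target, moves)  # clear the n-1 smaller disks off the largest
    moves.append([n, source, target])           # move the largest remaining disk
    hanoi(n - 1, spare, target, source, moves)  # restack the smaller disks on top of it

moves = []
hanoi(3, 0, 2, 1, moves)   # pegs numbered 0-2, as in the paper's figure
print(len(moves), moves)   # 7 moves: [[1, 0, 2], [2, 0, 1], [1, 2, 1], [3, 0, 2], ...]
```

The procedure is completely deterministic; the only thing that grows with the number of disks is the move count (2^n − 1), which is what makes the collapse shown in the accuracy graph so striking.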
This confirms what many testers already feel in their gut:
AI looks smart until it has to think.
Because real reasoning isn’t just generating confident answers.
It’s about:
• Navigating uncertainty
• Spotting what’s missing
• Asking, “Wait, does this even make sense?”
And that’s what great testers do every day.
We don’t just validate that something works.
We question **why, how, and what** could break it next.
AI can make us more productive.
But when complexity scales, **the AI is not** the reasoning engine.
**You are.**
Original Paper: https://machinelearning.apple.com/research/illusion-of-thinking