Architecture, Not Stupidity | Peter Hughes
Every day, a familiar screenshot circulates. Someone asks a language model how many times the letter "a" appears in the word "orange", or whether they should walk to a car wash 100 metres away. The response seems wrong. The conclusion follows quickly: AI has no common sense.

These examples are not evidence of stupidity. They are evidence of architecture.

The two-sentence explanation

Large language models do not "see" words as letters. They process tokens and predict the next token, so counting letters or inferring an unstated constraint can fail unless the question forces explicit reasoning.

The mechanics

Most modern language models are built on the Transformer architecture. They take a sequence of tokens, convert those tokens into vector embeddings, and use attention mechanisms to model relationships across the sequence. From there, they generate a probability distribution over the next token. This produces fluent, coherent language, but it is not the same as executing a deterministic procedure on characters.

Tokenisation typically uses subword methods such as Byte Pair Encoding. A word like "orange" is commonly represented as a single token rather than six individual characters. When you ask for a letter count, you are requesting a character-level operation, while the model is operating over token embeddings. It can often simulate the correct reasoning steps, but it is not inherently running a loop over characters.

The "car wash" example illustrates a similar boundary. The model sees a short distance and predicts based on patterns that associate short distances with walking. Humans infer an unstated constraint: a car wash requires the car to be present. That inference depends on embodied world knowledge and goal modelling, which language models approximate only indirectly through patterns in text.

Why does this improve in newer systems?

Techniques such as reinforcement learning from human feedback encourage models to follow instructions more reliably. Prompti...
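The gap between a character-level operation and a token view can be made concrete with a minimal sketch. The vocabulary and token IDs below are invented for illustration; real BPE vocabularies are learned from data and far larger.

```python
def count_letter(word: str, letter: str) -> int:
    """Deterministic character-level loop: the operation the user expects."""
    return sum(1 for ch in word if ch == letter)

# Hypothetical subword vocabulary. Here "orange" maps to a single token ID,
# so the model never directly observes its individual characters.
toy_vocab = {"orange": 4021, "or": 310, "ange": 877}

def tokenize(word: str) -> list[int]:
    """Greedy longest-match lookup, a crude stand-in for real BPE merging."""
    tokens, i = [], 0
    while i < len(word):
        for j in range(len(word), i, -1):
            if word[i:j] in toy_vocab:
                tokens.append(toy_vocab[word[i:j]])
                i = j
                break
        else:
            raise ValueError(f"no token for {word[i]!r}")
    return tokens

print(count_letter("orange", "a"))  # character-level answer: 1
print(tokenize("orange"))           # what the model sees: [4021]
```

The counting function iterates over characters; the tokenizer returns an opaque integer. A model trained on token IDs has to reconstruct spelling from statistical associations rather than read it off directly.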
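The final step of generation, turning scores into a probability distribution over the next token, can also be sketched. The candidate tokens and logit values below are made up; in a real model the distribution spans tens of thousands of tokens.

```python
import math

def softmax(logits: list[float]) -> list[float]:
    """Convert raw scores into a probability distribution (numerically stable)."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical next-token candidates after a prompt about a short trip.
vocab = ["walk", "drive", "cycle"]
logits = [2.0, 0.5, -1.0]  # invented scores from the network

probs = softmax(logits)
for token, p in zip(vocab, probs):
    print(f"{token}: {p:.2f}")
```

Under these invented scores, "walk" dominates simply because short distances and walking co-occur in training text. Nothing in this step checks the unstated constraint that the car must come along; the distribution only reflects learned associations.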