Abstract
Deep learning models have advanced rapidly, leading to claims that they now match or exceed human performance. However, such claims are often based on closed-set conditions with fixed labels and extensive supervised training, and they fail to consider differences between the two systems. Recent findings also indicate that some models align more closely with human categorisation behaviour, whereas other studies argue that even highly accurate models diverge from it. Following principles from comparative psychology and imposing similar constraints on both systems, this study investigates whether these models can achieve human-level accuracy and human-like categorisation through three experiments using subsets of the ObjectNet dataset. Experiment 1 examined performance under varying presentation times and task complexities, showing that while recent models can match or exceed humans under conditions optimised for machines, they struggle to generalise to certain real-world categories without fine-tuning or task-specific zero-shot classification. Experiment 2 tested the prediction that human performance remains stable when shifting from N-way categorisation to a free-naming task, whereas machine performance declines without fine-tuning; the results supported this prediction. Additional analyses separated detection from classification, showing that object isolation improved performance for both humans and machines. Experiment 3 investigated individual differences in human performance and whether models capture the qualitative ordinal relationships characterising human categorisation behaviour; only the multimodal CoCa model achieved this. These findings clarify the extent to which current models approximate human categorisation behaviour beyond mere accuracy, and they highlight the importance of incorporating principles from comparative psychology while accounting for individual differences.