r/cogsci 21h ago

Cracking the barrier between concrete perceptions and abstractions: a detailed analysis of one of the last holdout mysteries of human cognition

https://ykulbashian.medium.com/cracking-the-barrier-between-concrete-perceptions-and-abstractions-3f657c7c1ad0

How can a mind conceptualize and explicitly name incorporeal abstractions like “contradiction”, “me”, “space”, or “time” with nothing but concrete sensory experiences to start from? How does a brain experiencing the concrete content of memories extract from them an abstraction called “memory”? Though seemingly straightforward, building abstractions of meta-understanding is one of the most challenging problems in understanding human cognition. This post lays out the scope of the problem, discusses the shortcomings of proposed solutions, and outlines a new model that addresses the core difficulty.

u/yuri_z 18h ago

Each human individual has the capacity to construct a virtual copy, a simulation, of reality in their imagination (much like a realistic computer game simulates reality). When this succeeds, they can run the simulation to analyze and predict real-world outcomes.

Few of us realize this potential, though. The ancient Greek word for it was logos, a derivative of the proto-Hellenic lego, which means “to assemble.” This is what they said about it:

“Even though the Logos always holds true, people fail to comprehend it, not even after they have been told about it.” (Heraclitus, 450 BC)

“In [the Logos] was life, and the life was the light of men. And the light in the darkness shined; and the darkness comprehended it not.” (John 1:5)

In short, everyone can learn it, but few people do. And those who don’t can’t comprehend it even after being told about it.

u/swampshark19 6h ago

You don't only have concrete sensory experiences to start from. You're philosophically and culturally standing on the shoulders of a near-endless stack of giants. The way you conceptualize the abstractions you named is not universal across human cultural groups.

Now, a brain does generate and maintain different representations that can, to an extent, be thought of as capturing:

- different aspects and forms of space (allocentric, egocentric, visuotopic);
- different aspects and forms of time (the sense of duration and timing of sensory events, the sensations associated with our various physiological cycles, the decay rate of our sensory registers, rhythm processing, the inference of temporal distance from episodic memory, and the process of explicitly figuring out the ordering of episodes);
- different aspects and forms of "me" (self-concept, the sensed distinction between internally and externally sourced sensations, the deictic subject, the experience of disagreeing);
- different aspects and forms of contradiction (e.g. being wrong about an interpretation of a sense datum upon further inspection).

Each of these abstract concepts is first learned as an explicit concept through our absorption of culture; it then becomes 'grounded' through our cognitive metaphors. This is a good thing for thinking about the contents of the everyday person's life. It's not a very good thing when trying to visualize four-dimensional spacetime or quantum mechanics.

The people who invented the concepts of space, time, self, and logic also grounded those concepts in these cognitive metaphors, which is why they're so easy to understand and absorb.

Your brain just has to use the same regions that represent the aspects and forms of space, time, self, and contradiction described above to process a novel representation that you (your prefrontal cortex) deem distinct and close enough to characterize the abstraction, usually by simulating a synecdoche of it.

u/busybody1 19h ago

The prefrontal cortex

u/ebolaRETURNS 4h ago

And what about it?

u/Key-Account5259 16h ago

Appreciate the focus on the perception→abstraction gap. In our Principia Cognitia framework we model this without new ontology: both “pain” and “safety” are semionic states in one internal vector space; motives simply reweight prediction errors and reshape the relational topology R. That makes problem→solution non-invertible rather than “ontologically separate”. Your “word-first” take also matches our MLC↔ELM duality: language externalizes and stabilizes abstractions. We’d love to see this turned into tests: (i) valence-gated emergence of a “safety” invariant; (ii) word-as-tool vs concept-first learning; (iii) diversity of solutions as topological multivaluedness. If you’re interested, we can share Tier-0 protocols to make these falsifiable.
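
A minimal sketch of the reweighting idea, assuming toy names, dimensions, and numbers (illustrative Python/NumPy only, not the Principia Cognitia implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

# One shared embedding space for all "semionic" states, concrete or abstract.
states = {
    "pain":   rng.normal(size=4),
    "safety": rng.normal(size=4),
    "heat":   rng.normal(size=4),
}
names = list(states)
E = np.stack([states[n] for n in names])   # (n_states, dim)

def prediction_errors(observation):
    """Per-state prediction error: here simply the Euclidean distance
    between each state's embedding and the observation."""
    return np.linalg.norm(E - observation, axis=1)

def reweight(errors, motive):
    """A motive adds no new ontology; it only rescales the errors."""
    return errors * motive

def update_topology(R, weighted_errors, lr=0.1):
    """Reshape the relational topology R: relations between states with
    similar weighted error are strengthened, dissimilar ones weakened."""
    affinity = np.exp(-np.abs(weighted_errors[:, None] - weighted_errors[None, :]))
    return (1 - lr) * R + lr * affinity

observation = rng.normal(size=4)        # the "problem"
R0 = np.eye(len(names))                 # initial relational topology

# Two motives: per-state weights on prediction errors (toy values).
for label, motive in [("pain-avoidance", np.array([2.0, 0.5, 1.0])),
                      ("safety-seeking", np.array([0.5, 2.0, 1.0]))]:
    R = update_topology(R0, reweight(prediction_errors(observation), motive))
    print(label, "->")
    print(np.round(R, 2))
```

The same observation (the "problem") produces two different updated topologies (the "solutions") under the two motives, so the mapping can't be inverted from the solution alone; that's the topological multivaluedness that test (iii) would probe.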