Quoting from the Wikipedia article on Semantic Memory:

> Some believe semantic memory lives in temporal neocortex. Others believe that semantic knowledge is widely distributed across all brain areas. To illustrate this latter view, consider your knowledge of dogs. Researchers holding the 'distributed semantic knowledge' view believe that your knowledge of the sound a dog makes exists in your auditory cortex, whilst your ability to recognize and imagine the visual features of a dog resides in your visual cortex. Recent evidence supports the idea that the temporal pole bilaterally is the convergence zone for unimodal semantic representations into a multimodal representation. These regions are particularly vulnerable to damage in semantic dementia, which is characterised by a global semantic deficit.
This raises the question of how to support associations that span multiple modules. You could define rules that invoke separate queries for each of the modules, but that approach has performance and scalability issues. We should therefore explore how to support efficient associative search that combines unimodal semantic representations into a multimodal representation.
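To make the scalability concern concrete, here is a minimal sketch (in Python, with an invented `Module`/`query` API rather than the actual rule engine) of what separate per-module queries amount to:

```python
class Module:
    """Stand-in for a cognitive module holding chunks for one modality."""

    def __init__(self, name, chunks):
        self.name = name
        self.chunks = chunks  # e.g. [{"id": "dog1", "colour": "brown"}, ...]

    def query(self, **pattern):
        # Return the chunks whose properties match every key/value in the pattern.
        return [c for c in self.chunks
                if all(c.get(k) == v for k, v in pattern.items())]


def lookup_everywhere(modules, **pattern):
    # One query (and one round trip) per module per rule firing; the join
    # across modalities then has to happen in the rule engine's buffers,
    # which is where the performance and scalability problems arise.
    return {m.name: m.query(**pattern) for m in modules}
```

The point is simply that the cross-modal join has to happen outside the modules, once per rule firing, which is what motivates looking for something better.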
The same Wikipedia page intriguingly says:
> A new idea that is still at the early stages of development is that semantic memory, like perception, can be subdivided into types of visual information—color, size, form, and motion. Thompson-Schill (2003) found that the left or bilateral ventral temporal cortex appears to be involved in retrieval of knowledge of color and form, the left lateral temporal cortex in knowledge of motion, and the parietal cortex in knowledge of size.
Here are some related quotes from *Creating Concepts from Converging Features in Human Cortex*, Marc N. Coutanche and Sharon L. Thompson-Schill:

> We encounter millions of objects during our lifetime that we recognize effortlessly. We know that a lime is green, round, and tart, whereas a carrot is orange, elongated, and sweet, helping us to never confuse the wedge on our margarita glass with our rabbit's favorite treat. One property (feature) alone is typically insufficient: Celery can also be green; tangerines are orange. Instead, we use the unique convergence of features that defines an object. How does our brain bind these sensorimotor features to form a unique memory representation?
and
> Specifically, the “hub-and-spoke” model proposes that while sensory and verbal information is processed in modality-specific regions, a hub, based in the anterior temporal lobe (ATL), contains a high-dimensional modality-independent semantic space that allows computations to be based on semantic information rather than purely sensory similarities. This is analogous to a “hidden layer” in neural network models, which enables computation of nonlinear relationships between the information coded in sensory layers.
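The "hidden layer" analogy is easy to make concrete. Here is a toy NumPy sketch, with invented feature dimensions and random weights (not a model of the ATL), of a hub that pools modality-specific vectors into a shared space where similarity is computed:

```python
import numpy as np

rng = np.random.default_rng(0)

# Modality-specific ("spoke") representations for one concept, e.g. "dog".
visual   = rng.normal(size=32)   # shape, colour, motion features
auditory = rng.normal(size=16)   # bark, whine, panting
tactile  = rng.normal(size=8)    # fur texture

# The hub projects each modality into a shared, modality-independent space,
# analogous to a hidden layer pooling several input layers.
W_v = rng.normal(size=(64, 32))
W_a = rng.normal(size=(64, 16))
W_t = rng.normal(size=(64, 8))

hub = np.tanh(W_v @ visual + W_a @ auditory + W_t @ tactile)

# Semantic similarity is then computed in hub space rather than on raw
# sensory features, so a barking sound and a picture of a dog can end up
# close together even though their sensory codes share nothing directly.
def similarity(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
```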
How are these different parts of the cortex connected and what functions are involved? What is the impact of the communication costs between different parts of the cortex?
To put that into context, the rule-engine buffers are analogous to HTTP clients, and the cognitive modules to HTTP servers. Efficient associative search across cognitive modules would seem to involve some form of module-to-module communication that works with sets of chunks to activate sub-graphs in the different modules, forming a graph that spans modules. This imposes functional requirements on the inter-module messaging, and should be explored by building a series of demonstrators for appropriately chosen use cases.
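As one way of framing those functional requirements, here is a purely illustrative Python sketch of a spreading-activation style exchange, where modules trade sets of chunk identifiers and each module activates its local sub-graph. The names (`GraphModule`, `activate`, `associative_search`) are invented for the sketch and are not a proposal for the actual message format:

```python
from collections import defaultdict


class GraphModule:
    """Stand-in for a module holding a local graph of chunks plus
    links to chunks held by other modules."""

    def __init__(self, name):
        self.name = name
        self.links = defaultdict(set)   # chunk id -> locally linked chunk ids
        self.remote = defaultdict(set)  # chunk id -> {(module name, chunk id), ...}

    def activate(self, seed_ids, budget=2):
        """Spread activation over the local sub-graph, up to `budget` hops."""
        active = set(seed_ids)
        frontier = set(seed_ids)
        for _ in range(budget):
            frontier = {n for c in frontier for n in self.links[c]} - active
            active |= frontier
        # Cross-module links reached by the activated chunks are handed back
        # to the caller, which decides which other modules to message next.
        outgoing = {(m, cid) for c in active for (m, cid) in self.remote[c]}
        return active, outgoing


def associative_search(modules, start, seeds):
    """Grow a graph that spans modules by exchanging sets of chunk ids."""
    spanning = defaultdict(set)          # module name -> activated chunk ids
    pending = [(start, set(seeds))]
    while pending:
        name, ids = pending.pop()
        new = ids - spanning[name]
        if not new:
            continue                     # nothing this module hasn't already seen
        active, outgoing = modules[name].activate(new)
        spanning[name] |= active
        by_target = defaultdict(set)     # batch links into one message per module
        for target, cid in outgoing:
            by_target[target].add(cid)
        pending.extend(by_target.items())
    return spanning
```

Even this toy version surfaces the questions a demonstrator would need to answer: how large the chunk sets in each message should be, how to bound the number of hops, and how to stop the exchange once activation has converged.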