Project Detail
Advances in wearable displays and networked devices raise the exciting possibility that humans can transcend the senses they were born with and learn to ‘see’ the world in radically new ways. Genuinely incorporating new signals into our sensory repertoire would transform everyday experience, from social encounters to surgery, and advance us towards a technologically enhanced ‘transhuman’ state. By contrast, current additions to our sensory streams, such as navigating with GPS, are far from being incorporated into natural perception: we interpret them effortfully, like words on a foreign-language menu, rather than feeling them directly.

In this project, we use a ground-breaking new approach to test how new sensory signals can be incorporated into fundamental human experience. We train participants using new immersive virtual-reality paradigms developed in our lab, which give us unprecedented speed, control and flexibility. We test what is learned by comparing the predictions of different mathematical models with perceptual performance; this model-based approach uniquely shows when new signals are integrated into standard sensory processing. We also compare neuroimaging data with model predictions to detect integration of newly learned signals within the brain circuits that process familiar signals.

We further test predictions that short-term changes to normal visual input can enhance adult plasticity, and measure age-related changes in plasticity by testing 8- to 12-year-old children. In a wide-ranging design that allows domain-general conclusions, we work across modalities (visual, auditory, tactile) and across two fundamental perceptual problems: judging spatial layout (‘where’ objects are) and material properties (‘what’ they are made of). The work will provide fundamental insights into the computational and brain mechanisms underlying sensory learning, and a platform for transcending the limits of human perception.
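The description does not specify which mathematical models are compared. Purely as an illustrative sketch, one standard model from the cue-combination literature is maximum-likelihood (reliability-weighted) integration, which predicts lower estimation noise for an observer who fuses two signals than for one who relies only on the single best signal. The Python function below is a hypothetical illustration of that prediction, not code from the project.

import math

def integration_predictions(sigma_a, sigma_b):
    """Predicted estimation noise for two observer models, given the
    standard deviations (sigma_a, sigma_b) of judgements made with
    each cue presented alone (e.g., measured psychophysically)."""
    var_a, var_b = sigma_a ** 2, sigma_b ** 2
    # Optimal (maximum-likelihood) fusion: reliability-weighted average,
    # with combined variance sigma_a^2 * sigma_b^2 / (sigma_a^2 + sigma_b^2).
    sigma_fused = math.sqrt(var_a * var_b / (var_a + var_b))
    # Non-integrating observer: at best, uses the single more reliable cue.
    sigma_best_single = min(sigma_a, sigma_b)
    return sigma_fused, sigma_best_single

# Example: if each cue alone yields a threshold of 2.0 units, fusion
# predicts ~1.41, whereas switching between cues predicts no better than 2.0.
print(integration_predictions(2.0, 2.0))

Under this kind of model, measured thresholds that fall below the best single-cue threshold are one signature distinguishing genuine integration of a new signal from its effortful, interpreted use.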