The study marks a major step toward AI systems that are more energy-efficient and exhibit more advanced cognitive functions.
Author: Neuroscience News
AI’s Leap in Predicting Life Events
The study's findings open new dialogues at the intersection of the social sciences, the health sciences, and AI's role in our future.
A new study reveals the significant role of the CD300f immune receptor in determining life expectancy and healthy aging in mice.
Repetition in the brain gives rise to two peculiar phenomena: déjà vu and its lesser-known counterpart, jamais vu.
Insights from the study could help improve continual learning in AI systems, bringing them closer to human learning processes and enhancing their performance.
On a cellular level, the marmoset's hippocampal regions show selectivity for 3D view and head direction, suggesting that gaze, not place, is key to their spatial navigation.
The AI agent learned spatial information more effectively when replaying prioritized sequences of past experience, offering valuable insight into how our brains learn and process information.
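The mechanism described here resembles prioritized experience replay from reinforcement learning. Below is a minimal sketch of that general technique; the buffer class, its parameters, and the TD-error weighting are illustrative assumptions, not the study's actual implementation.

```python
import numpy as np

class PrioritizedReplayBuffer:
    """Replays experiences in proportion to their TD error (illustrative)."""

    def __init__(self, capacity=10_000, alpha=0.6):
        self.capacity = capacity
        self.alpha = alpha          # how strongly priorities skew sampling
        self.experiences = []
        self.priorities = []

    def add(self, experience, td_error):
        # Surprising transitions (large TD error) get higher priority.
        if len(self.experiences) >= self.capacity:
            self.experiences.pop(0)
            self.priorities.pop(0)
        self.experiences.append(experience)
        self.priorities.append((abs(td_error) + 1e-6) ** self.alpha)

    def sample(self, batch_size, rng=None):
        rng = rng or np.random.default_rng()
        probs = np.asarray(self.priorities)
        probs = probs / probs.sum()
        idx = rng.choice(len(self.experiences), size=batch_size, p=probs)
        return [self.experiences[i] for i in idx]
```

Sampling by priority rather than uniformly means the agent rehearses its most informative experiences more often, which is the behavior the study links to replay in the brain.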
EPFL researchers have developed a novel machine learning algorithm called CEBRA, which can predict what mice see by decoding their neural activity. The algorithm maps brain activity to specific movie frames and, after an initial training period, can predict unseen frames directly from brain signals alone. CEBRA can also predict arm movements in primates and reconstruct the positions of rats as they move around an arena, suggesting potential clinical applications.
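CEBRA is released as an open-source Python package with a scikit-learn-style interface. The sketch below shows that general workflow on synthetic stand-in data; the architecture name, dimensions, and the kNN decoding step are assumptions for illustration, not the study's exact configuration.

```python
import numpy as np
from cebra import CEBRA                      # pip install cebra
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
neural = rng.normal(size=(1000, 120))        # 1000 time bins x 120 neurons (synthetic)
frame_ids = np.repeat(np.arange(100), 10)    # hypothetical movie-frame label per bin

# Learn a low-dimensional embedding of neural activity conditioned on the labels.
model = CEBRA(model_architecture="offset10-model",
              output_dimension=8,
              max_iterations=1000,
              batch_size=512)
model.fit(neural, frame_ids)
embedding = model.transform(neural)

# Decode frame identity from the embedding with a simple kNN classifier.
decoder = KNeighborsClassifier(n_neighbors=5).fit(embedding[:800], frame_ids[:800])
print("held-out decoding accuracy:", decoder.score(embedding[800:], frame_ids[800:]))
```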
Artificial intelligence (AI) systems can process signals in a way similar to how the brain interprets speech, a finding that may help explain how such systems operate. Scientists placed electrodes on participants' heads to measure brain waves while they listened to a single syllable, then compared that activity to the internal signals of an AI system trained on English. The two waveforms were remarkably similar, an insight that could aid the development of increasingly powerful systems.
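At its core, the comparison is a similarity measurement between two time series. The sketch below illustrates that idea with a simple Pearson correlation on synthetic placeholder signals; the study's actual recordings and model are not reproduced here.

```python
import numpy as np
from scipy.stats import pearsonr

t = np.linspace(0, 0.5, 500)                 # 500 ms after syllable onset
# Synthetic stand-ins: a damped oscillation for the evoked brain wave,
# and a noisy copy of it for the AI system's internal response.
brain_wave = np.sin(2 * np.pi * 8 * t) * np.exp(-4 * t)
ai_signal = brain_wave + np.random.default_rng(1).normal(scale=0.1, size=t.size)

r, p = pearsonr(brain_wave, ai_signal)
print(f"waveform similarity: r = {r:.2f} (p = {p:.1e})")
```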
Researchers have developed a wearable interface called EchoSpeech, which recognizes silent speech by tracking lip and mouth movements through acoustic sensing and AI. The device requires minimal user training and recognizes up to 31 unvocalized commands. The system could give voice to people who are unable to vocalize sound, or let wearers communicate silently with others.
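The recognition step amounts to classifying echo-profile features into one of 31 commands. The sketch below illustrates that kind of classifier on synthetic features; the feature dimensions, class separation, and model choice are illustrative assumptions, not EchoSpeech's actual pipeline.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(2)
commands = np.repeat(np.arange(31), 100)            # 31 commands x 100 samples each
# Synthetic echo-profile features, offset per command so classes are separable.
echo_features = rng.normal(size=(3100, 64)) + commands[:, None] * 0.1

X_train, X_test, y_train, y_test = train_test_split(
    echo_features, commands, test_size=0.2, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(128,), max_iter=300).fit(X_train, y_train)
print("held-out command accuracy:", round(clf.score(X_test, y_test), 2))
```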