September 30, 2025

Why Voice Will Revolutionize Cognitive Health: The Scientific Case for the Fifth Vital Sign

Voice reveals more than words: it captures the intricate symphony of brain activity that drives every conversation, every thought, every memory. While billions are invested in brain scans and blood tests, the most powerful biomarker for cognitive health might be something you use every day. The technology to capture it is already in your pocket.

The $1 Trillion Problem: Late Detection in an Era of Early Treatment

Today, 7.2 million Americans live with Alzheimer's disease, with healthcare costs projected to hit $1 trillion by 2050 (Alzheimer's Association, 2025). Yet our diagnostic system is fundamentally broken. We identify patients only after irreversible damage has occurred, missing the narrow window when new treatments like Lecanemab (Leqembi®) and Donanemab (Kisunla™) can help.

The crushing reality: these new FDA-approved treatments only work during the mild cognitive impairment (MCI) and early dementia stages, when brain tissue is still salvageable (Van Dyck et al., 2023). But only 8% of people with MCI receive a timely diagnosis (Liu et al., 2024). By the time most patients reach specialists, they've moved beyond the treatable stage. Meanwhile, modeling by the RAND Corporation shows that approximately 2.1 million people with MCI will develop dementia while waiting for treatment between 2020 and 2040 due to specialist capacity constraints (Liu et al., 2017).

The fundamental problem: we lack tools that can both identify decline early and intervene immediately at population scale.

The Neuroscience of Speech: Why Voice Outperforms Cognitive Tests

The question "Why is voice better than doing a puzzle?" reveals an important distinction: voice and cognitive games operate on entirely different neurological pathways. Voice captures comprehensive brain function during naturalistic communication, while puzzles test isolated domains under artificial conditions.

Recent neuroscientific advances explain voice biomarkers' remarkable sensitivity. When you speak, your brain coordinates an intricate symphony involving multiple distributed neural networks:

The Speech Production Network:

  • Broca's area (left frontal cortex) orchestrates speech production and grammatical processing (Hickok & Poeppel, 2007)
  • Wernicke's area (left temporal cortex) handles word comprehension and semantic retrieval (Binder et al., 2009)
  • Arcuate fasciculus connects these language centers through white matter tracts vulnerable to early neurodegeneration (Catani et al., 2005)
  • Motor cortex coordinates articulation of over 100 laryngeal, orofacial, and respiratory muscles (Simonyan & Horwitz, 2011)
  • Basal ganglia fine-tune rapid, rhythmic movements of the vocal apparatus (Kotz & Schwartze, 2010)
  • Cerebellum provides motor control and timing, while recent research reveals its role in social cognition and dopamine signaling (Carta et al., 2019). Age-related cerebellar changes may explain why voice biomarkers detect not just cognitive decline but also emotional and social changes.

Executive Control Networks:

  • Prefrontal cortex manages working memory and executive function (Baddeley, 2012). Critically, executive dysfunction often appears years before memory problems, making voice's sensitivity to executive networks particularly valuable for early detection.
  • Attention networks sustain focus while filtering irrelevant linguistic stimuli (Petersen & Posner, 2012)
  • Memory systems retrieve words and maintain context across sentences (Squire & Kandel, 2009)

Autonomic Integration: Voice uniquely captures breath control through the autonomic nervous system. Respiratory patterns during speech reflect emotional regulation, stress response, and cognitive load (Homma & Masaoka, 2008). Breath control is fundamental: it's how the brain regulates itself moment to moment. Disruptions in breathing patterns often precede detectable cognitive changes, making voice a window into autonomic dysfunction that occurs early in neurodegeneration (Zelano et al., 2016).

A puzzle might test one cognitive domain. Voice tests them all simultaneously while capturing the autonomic foundation that sustains cognitive function.

Voice as a Validated Biomarker: The Scientific Foundation

The scientific evidence overwhelmingly establishes voice as a valid and reliable biomarker for cognitive health, with diagnostic accuracies exceeding 90% in controlled conditions and consistent performance across diverse populations.

Diagnostic Power: Voice biomarkers capture preclinical decline that is invisible to standard instruments. Cross-sectional and case-control studies consistently show discriminative accuracy in distinguishing cognitively normal adults from those with MCI or dementia, with AUC values frequently in the 0.80-0.90 range and above (Lin et al., 2020; Mahon & Lachman, 2022). In the Framingham cohort, linguistic markers predicted progression to dementia up to seven years before diagnosis (Eyigoz et al., 2020; Amini et al., 2023), a predictive horizon that enables intervention during the therapeutic window when treatments can actually help.
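To make these AUC figures concrete: the AUC is the probability that a randomly chosen impaired individual receives a higher risk score than a randomly chosen healthy control. A minimal Python sketch with invented scores (not data from any cited study) illustrates the calculation:

```python
# Illustrative only: AUC as the probability that a randomly chosen
# impaired case receives a higher risk score than a control.
# The scores below are invented for demonstration, not study data.

def auc(control_scores, case_scores):
    """Pairwise (Mann-Whitney) estimate of the ROC AUC."""
    wins = ties = 0
    for c in control_scores:
        for x in case_scores:
            if x > c:
                wins += 1
            elif x == c:
                ties += 1
    total = len(control_scores) * len(case_scores)
    return (wins + 0.5 * ties) / total

controls = [0.10, 0.25, 0.30, 0.45, 0.50]   # hypothetical risk scores
cases    = [0.40, 0.55, 0.70, 0.80, 0.90]   # hypothetical risk scores

# 23 of the 25 case/control pairs are ordered correctly -> 0.92
print(round(auc(controls, cases), 2))
```

An AUC of 0.5 means the score carries no information; values above 0.9, as reported in several of the studies cited here, mean almost every impaired/healthy pair is ranked correctly.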

Recent feature-level work demonstrates that articulatory and phonetic precision (Xu et al., 2025) and spectral and prosodic characteristics (Zhao et al., 2025) are particularly sensitive to early impairment, strengthening mechanistic validity. A systematic review concluded that speech-based biomarkers offer diagnostic performance comparable to or surpassing MoCA and MMSE, while being faster, less resource-intensive, and more scalable (de la Fuente Garcia et al., 2020).

Monitoring Capability: Unlike neuropsychological testing, imaging, or CSF assays, voice can be collected repeatedly and non-invasively, enabling longitudinal tracking of cognitive function in daily life. Studies show that temporal and prosodic features (pause duration, speech rate, intonation) change gradually over time and can be monitored continuously in naturalistic environments (König et al., 2018; de Looze et al., 2022).
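To make the temporal features concrete, here is a minimal sketch that computes speech rate and pause statistics from word-level timestamps of the kind an automatic speech recognizer emits. The timings and the 250 ms pause threshold are invented for illustration; published protocols vary:

```python
# Hypothetical sketch: temporal speech features from word-level ASR
# timestamps (start_sec, end_sec). All timings below are invented.

def temporal_features(word_times):
    """Return speech rate (words/sec) and pause statistics."""
    total_span = word_times[-1][1] - word_times[0][0]
    rate = len(word_times) / total_span
    # A "pause" is any gap between consecutive words longer than
    # 250 ms (threshold chosen for illustration; protocols vary).
    pauses = [b[0] - a[1] for a, b in zip(word_times, word_times[1:])
              if b[0] - a[1] > 0.25]
    return {
        "speech_rate": rate,
        "pause_count": len(pauses),
        "mean_pause": sum(pauses) / len(pauses) if pauses else 0.0,
    }

words = [(0.0, 0.3), (0.35, 0.6), (1.2, 1.5), (1.55, 1.9), (2.8, 3.1)]
f = temporal_features(words)
print(f["pause_count"])  # the 0.6 s and 0.9 s gaps exceed the threshold
```

Collected daily, even simple features like these form a time series in which gradual drift, of the kind the longitudinal studies describe, becomes measurable.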

Recent longitudinal analysis demonstrated that day-to-day acoustic variability correlates with cognitive performance and sleep quality, highlighting its value as an ecological monitoring tool (Ding et al., 2024). Importantly, research studies have demonstrated feasibility in home-based settings: multi-day tablet protocols in older adults achieved over 90% adherence and strong test-retest reliability (van den Berg et al., 2024), and telephone-based speech tasks completed at home showed reliable automated transcription and discrimination of impairment (König et al., 2024). These controlled studies prove the technology works outside clinical settings, but translating research protocols into consumer-ready platforms accessible to millions requires infrastructure that doesn't yet exist.

Prognostic Accuracy: Beyond detection and monitoring, voice biomarkers forecast disease progression. Longitudinal studies demonstrate that individuals with subtle lexical and fluency impairments at baseline are significantly more likely to progress from MCI to dementia (Mahon & Lachman, 2022; Lin et al., 2020). In the MIDUS cohort, acoustic changes predicted cognitive decline trajectories over a decade of follow-up (Slegers et al., 2018).

Machine learning models trained on longitudinal speech data have further improved prognostic accuracy, correctly stratifying patients by likelihood of conversion (Fraser et al., 2016; Meilán et al., 2020). Recent innovations such as character-level symbolic recurrence achieve high diagnostic accuracy while providing interpretable biomarkers (Mekulu et al., 2025), moving voice into the domain of predictive analytics essential for therapeutic timing decisions.

Technological Maturity: Contemporary research integrates advanced signal processing and artificial intelligence methods that extend voice biomarker science beyond handcrafted acoustic features. Techniques such as speaker diarization, automatic speech recognition, and end-to-end deep learning architectures have enabled robust analysis in real-world, multi-speaker environments. The incorporation of natural language processing (NLP) features alongside acoustic measures has demonstrated particular promise. In Framingham data, a multimodal system combining audio and text features achieved an AUC of 0.92, compared to 0.79 using demographics alone (Alhanai et al., 2017). A systematic review confirmed this synergy across more than 200 studies, concluding that multimodal approaches offer the strongest path toward clinical translation (Shakeri & Farmanbar, 2025).
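The late-fusion idea behind such multimodal systems can be sketched in a few lines. Assuming two hypothetical models that each output a risk probability (one acoustic, one linguistic), averaging their log-odds yields a combined estimate; this is a toy illustration, not the architecture of Alhanai et al. (2017):

```python
import math

# Toy late-fusion sketch: combine probabilities from a hypothetical
# acoustic model and a hypothetical text (NLP) model by averaging
# their log-odds. Not the pipeline of any cited study.

def logit(p):
    return math.log(p / (1.0 - p))

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fuse(p_acoustic, p_text, w_acoustic=0.5):
    """Weighted average of log-odds, mapped back to a probability."""
    z = w_acoustic * logit(p_acoustic) + (1 - w_acoustic) * logit(p_text)
    return sigmoid(z)

print(round(fuse(0.70, 0.80), 3))
```

In a real system the fusion weights would be learned from validation data; the gain reported in the Framingham work (AUC 0.92 vs. 0.79) comes from the two modalities capturing complementary signal.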

Broad Validation Beyond Cognitive Decline: Voice biomarkers are also established in Parkinson's disease (telemonitoring of motor progression) (Little et al., 2009; Suppa et al., 2022), depression (assessing symptom severity and treatment response) (Almaghrabi et al., 2023), and cardiovascular disease. In cardiology, a prospective study found that a higher noninvasive voice biomarker score was significantly associated with incident coronary artery disease events; patients in the highest biomarker tertile had over twice the risk (Sara et al., 2022).

Cross-Cultural Generalizability: Critically, these findings have been replicated across multiple languages and cultural contexts (Slegers et al., 2018; López-de-Ipiña et al., 2013), underscoring generalizability and addressing the cultural bias inherent in traditional assessments. Voice biomarkers work across languages and don't rely on education-dependent tasks, transforming access for multilingual populations and those with lower formal education.

Voice vs. Traditional Approaches: Solving Structural Problems

Despite decades of progress, today's diagnostic instruments were designed for episodic case-finding rather than proactive brain health management.

Traditional Cognitive Testing Falls Short: Conventional cognitive assessments like the MMSE, MoCA, and Mini-Cog are inexpensive and widely used but poorly suited to detecting subtle, preclinical changes. The MMSE shows only 64.8% sensitivity for detecting MCI, missing more than one-third of early cases (Mitchell, 2009). Mini-Cog performs even worse, with positive predictive values of just 0.39 and specificity of 0.38 (Alzheimer's Association, 2025).

These tests share fatal flaws: they provide episodic snapshots only, missing daily or monthly fluctuations; they introduce education and language bias, disproportionately misclassifying multilingual and lower-education populations (Manly et al., 2002); they lack ecological validity; and they're designed for confirmation, not detection (Arevalo-Rodriguez et al., 2015), precisely the opposite of what's needed.

Advanced Biomarkers: Accurate but Inaccessible: Blood-based biomarkers (p-tau217, Aβ42/40, NfL) represent a promising advance (Hampel et al., 2018), but clinical implementation remains uneven, cut-offs aren't standardized, and most tests are ordered only after symptoms appear. PET imaging and CSF assays provide pathophysiologic confirmation but face steep barriers: PET scans exceed $5,000 each (Rabinovici et al., 2019); they require specialized facilities unavailable in most communities; and they're invasive procedures "too burdensome for repeat use" (Jack et al., 2018).

Voice: The Structural Solution: Voice biomarkers solve these fundamental problems. They work at home, in naturalistic environments where people actually function. They enable broad access for multilingual populations, rural communities, and military families. They provide daily, self-administered vital sign monitoring enabling continuous tracking. They capture multi-dimensional biomarkers—over 500 features—detecting changes 3-5 years before symptom onset (Eyigoz et al., 2020). They deliver risk stratification with individualized trajectory forecasting. And they scale via consumer devices already in most homes.

Establishing Voice as the Fifth Vital Sign

The evidence is clear. Voice biomarkers detect cognitive decline years before traditional methods, work across languages and demographics, and achieve diagnostic accuracy exceeding 90% in controlled studies.

Yet this breakthrough remains locked in labs and clinics. The gap between what's scientifically possible and what's practically accessible represents one of healthcare's most urgent missed opportunities. Millions continue to decline without early detection while the technology that could help them sits unused.

Blood pressure monitors didn't become ubiquitous because the science was compelling. They became ubiquitous because someone made them cheap, simple, and available everywhere. Glucose meters transformed diabetes management through accessible technology that let people measure at home rather than waiting for quarterly lab visits.

Voice biomarkers face the same inflection point. The scientific validation is complete. The clinical need is overwhelming. The economic imperative—preventing rather than treating dementia—could save healthcare systems trillions. But none of this matters without infrastructure that reaches people before symptoms force them into clinics.

Vibes AI is building that infrastructure.

Rather than requiring new hardware, specialized clinics, or behavior change, we're leveraging technology people already own: smartphones, tablets, smart speakers, earbuds. A 2-minute voice assessment extracts over 500 acoustic, linguistic, and prosodic features that map directly onto the neural networks disrupted in early cognitive decline. No hardware purchase required to start. No specialist referrals. No insurance pre-authorization. Just accessible cognitive monitoring through technology already in 92% of American households.
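As a flavor of how acoustic features are computed from raw audio samples, here is a purely illustrative sketch (not the Vibes AI pipeline) of two classic short-term features, RMS energy and zero-crossing rate, applied to a synthetic tone:

```python
import math

# Illustrative only: two classic short-term acoustic features,
# computed over a synthetic 100 Hz sine wave. This is a sketch of
# the general technique, not any product's feature set.

SR = 8000  # sample rate in Hz

def rms_energy(samples):
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def zero_crossing_rate(samples):
    crossings = sum(
        1 for a, b in zip(samples, samples[1:]) if (a < 0) != (b < 0)
    )
    return crossings / len(samples)

# One second of a 100 Hz tone: 100 cycles -> roughly 200 zero crossings.
tone = [math.sin(2 * math.pi * 100 * n / SR) for n in range(SR)]

print(round(rms_energy(tone), 3))            # RMS of a sine is ~0.707
print(round(zero_crossing_rate(tone) * SR))  # crossings per second
```

Production systems compute hundreds of such features (spectral, prosodic, articulatory) over short sliding windows, which is how a 2-minute recording yields a feature vector of the size described above.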

The shift from episodic clinic visits to continuous home-based monitoring doesn't happen spontaneously. It requires:

  • Accessible Technology: Algorithms validated in laboratory conditions but optimized to run on consumer devices in naturalistic environments with background noise, conversational speech, and real-world variability.
  • Scalable Infrastructure: Moving from hundreds of research participants to millions of daily users demands robust systems for data processing, privacy protection, and longitudinal tracking.
  • Consumer-First Design: We're reimagining brain health as a daily micro-ritual: lightweight, personal, and powered by community. Features like Vibes Tribes transform isolated health tracking into social discovery, revealing who you naturally sync with based on vocal biomarkers. When brain health tracking feels less like a medical chore and more like discovering others vibing on your wavelength, daily engagement becomes effortless. It's cognitive care that meets the moment: social, intuitive, and joyfully preventative.
  • Validation Across Populations: Findings must translate across the full diversity of human speech: different languages, accents, dialects, age groups, and cognitive baselines.

This accessibility transforms who can benefit from early detection:

  • Rural communities facing neurologist shortages and 6-month wait times can access daily cognitive monitoring through their smartphones
  • Multilingual populations historically misdiagnosed by education-dependent tests receive culturally unbiased assessment through voice patterns that transcend language barriers
  • Younger adults concerned about cognitive wellness—sleep deprivation, digital overload, stress—gain proactive monitoring before clinical symptoms emerge
  • Caregivers managing loved ones across distance can track subtle changes that might signal the need for clinical evaluation

Disease-modifying treatments have been approved for Alzheimer's, but they only work during narrow therapeutic windows. Every day we delay accessible early detection, we close that window for thousands more people. The question isn't whether voice will become the fifth vital sign. The evidence makes that inevitable. The question is who will build the infrastructure to make it real. 

Voice biomarkers will revolutionize cognitive health by solving the fundamental challenge: detecting decline when intervention can still change outcomes.

Your voice has been speaking to your brain health all along.

The technology to listen exists.

Making it accessible to everyone: that's the work ahead.

This is the first in a series exploring the science behind Vibes AI. Future articles will examine how sound can restore what voice reveals, and how combining measurement with intervention creates a closed loop for proactive brain health.

References

Alhanai, T., Au, R., & Glass, J. (2017). Spoken language biomarkers for detecting cognitive impairment. In 2017 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU) (pp. 409-416). https://doi.org/10.1109/ASRU.2017.8268965

Almaghrabi, S. A., Clark, S. R., & Baumert, M. (2023). Bio-acoustic features of depression: A review. Biomedical Signal Processing and Control, 85, 105020. https://doi.org/10.1016/j.bspc.2023.105020

Alzheimer's Association. (2025). 2025 Alzheimer's Disease Facts and Figures. Alzheimer's & Dementia, 21(5). https://www.alz.org/getmedia/ef8f48f9-ad36-48ea-87f9-b74034635c1e/alzheimers-facts-and-figures.pdf

Amini, S., Hao, B., Zhang, L., Song, M., Gupta, A., Karjadi, C., Kolachalama, V. B., Au, R., & Paschalidis, I. C. (2023). Automated detection of mild cognitive impairment and dementia from voice recordings: A natural language processing approach. Alzheimer's & Dementia, 19(3), 946-955. https://doi.org/10.1002/alz.12721

Arevalo-Rodriguez, I., Smailagic, N., Roqué I Figuls, M., Ciapponi, A., Sanchez-Perez, E., Giannakou, A., Pedraza, O. L., Bonfill Cosp, X., & Cullum, S. (2015). Mini-Mental State Examination (MMSE) for the detection of Alzheimer's disease and other dementias in people with mild cognitive impairment (MCI). Cochrane Database of Systematic Reviews, 2015(3), CD010783. https://doi.org/10.1002/14651858.CD010783.pub2

Baddeley, A. (2012). Working memory: Theories, models, and controversies. Annual Review of Psychology, 63, 1-29. https://doi.org/10.1146/annurev-psych-120710-100422

Binder, J. R., Desai, R. H., Graves, W. W., & Conant, L. L. (2009). Where is the semantic system? A critical review and meta-analysis. Cerebral Cortex, 19(12), 2767-2796. https://doi.org/10.1093/cercor/bhp055

Carta, I., Chen, C. H., Schott, A. L., Dorizan, S., & Khodakhah, K. (2019). Cerebellar modulation of the reward circuitry and social behavior. Science, 363(6424), eaav0581. https://doi.org/10.1126/science.aav0581

Catani, M., Jones, D. K., & ffytche, D. H. (2005). Perisylvian language networks of the human brain. Annals of Neurology, 57(1), 8-16. https://doi.org/10.1002/ana.20319

de la Fuente Garcia, S., Ritchie, C. W., & Luz, S. (2020). Artificial Intelligence, Speech, and Language Processing Approaches to Monitoring Alzheimer's Disease: A Systematic Review. Journal of Alzheimer's Disease, 78(4), 1547-1574. https://doi.org/10.3233/JAD-200888

De Looze, C., Dehsarvi, A., Suleyman, N., Crosby, L., Hernández, B., Coen, R. F., Lawlor, B. A., & Reilly, R. B. (2022). Structural Correlates of Overt Sentence Reading in Mild Cognitive Impairment and Mild-to-Moderate Alzheimer's Disease. Current Alzheimer Research, 19(8), 606-617. https://doi.org/10.2174/1567205019666220805110248

Ding, H., Lister, A., Karjadi, C., Hobbs, M. A., Lin, H., Hardy, S. E., McManus, C., Wasserman, B. A., Dhand, A., Au, R., & Alhanai, T. (2024). Detection of Mild Cognitive Impairment From Non-Semantic, Acoustic Voice Features: The Framingham Heart Study. JMIR Aging, 7, e55126. https://doi.org/10.2196/55126

Eyigoz, E., Mathur, S., Santamaria, M., Cecchi, G., & Naylor, M. (2020). Linguistic markers predict onset of Alzheimer's disease. EClinicalMedicine, 28, 100583. https://doi.org/10.1016/j.eclinm.2020.100583

Fraser, K. C., Meltzer, J. A., & Rudzicz, F. (2016). Linguistic Features Identify Alzheimer's Disease in Narrative Speech. Journal of Alzheimer's Disease, 49(2), 407-422. https://doi.org/10.3233/JAD-150520

Hampel, H., O'Bryant, S. E., Molinuevo, J. L., Zetterberg, H., Masters, C. L., Lista, S., Kiddle, S. J., Batrla, R., & Blennow, K. (2018). Blood-based biomarkers for Alzheimer disease: Mapping the road to the clinic. Nature Reviews Neurology, 14(11), 639-652. https://doi.org/10.1038/s41582-018-0079-7

Hickok, G., & Poeppel, D. (2007). The cortical organization of speech processing. Nature Reviews Neuroscience, 8(5), 393-402. https://doi.org/10.1038/nrn2113

Homma, I., & Masaoka, Y. (2008). Breathing rhythms and emotions. Experimental Physiology, 93(9), 1011-1021. https://doi.org/10.1113/expphysiol.2008.042424

Jack, C. R., Jr., Bennett, D. A., Blennow, K., Carrillo, M. C., Dunn, B., Haeberlein, S. B., Holtzman, D. M., Jagust, W., Jessen, F., Karlawish, J., Liu, E., Molinuevo, J. L., Montine, T., Phelps, C., Rankin, K. P., Rowe, C. C., Scheltens, P., Siemers, E., Snyder, H. M., … Silverberg, N. (2018). NIA-AA Research Framework: Toward a biological definition of Alzheimer's disease. Alzheimer's & Dementia, 14(4), 535-562. https://doi.org/10.1016/j.jalz.2018.02.018

König, A., Linz, N., Tröger, J., Wolters, M., Alexandersson, J., & Robert, P. (2018). Fully Automatic Speech-Based Analysis of the Semantic Verbal Fluency Task. Dementia and Geriatric Cognitive Disorders, 45(3-4), 198-209. https://doi.org/10.1159/000487852

König, A., Köhler, S., Tröger, J., Düzel, E., Glanz, W., Butryn, M., Mallick, E., Priller, J., Altenstein, S., Spottke, A., Kimmich, O., Falkenburger, B., Osterrath, A., Wiltfang, J., Bartels, C., Kilimann, I., Laske, C., Munk, M. H., Roeske, S., Frommann, I., … Teipel, S. (2024). Automated remote speech-based testing of individuals with cognitive decline: Bayesian agreement of transcription accuracy. Alzheimer's & dementia (Amsterdam, Netherlands), 16(4), e70011. https://doi.org/10.1002/dad2.70011

Kotz, S. A., & Schwartze, M. (2010). Cortical speech processing unplugged: A timely subcortico-cortical framework. Trends in Cognitive Sciences, 14(9), 392-399. https://doi.org/10.1016/j.tics.2010.06.005

Lin, H., Karjadi, C., Ang, T. F. A., Prajakta, J., McManus, C., Alhanai, T. W., Glass, J., & Au, R. (2020). Identification of digital voice biomarkers for cognitive health. Exploration of Medicine, 1, 406-417. https://doi.org/10.37349/emed.2020.00028

Little, M. A., McSharry, P. E., Hunter, E. J., Spielman, J., & Ramig, L. O. (2009). Suitability of Dysphonia Measurements for Telemonitoring of Parkinson's Disease. IEEE Transactions on Biomedical Engineering, 56(4), 1015-1022. https://doi.org/10.1109/TBME.2008.2005954

Liu, J. L., Hlavka, J. P., Hillestad, R., & Mattke, S. (2017). Assessing the Preparedness of the U.S. Health Care System Infrastructure for an Alzheimer's Treatment. RAND Corporation. https://www.rand.org/pubs/research_reports/RR2272.html

Liu, J. L., Baker, L., Chen, A. Y., & Wang, J. J. (2024). Geographic variation in shortfalls of dementia specialists in the United States. Health Affairs Scholar, 2(7), qxae088. https://doi.org/10.1093/haschl/qxae088

López-de-Ipiña, K., Travieso, C., Eguiraun Martinez, H., Ecay, M., Ezeiza, A., Barroso, N., & Martinez-Lage, P. (2013). Automatic analysis of emotional response based on non-linear speech modeling oriented to Alzheimer disease diagnosis. In INES 2013 - IEEE 17th International Conference on Intelligent Engineering Systems (pp. 61-64). https://doi.org/10.1109/INES.2013.6632783

Mahon, E., & Lachman, M. E. (2022). Voice biomarkers as indicators of cognitive changes in middle and later adulthood. Neurobiology of Aging, 119, 22-35. https://doi.org/10.1016/j.neurobiolaging.2022.06.010

Manly, J. J., Jacobs, D. M., Touradji, P., Small, S. A., & Stern, Y. (2002). Reading level attenuates differences in neuropsychological test performance between African American and White elders. Journal of the International Neuropsychological Society, 8(3), 341-348. https://doi.org/10.1017/S1355617702813157

Meilán, J. J. G., Martínez-Sánchez, F., Martínez-Nicolás, I., Carro, J., & Ivanova, O. (2020). Changes in the Rhythm of Speech Difference between People with Nondegenerative Mild Cognitive Impairment and with Preclinical Dementia. Behavioural Neurology, 2020, 4683573. https://doi.org/10.1155/2020/4683573

Mekulu, K., Aqlan, F., & Yang, H. (2025). Character-Level Linguistic Biomarkers for Precision Assessment of Cognitive Decline: A Symbolic Recurrence Approach. medRxiv [Preprint]. https://doi.org/10.1101/2025.06.12.25329529

Mitchell, A. J. (2009). A meta-analysis of the accuracy of the mini-mental state examination in the detection of dementia and mild cognitive impairment. Journal of Psychiatric Research, 43(4), 411-431. https://doi.org/10.1016/j.jpsychires.2008.04.014

Papp, K. V., Buckley, R., Mormino, E., Maruff, P., Villemagne, V. L., Masters, C. L., Johnson, K. A., Rentz, D. M., Sperling, R. A., Amariglio, R. E., & Collaborators from the Harvard Aging Brain Study, the Alzheimer's Disease Neuroimaging Initiative and the Australian Imaging, Biomarker and Lifestyle Study of Aging. (2020). Clinical meaningfulness of subtle cognitive decline on longitudinal testing in preclinical AD. Alzheimer's & Dementia, 16(3), 552-560. https://doi.org/10.1016/j.jalz.2019.09.074

Petersen, S. E., & Posner, M. I. (2012). The attention system of the human brain: 20 years after. Annual Review of Neuroscience, 35, 73-89. https://doi.org/10.1146/annurev-neuro-062111-150525

Rabinovici, G. D., Gatsonis, C., Apgar, C., Chaudhary, K., Gareen, I., Hanna, L., Hendrix, J., Hillner, B. E., Olson, C., Lesman-Segev, O. H., Romanoff, J., Siegel, B. A., Whitmer, R. A., & Carrillo, M. C. (2019). Association of Amyloid Positron Emission Tomography With Subsequent Change in Clinical Management Among Medicare Beneficiaries With Mild Cognitive Impairment or Dementia. JAMA, 321(13), 1286-1294. https://doi.org/10.1001/jama.2019.2000

Sara, J. D. S., Maor, E., Orbelo, D., Gulati, R., Lerman, L. O., & Lerman, A. (2022). Noninvasive voice biomarker is associated with incident coronary artery disease events at follow-up. Mayo Clinic Proceedings, 97(5), 835-846. https://doi.org/10.1016/j.mayocp.2021.10.024

Shakeri, A., & Farmanbar, M. (2025). Natural language processing in Alzheimer's disease research: Systematic review of methods, data, and efficacy. Alzheimer's & Dementia, 17(1), e70082. https://doi.org/10.1002/dad2.70082

Simonyan, K., & Horwitz, B. (2011). Laryngeal motor cortex and control of speech in humans. The Neuroscientist, 17(2), 197-208. https://doi.org/10.1177/1073858410386727

Slegers, A., Filiou, R. P., Montembeault, M., & Brambati, S. M. (2018). Connected Speech Features from Picture Description in Alzheimer's Disease: A Systematic Review. Journal of Alzheimer's Disease, 65(2), 519-542. https://doi.org/10.3233/JAD-170881

Squire, L. R., & Kandel, E. R. (2009). Memory: From mind to molecules (2nd ed.). Scientific American Library.

Suppa, A., Costantini, G., Asci, F., Di Leo, P., Al-Wardat, M. S., Di Lazzaro, G., Scalise, S., Pisani, A., & Saggio, G. (2022). Voice in Parkinson's Disease: A Machine Learning Study. Frontiers in Neurology, 13, 831428. https://doi.org/10.3389/fneur.2022.831428

van den Berg, E., Menger, S., Jansen, W., Hendriksen, H., Pijnenburg, Y., Scheltens, P., & Kester, M. I. (2024). Multi-day at-home assessments of speech acoustics in Dutch cognitively normal adults. Alzheimer's & Dementia, 20, e087304. https://doi.org/10.1002/alz.087304

Van Dyck, C. H., Swanson, C. J., Aisen, P., Bateman, R. J., Chen, C., Gee, M., Kanekiyo, M., Li, D., Reyderman, L., Cohen, S., Froelich, L., Katayama, S., Sabbagh, M., Vellas, B., Watson, D., Dhadda, S., Irizarry, M., Kramer, L. D., & Iwatsubo, T. (2023). Lecanemab in Early Alzheimer's Disease. New England Journal of Medicine, 388(1), 9-21. https://doi.org/10.1056/NEJMoa2212948

Xu, L., Chen, K., Mueller, K. D., Liss, J., & Berisha, V. (2025). Articulatory precision from connected speech as a marker of cognitive decline in Alzheimer's disease risk-enriched cohorts. Journal of Alzheimer's Disease, 103(2), 476-486. https://doi.org/10.1177/13872877241300149

Zelano, C., Jiang, H., Zhou, G., Arora, N., Schuele, S., Rosenow, J., & Gottfried, J. A. (2016). Nasal respiration entrains human limbic oscillations and modulates cognitive function. Journal of Neuroscience, 36(49), 12448-12467. https://doi.org/10.1523/JNEUROSCI.2586-16.2016

Zhao, Y., Tang, W., Liu, Y., Chen, S., Kong, Y., Cheng, Y., & Zhang, J. (2025). Objective biomarkers of cognitive performance in older adults with mild cognitive impairment: Acoustic features of affective prosody. Geriatric Nursing, 64, 103370. https://doi.org/10.1016/j.gerinurse.2025.02.018

About Vibes AI

Vibes AI is a neurotechnology company on a mission to accelerate the world's access to cognitive health & wellness. Founded in 2024, the company uses AI, neuroscience, and ancestral intelligence to create innovative solutions that make cognitive health and enhancement accessible to all. MANTRA, one of the company's flagship products, uses voice biomarker technology to detect early signs of cognitive decline and provide personalized interventions.


