Abstract:
The enactive approach offers a powerful theoretical lens for designing artificial intelligence (AI) systems intended to support the health and well-being of non-neurotypical individuals, including those on the autism spectrum and those with ADHD, dyslexia, or other forms of neurodivergence. By emphasizing embodiment, relationality, and participatory sense-making, enactivism encourages AI-based interventions that are highly personalized, context-sensitive, and ethically aware. This paper explores how existing AI applications, ranging from socially assistive robots and virtual reality (VR) therapies to language-processing apps and personalized treatment planning, may be enhanced by incorporating enactivist principles. Despite their promise, AI technologies have seen limited adoption in real-world clinical practice, and persistent challenges such as algorithmic bias, privacy concerns, and the tendency to overlook subjective dimensions raise cautionary notes. Drawing on relevant philosophical literature, empirical studies, and cross-disciplinary debates (including the friction and potential synergies between predictive processing and enactivism), we argue that AI solutions grounded in enactivist thinking can more effectively honor user autonomy, acknowledge the embodied nature of neurodiverse cognition, and avoid reductive standardization. This expanded, revised version integrates insights on neurodiversity, mental health paradigms, and the ethical imperatives of AI deployment, thereby offering a more comprehensive roadmap for researchers, clinicians, and system developers alike.