Johanne Mensah-Gourmel (Brest, France), Maxime Bourgain (Sceaux, France), Maxime Galloy (Sceaux, France), Mario Veruete (Sceaux, France), Sylvain Brochard (Brest, France), Christelle Pons (Brest, France) and Arriel Benis (Holon, Israel)
Bastien Fraudet (Rennes, France), Emilie Leblong (Rennes, France), Marie Dandois (Rennes, France), Patrice Piette (Rennes, France), Estelle Ceze (Rennes, France), Benoit Nicolas (Rennes, France), Marie Babel (Rennes, France), Francois Pasteau (Rennes, France), Louise Devigne (Rennes, France) and Philippe Gallien (Rennes, France)
Sensory substitution of elbow proprioception to improve myoelectric control of upper limb prosthesis
Matthieu Guémann (Brétigny-sur-Orge, France)
Prediction tools for stroke rehabilitation
Cathy Stinear (Auckland, New Zealand)
There is growing interest in providing clinicians with prediction tools so that their predictions can be more systematic, accurate, and consistent. These tools typically combine clinical, demographic, and biomarker information from neurophysiological or neuroimaging assessments.
Traditional regression modelling is giving way to machine learning methods to develop prediction tools that can combine several variables in easy-to-use scores and decision trees.
This presentation will describe what makes a good prediction tool for use in clinical practice, and summarise the prediction tools that are currently available for the rehabilitation of motor impairments after stroke.
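As a brief, hedged illustration of the kind of prediction tool described above, the sketch below fits a small decision tree on synthetic clinical, demographic, and biomarker variables and prints its rules. The feature names, data, and outcome are hypothetical placeholders and are not taken from the presentation.

```python
# Minimal sketch (hypothetical data): a decision tree combining clinical,
# demographic, and biomarker variables into readable prediction rules.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
n = 200
# Hypothetical predictors: age, upper-limb motor score, MEP present (0/1)
age = rng.uniform(40, 90, n)
motor_score = rng.uniform(0, 66, n)
mep_present = rng.integers(0, 2, n)
X = np.column_stack([age, motor_score, mep_present])
# Hypothetical binary outcome: good vs poor motor recovery
y = ((motor_score > 30) | (mep_present == 1)).astype(int)

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
# Print the fitted tree as human-readable if/else rules
print(export_text(tree, feature_names=["age", "motor_score", "mep_present"]))
```

A shallow tree like this is easy to read at the bedside, which is one reason decision-tree scores are attractive for clinical prediction tools.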
15:35

Explainable and trustworthy AI: a must-have for an effective clinical uptake
Alessandra Laura Giulia Pedrocchi (Milan, Italy)
AI in medicine has dramatically grown its impact in three directions: as decision-support systems for clinicians, for health systems to improve workflow and reduce errors, and for patients to process their own data for greater awareness of their condition.
To improve their usability, considerable effort has recently been devoted to explaining these algorithms. Barredo Arrieta et al. (2020) proposed a clear definition of Explainable AI (XAI): “Given an audience, an explainable Artificial Intelligence is one that produces details or reasons to make its functioning clear or easy to understand”. The aims of XAI are 1) to produce more explainable models while maintaining a high level of learning performance (e.g., prediction accuracy), and 2) to enable humans to understand, appropriately trust, and effectively manage the emerging generation of artificially intelligent partners.
In medicine, the question of explainability is even more crucial, as it involves factors that other fields do not consider, such as the risks and responsibilities attached to decisions that affect people's lives. Beyond ethical issues, the consequences of malicious intent could also be catastrophic.
A suite of XAI techniques has been proposed to make machine-learning results interpretable and available to clinicians and, eventually, to patients, comparing them with human intelligence, so as to support clinical decisions about the best treatments and the sharing of those decisions with informed patients.
The potential impact of AI and XAI in rehabilitation medicine will be discussed, based on examples from the literature.
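As one concrete, hedged example of the model-agnostic XAI techniques mentioned above, the sketch below computes permutation feature importance to indicate which inputs drive a fitted classifier's predictions. The data, features, and model are synthetic placeholders, not material from the presentation.

```python
# Minimal sketch (synthetic data): permutation importance as a simple,
# model-agnostic explanation of which features drive a classifier's output.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
n = 300
X = rng.normal(size=(n, 3))                     # three hypothetical clinical features
y = (X[:, 0] + 0.2 * X[:, 1] > 0).astype(int)   # outcome mostly driven by feature 0

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
# Shuffle each feature in turn and measure the drop in accuracy
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, imp in zip(["feature_0", "feature_1", "feature_2"], result.importances_mean):
    print(f"{name}: {imp:.3f}")
```

Because the importance is measured by degrading one input at a time, the output can be read directly as "how much the prediction relies on this variable", which is the kind of explanation a clinician can check against domain knowledge.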
15:55
Hi-tech in rehabilitation medicine: ethical issues
Franco Molteni (Milan, Italy)