ISPRM

Posted on 01/12/2022

15:00

A national survey of innovation needs to facilitate the daily lives of children with disabilities: an AI-based analysis

Johanne Mensah-Gourmel (Brest, France), Maxime Bourgain (Sceaux, France), Maxime Galloy (Sceaux, France), Mario Veruete (Sceaux, France), Sylvain Brochard (Brest, France), Christelle Pons (Brest, France) and Arriel Benis (Holon, Israel)

 


15:05

SWADAPT2: effectiveness of a robotic driving-assistance module for people with neurological disabilities

Bastien Fraudet (Rennes, France), Emilie Leblong (Rennes, France), Marie Dandois (Rennes, France), Patrice Piette (Rennes, France), Estelle Ceze (Rennes, France), Benoit Nicolas (Rennes, France), Marie Babel (Rennes, France), François Pasteau (Rennes, France), Louise Devigne (Rennes, France) and Philippe Gallien (Rennes, France)

 


09:50

Prediction tools for stroke rehabilitation

Cathy Stinear (Auckland, New Zealand)
 

Rehabilitation plans for individual patients are based on several factors, including the patient’s expected potential for recovery. Clinicians make predictions every day about their patients’ expected response to different therapies and likely recovery. These predictions are based on clinical assessment and judgement. Clinicians’ predictions play a major role in determining whether a patient has access to rehabilitation services, and the overall course of their post-stroke care.
There is growing interest in providing clinicians with prediction tools so that their predictions can be more systematic, accurate, and consistent. These tools typically combine clinical, demographic, and biomarker information from neurophysiological or neuroimaging assessments.
Traditional regression modelling is giving way to machine learning methods to develop prediction tools that can combine several variables in easy-to-use scores and decision trees.
This presentation will describe what makes a good prediction tool for use in clinical practice, and summarise the prediction tools that are currently available for the rehabilitation of motor impairments after stroke.
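As a loose illustration of the kind of tool described above, the sketch below combines hypothetical clinical, demographic, and biomarker variables in a shallow decision tree. The variable names, synthetic data, and thresholds are invented for demonstration only and do not correspond to any published stroke prediction tool.

```python
# A minimal, illustrative sketch (not any published prediction tool):
# combining clinical, demographic, and biomarker variables in a
# decision tree to predict a motor-recovery category after stroke.
# All variable names and data here are hypothetical.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
n = 200

# Hypothetical predictors: an upper-limb strength score, age, and a
# binary neurophysiological biomarker (e.g. a motor response present).
strength = rng.integers(0, 11, n)     # 0-10 clinical strength score
age = rng.integers(40, 91, n)         # years
biomarker = rng.integers(0, 2, n)     # 1 = biomarker present

X = np.column_stack([strength, age, biomarker])

# Synthetic outcome: recovery category 0 (limited) or 1 (good),
# loosely driven by strength and the biomarker for illustration only.
y = ((strength >= 5) | (biomarker == 1)).astype(int)

# A shallow tree keeps the resulting tool easy to read at the bedside.
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
print(export_text(tree, feature_names=["strength", "age", "biomarker"]))

# Predict for a new patient: strength 3/10, 72 years old, biomarker present.
print(tree.predict([[3, 72, 1]]))
```

The design choice of a shallow, printable tree mirrors the abstract's point that clinically useful prediction tools must reduce several variables to easy-to-use scores and decision rules rather than opaque models.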

15:35

Explainable and trustworthy AI: a must-have for an effective clinical uptake

Alessandra Laura Giulia Pedrocchi (Milan, Italy)

The impact of AI in medicine has grown remarkably in three directions: as a decision-support system for clinicians, for the health system to improve workflows and reduce errors, and for patients to process their own data for greater awareness.
To improve their usability, considerable effort has recently been dedicated to explaining these algorithms. Barredo Arrieta et al. (2020) proposed a clear definition of Explainable AI (XAI): “Given an audience, an explainable Artificial Intelligence is one that produces details or reasons to make its functioning clear or easy to understand”. The aims of XAI are 1) to produce more explainable models while maintaining a high level of learning performance (e.g., prediction accuracy), and 2) to enable humans to understand, appropriately trust, and effectively manage the emerging generation of artificially intelligent partners.
In medicine, explainability is even more crucial, as it involves factors that other fields do not consider, such as the risks and responsibilities associated with decisions that affect people's lives. Beyond ethical issues, the consequences of malicious intent could be catastrophic.
A suite of XAI techniques has been proposed to make machine-learning results interpretable and available to clinicians and, eventually, to patients, comparing results with human intelligence so as to support clinical decisions about the best treatments and the sharing of those decisions with informed patients.
The potential impact of AI and XAI in rehabilitation medicine will be discussed, based on examples from the literature.
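To make the idea of post-hoc explanation concrete, the sketch below applies one simple model-agnostic technique, permutation feature importance, to a toy clinical classifier. The features, data, and model are hypothetical; the XAI literature referenced in this talk covers far richer methods (e.g. SHAP values, counterfactual explanations).

```python
# A minimal sketch of one model-agnostic explanation technique
# (permutation feature importance) applied to a toy clinical classifier.
# Features, data, and model are hypothetical and for illustration only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(1)
n = 300

# Hypothetical patient features.
X = np.column_stack([
    rng.normal(65, 10, n),   # age (years)
    rng.integers(0, 11, n),  # impairment score (0-10)
    rng.integers(0, 2, n),   # imaging biomarker (0/1)
])
# Synthetic outcome driven mainly by the impairment score.
y = (X[:, 1] >= 5).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=1).fit(X, y)

# How much does shuffling each feature degrade the model's accuracy?
# Larger drops indicate features the model relies on, which can be
# reported back to the clinician alongside the prediction.
result = permutation_importance(model, X, y, n_repeats=10, random_state=1)
for name, score in zip(["age", "impairment", "biomarker"], result.importances_mean):
    print(f"{name}: {score:.3f}")
```

Reporting which inputs drove a prediction is one small step toward the trust and shared decision-making with clinicians and patients that the abstract calls for.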

 

15:55 

Hi-tech in rehabilitation medicine: ethical issues

Franco Molteni (Milan, Italy)