185.A83 Machine Learning for Health Informatics

2021S, VU, 2.0 h, 3.0 EC, to be held in blocked form

Properties

  • Semester hours: 2.0
  • Credits: 3.0
  • Type: VU (Lecture and Exercise)
  • Format: Online

Learning outcomes

After successful completion of the course, students are able to understand and explain essential aspects of machine learning for medical applications and to implement solutions in small projects using Python, the industry standard. An important aspect of this course is to raise awareness of the ethically responsible use of AI and to consolidate the corresponding knowledge.

Subject of course

Please refer to the course webpage for updated information:

https://human-centered.ai/lv-185-a83-machine-learning-for-health-informatics-class-of-2021/

Medicine is evolving into a data-driven science. Health AI works to apply machine learning methods effectively and efficiently to problems across the field of health and the life sciences. This master's course takes a research-centered teaching approach. Topics covered include methods for combining human intelligence and machine intelligence to support medical decision making.

Since 2018, the European General Data Protection Regulation has explicitly provided for a legal "right to explanation", and the EU Parliament recently adopted a resolution on "explainable AI" as part of the European digitization initiative. This calls for solutions that enable medical experts to understand, replicate and comprehend machine results. The central focus of the class of 2021 is therefore, even more than before, on making machine decisions transparent, comprehensible and interpretable for medical experts.

A critical requirement for successful AI applications in the future is that human experts must be able to at least understand the context and to explore the underlying explanatory factors, with the goal of answering the question of WHY a particular machine decision was made. This is desirable in many domains, but mandatory in the medical domain. In addition, explainable AI should enable healthcare professionals to ask counterfactual ("what if?") questions in order to gain new insights. Ultimately, such approaches foster confidence in future artificial intelligence solutions, which will inevitably enter everyday medical practice.
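To make "exploring the underlying explanatory factors" concrete, the following is a minimal illustrative sketch in Python (not official course material; the dataset, model, and library choices are assumptions made for this example). It trains a standard classifier on a public medical dataset and computes permutation importance, one simple model-agnostic way to see which input features drive a model's decisions.

# Illustrative sketch only, not official course material.
# Assumptions: scikit-learn, its public breast-cancer dataset, and a
# random forest stand in for whatever data and models the course uses.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Shuffle one feature at a time and measure how much the test score
# drops: large drops mark features that the model's decisions rely on.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)
ranked = sorted(zip(X.columns, result.importances_mean),
                key=lambda t: -t[1])
for name, score in ranked[:5]:
    print(f"{name}: {score:.3f}")

Such a ranking answers a first "which factors mattered?" question; the course goes further, towards explanations that let a medical expert understand WHY a particular decision was made.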

For further questions, please contact the course director directly: Andreas Holzinger

Teaching methods

Interactive lecture. Elaboration of practical examples. Hands-on programming exercises on selected problems, in Python.

Mode of examination

Immanent (continuous assessment during the course)

Additional information

ECTS breakdown (sum = 75 h, corresponding to 3 ECTS, where 1 ECTS = 25 h of student workload):

a) Presence during the lectures: 8 x 3 h = 24 h
b) Preparation before and after the lectures: 8 x 1 h = 8 h
c) Preparation of assignments and presentation: 28 h + 2 h = 30 h
d) Written exam including exam preparation: 1 h + 12 h = 13 h

Total student workload = 75 h

Lecturers

Andreas Holzinger (course director)

Course dates

Day | Time          | Date                           | Description
Tue | 17:00 - 19:00 | 23.03.2021 - 29.06.2021 (LIVE) | Course

Machine Learning for Health Informatics - single appointments:

Day | Date       | Time          | Description
Tue | 23.03.2021 | 17:00 - 19:00 | Course
Tue | 13.04.2021 | 17:00 - 19:00 | Course
Tue | 20.04.2021 | 17:00 - 19:00 | Course
Tue | 27.04.2021 | 17:00 - 19:00 | Course
Tue | 04.05.2021 | 17:00 - 19:00 | Course
Tue | 11.05.2021 | 17:00 - 19:00 | Course
Tue | 18.05.2021 | 17:00 - 19:00 | Course
Tue | 01.06.2021 | 17:00 - 19:00 | Course
Tue | 08.06.2021 | 17:00 - 19:00 | Course
Tue | 15.06.2021 | 17:00 - 19:00 | Course
Tue | 22.06.2021 | 17:00 - 19:00 | Course
Tue | 29.06.2021 | 17:00 - 19:00 | Course
The course is held in blocked form.

Examination modalities

Collaboration in the interactive parts of the course. Assessment of the programming tasks. Written final exam.

Course registration

Begin            | End              | Deregistration end
09.03.2021 17:30 | 23.03.2021 20:30 |

Registration modalities

Please enroll via TISS and additionally send an e-mail to Andreas Holzinger; see the course homepage.

Curricula

Study Code | Curriculum                             | Obligation
066 646    | Computational Science and Engineering  | Not specified
066 936    | Medical Informatics                    | Mandatory elective

Literature

Holzinger, A. 2014. Biomedical Informatics: Discovering Knowledge in Big Data. New York: Springer, doi:10.1007/978-3-319-04528-3.

Holzinger, A. (ed.) 2016. Machine Learning for Health Informatics: State-of-the-Art and Future Challenges. Lecture Notes in Artificial Intelligence LNAI 9605. Cham: Springer International, doi:10.1007/978-3-319-50478-0.

Holzinger, A. 2016. Interactive Machine Learning for Health Informatics: When do we need the human-in-the-loop? Brain Informatics, 3, (2), 119-131, doi:10.1007/s40708-016-0042-6.

Previous knowledge

Interest in machine learning with applications in health informatics, with a special focus on privacy, security, data protection, safety, and ethical and social issues, as well as in explainable AI [4] and the doctor-in-the-loop [5], which led to the development of the concept of causability (in analogy to usability) for evaluating the quality of explanations [6, 7, 8]. A minimal "what if?" code sketch follows the references below.

[4] Andreas Holzinger (2018). From Machine Learning to Explainable AI. 2018 World Symposium on Digital Intelligence for Systems and Machines (DISA), 23-25 Aug. 2018, 55-66, doi:10.1109/DISA.2018.8490530.

[5] Andreas Holzinger (2016): Interactive Machine Learning for Health Informatics: When do we need the human-in-the-loop? Brain Informatics, 3, (2), 119-131, doi:10.1007/s40708-016-0042-6

[6] Andreas Holzinger, Georg Langs, Helmut Denk, Kurt Zatloukal & Heimo Müller (2019). Causability and Explainability of Artificial Intelligence in Medicine. Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery, 9, (4), 1-13, doi:10.1002/widm.1312

[7] Andreas Holzinger, Andre Carrington & Heimo Müller (2020). Measuring the Quality of Explanations: The System Causability Scale (SCS). Comparing Human and Machine Explanations. KI - Künstliche Intelligenz (German Journal of Artificial Intelligence), Special Issue on Interactive Machine Learning, edited by Kristian Kersting, TU Darmstadt, 34, (2), 193-198, doi:10.1007/s13218-020-00636-z

[8] Andreas Holzinger, Bernd Malle, Anna Saranti & Bastian Pfeifer (2021). Towards Multi-Modal Causability with Graph Neural Networks enabling Information Fusion for explainable AI. Information Fusion, 71, (7), 28-37, doi:10.1016/j.inffus.2021.01.008

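As a small companion to the counterfactual "what if?" questions mentioned in the course description, here is a minimal hedged sketch (the dataset, model, and perturbed feature are illustrative assumptions, not course material): one feature of a single record is perturbed and the shift in the predicted probability is observed.

# Hedged sketch of a counterfactual "what if?" probe.
# Assumptions: scikit-learn and its public breast-cancer dataset.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X, y)

record = X.iloc[[0]].copy()                  # one patient record
p_before = model.predict_proba(record)[0, 1]

# Counterfactual question: what if "mean radius" were 20% larger?
record["mean radius"] *= 1.2
p_after = model.predict_proba(record)[0, 1]

print(f"P(class 1) before: {p_before:.3f}, after: {p_after:.3f}")

Whether such an explanation actually helps a doctor is exactly what the causability measures in [6] and [7] are designed to assess.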

More information is available on the course homepage!

Language

English