Improving clinical documentation: automatic inference of ICD-10 codes from patient notes using BERT model
Journal
The Journal of Supercomputing
Date Issued
2023
Author(s)
Al-Bashabsheh, Emran
Alaiad, Ahmad
Al-Ayyoub, Mahmoud
Beni-Yonis, Othman
Abualigah, Laith
Abstract
Electronic health records contain a vast amount of free-text health data written by physicians as patient clinical notes. The World Health Organization released the International Classification of Diseases, 10th revision (ICD-10), a system that physicians and other healthcare providers use to classify and code all diagnoses and symptoms recorded in conjunction with hospital care, so that the data can be easily stored, retrieved, and analyzed for decision-making. To automate this coding task, this paper introduces a system that classifies clinical notes into ICD-10 codes. The paper examines 7541 clinical notes collected from a health institute in Jordan and annotated by ICD-10 coders; an additional external dataset is used to augment the original one. Two approaches are presented: a Baseline model and a Pipeline model. The Baseline model represents the text with Word2vec embeddings and combines a long short-term memory (LSTM) network, a convolutional neural network, and two fully connected layers. The Pipeline approach adopts a transformer model, Bidirectional Encoder Representations from Transformers (BERT), pre-trained on a related health domain. The Pipeline model builds on two BERT models: the first classifies the category codes, i.e., the first three characters of an ICD-10 code, and the second uses the output of the first (general) BERT model as input to a specialized BERT model that classifies the clinical notes into full ICD-10 codes. Both the Baseline and Pipeline models apply the focal loss function to mitigate class imbalance. The Pipeline model achieves strong performance, with an F1 score of 92.5%, recall of 84.9%, precision of 91.8%, and accuracy of 84.97%.
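The two-stage design and the use of focal loss described in the abstract can be illustrated with a minimal sketch. Everything below is an assumption for illustration only: the checkpoint name (emilyalsentzer/Bio_ClinicalBERT), the label counts, and the way the first model's prediction is passed to the second (here, prepended to the note text) are placeholders, not details taken from the paper.

# Minimal sketch of the two-stage ("Pipeline") idea from the abstract.
# Checkpoint names, label counts, and the stage-1 -> stage-2 handoff
# are assumptions for illustration, not details from the paper.
import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModelForSequenceClassification

class FocalLoss(torch.nn.Module):
    """Focal loss (Lin et al., 2017): down-weights easy examples during
    training to mitigate class imbalance, the role the abstract gives it.
    It would replace the default cross-entropy objective when fine-tuning."""
    def __init__(self, gamma: float = 2.0):
        super().__init__()
        self.gamma = gamma

    def forward(self, logits, targets):
        ce = F.cross_entropy(logits, targets, reduction="none")
        pt = torch.exp(-ce)                     # probability of the true class
        return ((1.0 - pt) ** self.gamma * ce).mean()

CHECKPOINT = "emilyalsentzer/Bio_ClinicalBERT"  # assumed clinical-domain BERT
N_CATEGORIES = 200                              # assumed number of 3-character categories
N_FULL_CODES = 1500                             # assumed number of full ICD-10 codes

tokenizer = AutoTokenizer.from_pretrained(CHECKPOINT)
stage1 = AutoModelForSequenceClassification.from_pretrained(CHECKPOINT, num_labels=N_CATEGORIES)
stage2 = AutoModelForSequenceClassification.from_pretrained(CHECKPOINT, num_labels=N_FULL_CODES)

def predict_full_code(note: str) -> int:
    """Stage 1 predicts the 3-character category; stage 2 sees the note plus
    that prediction (one possible way to feed the first model's output into
    the second) and returns the index of a full ICD-10 code."""
    enc = tokenizer(note, truncation=True, return_tensors="pt")
    with torch.no_grad():
        cat_id = stage1(**enc).logits.argmax(dim=-1).item()
    enc2 = tokenizer(f"[CATEGORY_{cat_id}] {note}", truncation=True, return_tensors="pt")
    with torch.no_grad():
        return stage2(**enc2).logits.argmax(dim=-1).item()

The actual paper may couple the two models differently (for example, by passing hidden states rather than a predicted label); this sketch only shows the general category-then-full-code cascade the abstract describes.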
Scopus© citations
2
Acquisition Date
Dec 11, 2024