A Comprehensive Analysis of Deep Learning Techniques for Classifying Knee Abnormalities
Divyakant Meva 1, HirenKumar Kukadiya 2
1 Associate Professor, Faculty of Computer Application, Marwadi University, Rajkot (Gujarat), India
2 Assistant Professor, Gandhinagar Institute of Computer Science and Applications, Kathmandu, Gandhinagar University, Gandhinagar (Gujarat) 382721, India
ABSTRACT
Knee abnormalities represent one of the most common orthopedic conditions affecting individuals across different age groups, significantly impacting mobility and quality of life. Prompt and accurate diagnosis of these abnormalities is essential for successful treatment planning. Deep learning techniques have transformed medical image analysis over the last ten years, providing promising solutions for automated knee abnormality classification from a variety of imaging modalities. This comprehensive review examines the current state-of-the-art deep learning techniques for knee abnormality classification, analyzing their architectures, performance metrics, clinical applications, and limitations. We systematically categorize these approaches based on the imaging modalities used (MRI, X-ray, ultrasound), the specific knee abnormalities targeted, and the underlying deep learning architectures employed. Additionally, we discuss the challenges in this field, including limited dataset availability, class imbalance, interpretability issues, and the gap between research and clinical implementation. Finally, we highlight emerging trends and future research directions that could further enhance the clinical utility of deep learning for knee abnormality classification.
Received 15 January 2024; Accepted 17 February 2025; Published 31 March 2025
Corresponding Author: Divyakant Meva, divyakant.meva@marwadieducation.edu.in
DOI: 10.29121/granthaalayah.v13.i3.2025.5992
Funding: This research received no specific grant from any funding agency in the public, commercial, or not-for-profit sectors.
Copyright: © 2025 The Author(s). This work is licensed under a Creative Commons Attribution 4.0 International License. With the CC-BY license, authors retain the copyright, allowing anyone to download, reuse, re-print, modify, distribute, and/or copy their contribution. The work must be properly attributed to its author.
Keywords: Computer-Aided Diagnosis, Classification, Convolutional Neural Networks, Deep Learning, Knee Abnormalities, Medical Imaging, MRI
1. INTRODUCTION
Knee disorders, including ligament tears, meniscal injuries, cartilage damage, and osteoarthritis, represent a significant healthcare burden worldwide. According to recent epidemiological studies, knee injuries account for approximately 15-50% of all sports injuries, while knee osteoarthritis affects over 250 million people globally (Wang et al., 2023). Accurate diagnosis of these conditions is essential for appropriate treatment planning and optimal patient outcomes.
The diagnosis of knee abnormalities has historically depended on clinical examination in conjunction with a variety of imaging techniques, including computed tomography (CT), ultrasonography, X-rays, and magnetic resonance imaging (MRI). Because of its superior contrast resolution and multiplanar imaging capabilities, MRI has become the gold standard for assessing the knee's soft tissue structures. However, knee MRI interpretation can be difficult, time-consuming, and susceptible to inter-observer variability, especially in complex cases.
In recent years, artificial intelligence (AI), and deep learning (DL) in particular, has shown tremendous promise in medical image analysis. Across a variety of medical imaging domains, deep learning algorithms, particularly convolutional neural networks (CNNs), have demonstrated remarkable performance in tasks including classification, segmentation, and detection. Automated feature extraction, the capacity to recognize intricate patterns, and the potential to reduce diagnostic errors and interpretation time are just a few of the benefits these methods offer.
The goal of this paper is to present a thorough examination of the most advanced deep learning methods available for classifying knee abnormalities. We comprehensively categorize and compare these methods according to performance criteria, architectural designs, targeted abnormalities, and imaging modalities. Additionally, we discuss the challenges, limitations, and potential future directions in this evolving field.
2. Methodology for Literature Review
This review
follows a systematic approach to identify, select, and critically appraise
relevant research on deep learning techniques for knee abnormality
classification. We conducted a comprehensive search across major electronic
databases, including PubMed, IEEE Xplore, ACM Digital Library, Google Scholar,
and Scopus, covering publications from January 2015 to September 2024.
The search
strategy employed combinations of keywords including but not limited to:
"deep learning," "convolutional neural networks,"
"knee," "abnormalities," "classification,"
"detection," "MRI," "X-ray,"
"osteoarthritis," "meniscus," "ligament," and
"cartilage." Additional relevant articles were identified through
reference lists of selected papers and review articles.
Inclusion
criteria were: (1) original research papers published in peer-reviewed journals
or conferences; (2) studies focusing on deep learning approaches for knee
abnormality classification; (3) clear description of the methodology, dataset,
and evaluation metrics; and (4) articles written in English. Case reports,
editorials, letters, and conference abstracts without full papers were
excluded.
3. Overview of Knee Abnormalities and Imaging Modalities
3.1. Common Knee Abnormalities
Knee abnormalities encompass a wide spectrum of conditions affecting different anatomical structures:
1) Meniscal Tears: The menisci are C-shaped fibrocartilaginous structures that cushion the knee joint. Tears can occur due to traumatic injuries or degenerative processes.
2) Ligament Injuries: The anterior cruciate ligament (ACL), posterior cruciate ligament (PCL), medial collateral ligament (MCL), and lateral collateral ligament (LCL) are the four main ligaments of the knee. ACL tears are especially frequent during athletic activities.
3) Osteoarthritis (OA): A degenerative joint disease characterized by cartilage loss, subchondral bone changes, and inflammation.
4) Cartilage Defects: Focal lesions or widespread thinning of the articular cartilage.
5) Bone Marrow Lesions: Areas of increased signal intensity on MRI within the subchondral bone.
6) Synovitis: Inflammation of the synovial membrane lining the joint cavity.
7) Tendinopathies: Inflammatory or degenerative conditions affecting tendons, particularly the patellar and quadriceps tendons.
3.2. Imaging Modalities for Knee Assessment
Different imaging modalities offer complementary information for knee evaluation:
1) Magnetic Resonance Imaging (MRI): Provides excellent visualization of soft tissues, including ligaments, menisci, cartilage, synovium, and bone marrow. Various MRI sequences (T1-weighted, T2-weighted, proton density, fat-suppressed) highlight different aspects of pathology.
2) X-ray (Radiography): Primarily visualizes bony structures, joint space narrowing, osteophytes, and gross alignment issues. Commonly used for initial assessment and OA staging.
3) Computed Tomography (CT): Offers detailed bone imaging and can be useful for complex fractures or preoperative planning.
4) Ultrasound: Enables dynamic assessment of tendons, ligaments, joint effusion, and synovitis. Benefits include lack of radiation, cost-effectiveness, and real-time imaging.
4. Foundations of Deep Learning for Medical Image Analysis
4.1. Basic Principles of Deep Learning
Deep learning is a subset of machine learning that uses multi-layered artificial neural networks to extract hierarchical representations from data. In contrast to conventional machine learning techniques, which require manual feature engineering, deep learning algorithms automatically discover relevant features through end-to-end training on large datasets.
The artificial neuron, the basic unit of deep neural networks, applies a non-linear activation function to a weighted sum of its inputs. These neurons are arranged in layers, each of which transforms the preceding layer's outputs into progressively more abstract features.
4.2. Convolutional Neural Networks
Convolutional Neural Networks (CNNs) have become the most popular deep learning architecture for image analysis applications. Their design, inspired by the structure of the visual cortex, incorporates three fundamental concepts: local receptive fields, weight sharing, and spatial pooling.
The typical CNN architecture consists of the following components (a minimal code sketch follows this list):
1) Convolutional Layers: Apply learnable filters to input data, capturing local patterns.
2) Activation Functions: Introduce non-linearity, with ReLU (Rectified Linear Unit) being the most common.
3) Pooling Layers: Perform downsampling to reduce spatial dimensions and computational complexity.
4) Fully Connected Layers: Connect every neuron to all neurons in the adjacent layers, typically used in the final stages for classification.
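To make the composition of these layers concrete, the following is a minimal, illustrative PyTorch sketch of a small CNN for a hypothetical binary knee-abnormality task; the layer sizes and input resolution are assumptions for illustration and are not drawn from any study reviewed here.

```python
# Minimal sketch of a small CNN for binary knee-abnormality classification,
# assuming single-channel (grayscale) 224x224 input slices. Layer sizes are
# illustrative only.
import torch
import torch.nn as nn

class SimpleKneeCNN(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),   # convolutional layer: learnable local filters
            nn.ReLU(inplace=True),                        # non-linear activation
            nn.MaxPool2d(2),                              # pooling: spatial downsampling
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 56 * 56, num_classes),         # fully connected layer for classification
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

# Example forward pass on a dummy batch of 4 slices.
logits = SimpleKneeCNN()(torch.randn(4, 1, 224, 224))
print(logits.shape)  # torch.Size([4, 2])
```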
4.3. Transfer Learning
Transfer learning leverages knowledge acquired from solving one problem to improve performance on a related but distinct task. In medical imaging, where large annotated datasets are scarce, networks pre-trained on natural image datasets (such as ImageNet) are fine-tuned for specific medical tasks. Because few large-scale annotated knee imaging datasets are available, this approach has proven particularly effective for classifying knee abnormalities.
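A minimal sketch of this strategy, assuming the torchvision ResNet-50 weights pre-trained on ImageNet and a hypothetical two-class knee task, is shown below; real studies differ in which layers they unfreeze and how they schedule fine-tuning.

```python
# Sketch of transfer learning: a ResNet-50 pre-trained on ImageNet is adapted
# for a hypothetical binary knee-abnormality task by replacing its final
# fully connected layer and freezing the convolutional backbone.
import torch.nn as nn
import torch.optim as optim
from torchvision import models

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)

# Freeze the pre-trained backbone so only the new head is trained.
for param in model.parameters():
    param.requires_grad = False

# Replace the 1000-class ImageNet head with a 2-class knee-abnormality head.
model.fc = nn.Linear(model.fc.in_features, 2)

# Only the parameters of the new head are passed to the optimizer.
optimizer = optim.Adam(model.fc.parameters(), lr=1e-4)
```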
4.4. Advanced Architectures
Recent advances in deep learning have introduced several sophisticated architectures with applications in knee abnormality classification (a sketch of the residual block underlying ResNets follows this list):
1) Residual Networks (ResNets): Address the vanishing gradient problem through skip connections, enabling the training of very deep networks.
2) Densely Connected Networks (DenseNets): Each layer receives feature maps from all preceding layers, enhancing feature reuse and reducing parameter count.
3) Attention Mechanisms: Allow models to focus on relevant parts of the input when making predictions, particularly useful for identifying small abnormalities.
4) Vision Transformers (ViTs): Adapt transformer architectures from natural language processing to image analysis, showing promising results in medical imaging.
5) Graph Convolutional Networks (GCNs): Incorporate anatomical or spatial relationships between different structures in the knee.
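As a concrete example of the first item, a residual block can be sketched as follows; the channel count is an illustrative assumption, not taken from any specific ResNet variant discussed in the literature above.

```python
# Minimal sketch of a residual block (the building block of ResNets): the input
# is added back to the convolutional output via a skip connection, which eases
# gradient flow in very deep networks.
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + x)   # skip connection: add the input to the transformed output

block = ResidualBlock(64)
print(block(torch.randn(1, 64, 56, 56)).shape)  # torch.Size([1, 64, 56, 56])
```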
5. Deep Learning Approaches for Knee Abnormality Classification
5.1. MRI-based Classification
MRI-based deep
learning approaches represent the largest category in knee abnormality
classification research, given MRI's superior soft tissue contrast and ability
to visualize multiple knee structures.
5.1.1. Meniscal Tear Classification
Meniscal tears
are among the most commonly targeted abnormalities. Zhang et al. (2020) proposed a 3D CNN architecture for meniscal
tear classification using volumetric MRI data. Their approach achieved 89.2%
accuracy on a dataset of 427 knee MRI examinations, outperforming traditional
2D CNN approaches. The authors incorporated attention mechanisms to focus on
relevant regions, improving the model's performance particularly for subtle
tears.
In a different
approach, Liu and colleagues (2021)
developed a two-stage framework combining a U-Net for meniscus segmentation
with a ResNet-50 classifier for tear detection. This method achieved a
sensitivity of 91.8% and specificity of 87.3%, demonstrating the potential
benefits of incorporating anatomical localization prior to classification.
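The general segment-then-crop-then-classify pattern can be illustrated with a short sketch. This is not the authors' implementation; `seg_net` and `cls_net` are hypothetical stand-ins for a trained U-Net and ResNet-50, and the crop logic is a simple bounding-box heuristic used here only to show the data flow.

```python
# Hedged sketch of a two-stage pipeline: a segmentation network produces a
# meniscus mask, the mask's bounding box is cropped from the image, and the
# crop is passed to a classifier.
import torch
import torch.nn as nn
import torch.nn.functional as F

def crop_to_mask(image: torch.Tensor, mask: torch.Tensor, pad: int = 8) -> torch.Tensor:
    """Crop a (1, H, W) image to the bounding box of a binary mask, with padding."""
    ys, xs = torch.nonzero(mask > 0.5, as_tuple=True)
    if ys.numel() == 0:                                   # nothing segmented: keep full image
        return image
    y0, y1 = max(int(ys.min()) - pad, 0), int(ys.max()) + pad
    x0, x1 = max(int(xs.min()) - pad, 0), int(xs.max()) + pad
    return image[:, y0:y1, x0:x1]

def classify_with_localization(image: torch.Tensor, seg_net: nn.Module, cls_net: nn.Module) -> torch.Tensor:
    mask = torch.sigmoid(seg_net(image.unsqueeze(0)))[0, 0]    # stage 1: segmentation mask
    roi = crop_to_mask(image, mask)                            # anatomical localization
    roi = F.interpolate(roi.unsqueeze(0), size=(224, 224))     # resize the crop for the classifier
    return cls_net(roi)                                        # stage 2: tear classification

# Hypothetical stand-ins for a trained U-Net and ResNet-50 classifier.
seg_net = nn.Conv2d(1, 1, kernel_size=3, padding=1)
cls_net = nn.Sequential(nn.Flatten(), nn.Linear(224 * 224, 2))
print(classify_with_localization(torch.randn(1, 256, 256), seg_net, cls_net).shape)  # torch.Size([1, 2])
```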
5.1.2. Anterior Cruciate Ligament (ACL) Injury Detection
For ACL injury
classification, Chen et al. (2022) implemented a multi-view CNN that processes
sagittal, coronal, and axial MRI slices simultaneously. Their ensemble
approach, combining predictions from different views, achieved an AUC of 0.94
for complete ACL tear detection. The multi-view strategy proved particularly
effective for cases where the ACL was partially visualized in a single plane.
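Multi-view ensembling of this kind can be sketched as scoring each imaging plane with its own model and averaging the resulting class probabilities; the models and tensors below are hypothetical placeholders, not the cited architecture.

```python
# Hedged sketch of a multi-view ensemble: per-plane CNNs produce softmax
# probabilities that are averaged into a single prediction.
import torch
import torch.nn as nn

def ensemble_predict(views: dict[str, torch.Tensor], models: dict[str, nn.Module]) -> torch.Tensor:
    """Average per-view class probabilities; `views` maps plane name -> image batch."""
    probs = [torch.softmax(models[name](img), dim=1) for name, img in views.items()]
    return torch.stack(probs).mean(dim=0)

# Dummy per-view models standing in for trained CNNs.
models = {
    plane: nn.Sequential(nn.Flatten(), nn.Linear(224 * 224, 2))
    for plane in ("sagittal", "coronal", "axial")
}
views = {plane: torch.randn(4, 1, 224, 224) for plane in models}
print(ensemble_predict(views, models).shape)  # torch.Size([4, 2])
```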
An innovative
approach by Kumar et al. (2023) utilized a 3D DenseNet architecture with
spatial attention for ACL tear classification. Their model achieved 93.5%
accuracy and demonstrated excellent generalization across different MRI
protocols and scanner types, addressing a significant challenge in clinical
translation.
5.1.3. Multi-structure Classification
Several studies
have attempted to simultaneously classify abnormalities across multiple knee
structures. Wang et al. (2023) proposed a hierarchical CNN architecture
for classifying nine different knee abnormalities from MRI. Their model first
classified abnormalities into broad categories (ligament, meniscus, cartilage,
bone) before making specific diagnoses within each category. This hierarchical
approach achieved an average accuracy of 87.6% across all abnormality types,
with particularly high performance for ACL tears (92.3%) and meniscal tears
(90.1%).
Similarly, García-Castro et al. (2024) developed a multi-task learning framework
that simultaneously performed segmentation and classification of knee
structures. By sharing features between these related tasks, their approach
improved classification performance, particularly for cartilage defects and
bone marrow lesions, which can be subtle and difficult to detect.
5.2. X-ray-based Classification
While MRI
provides superior soft tissue contrast, X-rays remain the most accessible and
commonly used imaging modality for initial knee assessment, particularly for
osteoarthritis.
5.2.1. Osteoarthritis Classification
Tiulpin et al. (2019) proposed a Siamese CNN architecture for
knee osteoarthritis grading from plain radiographs. Their approach explicitly
incorporated symmetry information by comparing left and right knees, achieving
a quadratic kappa coefficient of 0.83 for Kellgren-Lawrence grading on the OAI
dataset, outperforming previous methods.
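The symmetry idea behind a Siamese design can be sketched as a shared-weight backbone that encodes both knees before a joint grading head; the backbone and head sizes below are illustrative assumptions and do not reproduce the cited model.

```python
# Hedged sketch of a Siamese-style grader: one backbone (shared weights)
# encodes the left and right knee radiographs, and the concatenated
# embeddings are mapped to a Kellgren-Lawrence grade (0-4).
import torch
import torch.nn as nn

class SiameseKLGrader(nn.Module):
    def __init__(self, embed_dim: int = 64, num_grades: int = 5):
        super().__init__()
        self.backbone = nn.Sequential(            # shared weights for both knees
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(4),
            nn.Flatten(), nn.Linear(8 * 4 * 4, embed_dim),
        )
        self.head = nn.Linear(2 * embed_dim, num_grades)

    def forward(self, left: torch.Tensor, right: torch.Tensor) -> torch.Tensor:
        z = torch.cat([self.backbone(left), self.backbone(right)], dim=1)
        return self.head(z)

grader = SiameseKLGrader()
print(grader(torch.randn(2, 1, 128, 128), torch.randn(2, 1, 128, 128)).shape)  # torch.Size([2, 5])
```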
Building on this
work, Leung et al. (2022) implemented a weakly supervised learning
approach using only image-level labels to automatically identify radiographic
features associated with OA progression. Their model not only classified
current OA severity but also predicted progression with an AUC of 0.78,
potentially offering clinically valuable prognostic information.
5.2.2. Detection of Subtle Radiographic Features
Recent work has
focused on detecting subtle radiographic features that may precede obvious OA
changes. Zhang et al. (2023) utilized a Vision Transformer architecture
to detect early osteophytes and subchondral sclerosis, achieving higher
sensitivity (84.2% vs. 72.1%) than experienced radiologists for early-stage
changes. Their approach incorporated spatial attention mechanisms that
highlighted relevant regions for model decisions, enhancing interpretability.
5.3. Ultrasound-based Classification
Ultrasound offers the advantages of real-time imaging, lack of radiation, and lower cost, albeit with greater operator dependency.
Kim et al. (2021) developed a CNN approach for classifying
meniscal tears from ultrasound images, achieving 82.3% accuracy. While lower
than MRI-based approaches, their method demonstrated potential for
point-of-care screening in resource-limited settings.
For ligament
assessment, Raza et al. (2022) proposed a transfer learning approach using
EfficientNet-B3 pre-trained on ImageNet and fine-tuned on ultrasound images for
ACL and PCL tear classification. Their method achieved 85.7% accuracy for ACL
and 83.2% for PCL tears, offering a viable alternative for cases where MRI is
contraindicated or unavailable.
5.4. Multimodal Approaches
Integrating
information from multiple imaging modalities can potentially improve
classification performance by leveraging complementary information.
Lee et al. (2023) proposed a dual-stream network that
simultaneously processed MRI and X-ray images for comprehensive OA assessment.
Their fusion approach, which combined features at multiple levels, achieved
higher accuracy (91.2%) for OA classification than either modality alone (87.5%
for MRI, 84.3% for X-ray). The authors noted that X-rays contributed valuable
information about bone alignment and joint space narrowing, while MRI provided
critical soft tissue details.
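Late feature fusion of two imaging streams can be sketched as follows; the encoders and feature dimensions are illustrative assumptions, not the cited dual-stream architecture, which fused features at multiple levels rather than only at the final stage.

```python
# Hedged sketch of late fusion: separate encoders extract MRI and X-ray
# features, which are concatenated before a shared OA-grade classifier.
import torch
import torch.nn as nn

def make_encoder(feat_dim: int) -> nn.Sequential:
    return nn.Sequential(
        nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(2), nn.Flatten(), nn.Linear(8 * 2 * 2, feat_dim),
    )

class DualStreamFusion(nn.Module):
    def __init__(self, feat_dim: int = 32, num_classes: int = 5):
        super().__init__()
        self.mri_encoder = make_encoder(feat_dim)    # soft-tissue detail stream
        self.xray_encoder = make_encoder(feat_dim)   # bone alignment / joint-space stream
        self.classifier = nn.Linear(2 * feat_dim, num_classes)

    def forward(self, mri: torch.Tensor, xray: torch.Tensor) -> torch.Tensor:
        fused = torch.cat([self.mri_encoder(mri), self.xray_encoder(xray)], dim=1)
        return self.classifier(fused)

model = DualStreamFusion()
print(model(torch.randn(2, 1, 160, 160), torch.randn(2, 1, 160, 160)).shape)  # torch.Size([2, 5])
```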
Similarly, Park et al. (2024) developed a multimodal framework
incorporating clinical data (symptoms, patient history) alongside imaging
features. This clinically-informed approach improved classification performance
for meniscal tears by 4.3% compared to image-only models, highlighting the
value of integrating clinical context.
6. Performance Comparison and Evaluation Metrics
6.1. Commonly Used Evaluation Metrics
Studies on knee abnormality classification employ various metrics to evaluate performance (a worked example follows this list):
1) Accuracy: The proportion of cases that are correctly classified.
2) Sensitivity/Recall: The ability to correctly identify abnormal cases.
3) Specificity: The ability to correctly identify normal cases.
4) Precision: The proportion of positive predictions that are truly abnormal.
5) F1-Score: The harmonic mean of precision and recall.
6) Area Under the ROC Curve (AUC): Measures discrimination ability across different threshold settings.
7) Quadratic-Weighted Kappa: Used particularly for ordinal classification tasks such as OA grading.
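As a concrete illustration, these metrics can be computed with scikit-learn on a toy set of predictions; all values below are made up for illustration and do not correspond to any study in this review.

```python
# Worked example of the evaluation metrics above on illustrative toy data.
from sklearn.metrics import (accuracy_score, recall_score, precision_score,
                             f1_score, roc_auc_score, cohen_kappa_score,
                             confusion_matrix)

y_true  = [0, 0, 1, 1, 1, 0, 1, 0]                    # 1 = abnormal, 0 = normal
y_pred  = [0, 1, 1, 1, 0, 0, 1, 0]                    # hard predictions
y_score = [0.2, 0.6, 0.9, 0.8, 0.4, 0.1, 0.7, 0.3]    # predicted P(abnormal)

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print("Accuracy:   ", accuracy_score(y_true, y_pred))
print("Sensitivity:", recall_score(y_true, y_pred))        # TP / (TP + FN)
print("Specificity:", tn / (tn + fp))                      # TN / (TN + FP)
print("Precision:  ", precision_score(y_true, y_pred))     # TP / (TP + FP)
print("F1-score:   ", f1_score(y_true, y_pred))
print("AUC:        ", roc_auc_score(y_true, y_score))

# Quadratic-weighted kappa for ordinal grading (e.g. Kellgren-Lawrence 0-4).
kl_true = [0, 1, 2, 2, 3, 4, 1, 0]
kl_pred = [0, 1, 2, 3, 3, 4, 2, 0]
print("Quadratic kappa:", cohen_kappa_score(kl_true, kl_pred, weights="quadratic"))
```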
6.2. Comparative Analysis of Different Approaches
Table
1 summarizes the performance of
key studies based on imaging modality and target abnormality.
Table 1 Performance Comparison of Deep Learning Methods for Knee Abnormality Classification

| Study | Imaging Modality | Target Abnormality | Architecture | Performance |
|---|---|---|---|---|
| Zhang et al. (2020) | MRI (3D) | Meniscal tears | 3D CNN + Attention | Accuracy: 89.2%, AUC: 0.92 |
| Liu et al. (2021) | MRI (2D) | Meniscal tears | U-Net + ResNet-50 | Sensitivity: 91.8%, Specificity: 87.3% |
| Chen et al. (2022) | MRI (Multi-view) | ACL tears | Multi-view CNN Ensemble | AUC: 0.94, Accuracy: 90.7% |
| Kumar et al. (2023) | MRI (3D) | ACL tears | 3D DenseNet + Attention | Accuracy: 93.5%, F1: 0.92 |
| Wang et al. (2023) | MRI (2D) | Multiple (9 abnormalities) | Hierarchical CNN | Average Accuracy: 87.6% |
| García-Castro et al. (2024) | MRI (2D) | Multiple + Segmentation | Multi-task Network | Average F1: 0.88 |
| Tiulpin et al. (2019) | X-ray | Osteoarthritis (KL grading) | Siamese CNN | Kappa: 0.83, Accuracy: 81.1% |
| Leung et al. (2022) | X-ray | OA + Progression | Weakly Supervised CNN | AUC: 0.78 (progression) |
| Zhang et al. (2023) | X-ray | Early OA features | Vision Transformer | Sensitivity: 84.2%, AUC: 0.87 |
| Kim et al. (2021) | Ultrasound | Meniscal tears | VGG-16 (Transfer Learning) | Accuracy: 82.3%, AUC: 0.85 |
| Raza et al. (2022) | Ultrasound | ACL/PCL tears | EfficientNet-B3 | ACL Accuracy: 85.7%, PCL: 83.2% |
| Lee et al. (2023) | MRI + X-ray | Osteoarthritis | Dual-stream Network | Accuracy: 91.2%, Kappa: 0.88 |
| Park et al. (2024) | MRI + Clinical data | Meniscal tears | Multimodal Fusion | Accuracy: 93.1%, AUC: 0.94 |
From this comparison, several trends emerge:
1) MRI-based approaches generally achieve higher performance than X-ray or ultrasound-based methods.
2) 3D and multi-view approaches tend to outperform single-slice 2D methods.
3) The incorporation of attention mechanisms consistently improves performance.
4) Multimodal approaches show promise in combining complementary information.
5) Performance varies by target abnormality, with ligament and meniscal tears generally achieving higher accuracy than cartilage defects or early OA changes.
7. Challenges and Limitations
Despite
significant progress, several challenges limit the clinical translation of deep
learning approaches for knee abnormality classification:
7.1. Dataset Limitations
Most studies rely
on relatively small, often single-institution datasets, raising concerns about
generalizability. The largest publicly available dataset, the Osteoarthritis
Initiative (OAI), primarily focuses on osteoarthritis, with limited annotation
for other abnormalities. Additionally, class imbalance is common, with normal
cases typically outnumbering abnormal ones, potentially biasing algorithms
toward majority classes.
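One common mitigation for such imbalance is to weight the training loss inversely to class frequency so that scarce abnormal cases are not overwhelmed by the normal majority; a minimal sketch with hypothetical class counts is shown below.

```python
# Sketch of class-weighted cross-entropy for an imbalanced knee dataset.
# The counts are hypothetical and used only to illustrate the weighting.
import torch
import torch.nn as nn

class_counts = torch.tensor([900.0, 100.0])        # e.g. 900 normal vs. 100 abnormal studies
class_weights = class_counts.sum() / (len(class_counts) * class_counts)
criterion = nn.CrossEntropyLoss(weight=class_weights)

# The minority (abnormal) class now contributes 9x more per sample to the loss.
print(class_weights)   # tensor([0.5556, 5.0000])
```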
7.2. Standardization and Reproducibility
Variations in MRI
acquisition parameters, scanner types, and imaging protocols pose significant
challenges for model generalization. Furthermore, inconsistent reporting of
methodology, evaluation metrics, and validation strategies makes direct
comparison between studies difficult.
7.3. Interpretability and Explainability
The majority of deep learning techniques operate as "black boxes," offering little insight into how they reach their decisions. Although methods such as gradient-based visualization and attention maps have been applied, they frequently lack the specificity needed for clinical confidence. Because clinicians must understand the reasoning behind algorithmic judgments, this lack of interpretability poses a significant obstacle to clinical implementation.
7.4. Clinical Integration
The gap between
research performance and clinical utility remains substantial. Few studies have
conducted prospective clinical evaluations or assessed the impact of deep
learning systems on clinical decision-making and patient outcomes.
Additionally, regulatory approval pathways for these systems are still
evolving, with concerns about safety, efficacy, and liability.
8. Future Directions
Several promising
research directions could address current limitations and advance the field:
8.1. Federated Learning and Multi-institutional Collaboration
Federated learning approaches, which enable model training across multiple institutions without sharing raw data, could help overcome dataset limitations. Initiatives such as the Federated Tumor Segmentation (FeTS) challenge offer a template for similar collaboration in knee imaging.
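The core idea, averaging locally trained model weights instead of pooling images, can be sketched as a minimal federated-averaging (FedAvg) routine; the local models below are hypothetical placeholders, and real deployments additionally weight by local dataset size and use secure aggregation.

```python
# Minimal sketch of federated averaging (FedAvg): each institution trains a
# local copy of the model on its own data, and only the resulting weights are
# averaged centrally, so raw images never leave the institution.
import copy
import torch
import torch.nn as nn

def federated_average(local_models: list[nn.Module]) -> nn.Module:
    """Average the parameters of locally trained models into a global model."""
    global_model = copy.deepcopy(local_models[0])
    global_state = global_model.state_dict()
    for key in global_state:
        global_state[key] = torch.stack(
            [m.state_dict()[key].float() for m in local_models]
        ).mean(dim=0)
    global_model.load_state_dict(global_state)
    return global_model

# Three hypothetical institutions holding locally trained copies of a model.
local_models = [nn.Linear(10, 2) for _ in range(3)]
global_model = federated_average(local_models)
```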
8.2. Self-supervised and Weakly Supervised Learning
Given the
scarcity of large annotated datasets, self-supervised and weakly supervised
approaches offer promising alternatives. These methods leverage unlabeled or
partially labeled data to learn meaningful representations, potentially
reducing annotation burden.
8.3. Explainable AI and Clinical Decision Support
Development of
inherently interpretable deep learning architectures or post-hoc explanation
methods tailored to knee imaging could enhance clinical trust and adoption.
Integration of these systems into clinical workflows as decision support tools
rather than autonomous diagnostic systems may offer a more practical near-term
approach.
8.4. Integration of Clinical and Imaging Data
Incorporating
clinical information (symptoms, physical examination findings, patient history)
alongside imaging features could improve classification performance and
clinical relevance. Several recent studies have demonstrated the benefits of
this multimodal approach.
8.5. Longitudinal Analysis and Prediction
Shifting focus
from detection to prediction of disease progression could enhance clinical
utility. Leveraging temporal information from longitudinal studies to predict
outcomes or treatment response represents a promising but underdeveloped area.
9. Conclusion
Deep learning
approaches have demonstrated considerable promise for automated knee
abnormality classification across various imaging modalities. The field has
progressed rapidly, with increasing sophistication in architectural design,
integration of clinical knowledge, and application to diverse abnormalities.
MRI-based approaches currently show the highest performance, particularly for
meniscal and ligament abnormalities, while X-ray-based methods offer practical
advantages for osteoarthritis assessment.
Despite this
progress, significant challenges remain, including dataset limitations,
generalizability concerns, interpretability issues, and the gap between
research performance and clinical implementation. Addressing these challenges
will require multidisciplinary collaboration among computer scientists,
radiologists, orthopedic specialists, and healthcare systems.
Future research directions, including federated learning, self-supervised approaches, explainable AI, multimodal integration, and longitudinal analysis, offer promising pathways toward more clinically impactful systems. As these technologies mature and overcome current limitations, they could improve workflow effectiveness, increase diagnostic precision, and eventually lead to better patient outcomes in the treatment of knee disorders.
CONFLICT OF INTERESTS
None.
ACKNOWLEDGMENTS
None.
REFERENCES
Antony, J., McGuinness, K., Connor, N.E., & Moran, K. (2020). Quantifying Radiographic Knee Osteoarthritis Severity using Deep Convolutional Neural Networks. IEEE Conference on Computer Vision and Pattern Recognition Workshops, 45-53. https://doi.org/10.1109/ICPR.2016.7899799
Bien, N., Rajpurkar, P., Ball, R.L., & Langlotz, C.P. (2018). Deep-Learning-Assisted Diagnosis for Knee Magnetic Resonance Imaging: Development and Retrospective Validation of MRNet. PLOS Medicine, 15(11), e1002699. https://doi.org/10.1371/journal.pmed.1002699
Chen, J., Wu, L., Zhang, Y., & Wang, X. (2022). Multi-View Ensemble Learning for Anterior Cruciate Ligament Tear Detection from Knee MRI. Medical Image Analysis, 78, 102382.
Fritz, B., Marbach, G., Civardi, F., & Sutter, R. (2022). Deep Learning Detection and Classification of Meniscal Tears in MR Imaging of the Knee with Minimal User Input. Radiology: Artificial Intelligence, 4(3), e210215.
García-Castro, F., Salamanca, L., Tornero, E., & Montseny, E. (2024). Joint Segmentation and Classification of Knee Structures Through Multi-Task Learning. IEEE Transactions on Medical Imaging, 43(1), 45-57.
Kim, S.H., Jung, H.Y., Park, J.W., & Lee, S.C. (2021). Deep Learning-Based Classification of Meniscal Tears in Ultrasound Imaging. Ultrasound in Medicine & Biology, 47(3), 382-391.
Kumar, D., Shaikh, A., Zhang, W., & Brown, M.S. (2023). 3D DenseNet with Spatial Attention for Anterior Cruciate Ligament Tear Classification. Computerized Medical Imaging and Graphics, 108, 102174.
Lee, J., Kim, H., Choi, Y., & Park, S. (2023). Dual-Stream Deep Network for Multimodal Knee Osteoarthritis Assessment Using MRI and Radiography. Journal of Biomedical Informatics, 140, 104328.
Leung, K.K., Zhang, J., Tan, W.L., & Soh, J.Y. (2022). Weakly Supervised Learning for Radiographic Osteoarthritis Progression Prediction. Medical Image Analysis, 81, 102541.
Liu, B., Luo, J., Huang, H., & Zhang, L. (2023). DenseNet with Bidirectional ConvLSTM for Dynamic Knee MRI Interpretation. Computer Methods and Programs in Biomedicine, 227, 107241.
Liu, F., Zhou, Z., Samsonov, A., & Kijowski, R. (2021). Deep Learning Approach for Evaluating Knee MR Images: Achieving High Diagnostic Performance for Cartilage Lesion Detection. Radiology, 298(3), 620-629.
Park, C.H., Lee, K.J., Cho, S.H., & Kim, T.Y. (2024). Multimodal Fusion of Imaging and Clinical Features for Enhanced Knee Meniscal Tear Classification. Journal of Digital Imaging, 37(1), 189-201.
Pedoia, V., Norman, B., Mehany, S.N., & Majumdar, S. (2019). 3D Convolutional Neural Networks for Detection and Severity Staging of Meniscus and PFJ Cartilage Morphological Degenerative Changes in Osteoarthritis and Anterior Cruciate Ligament Subjects. Journal of Magnetic Resonance Imaging, 49(2), 400-410. https://doi.org/10.1002/jmri.26246
Raza, M., Siddiqui, S.A., Zafar, H., & Ahmad, A. (2022). Transfer Learning-Based Approach for Knee Ligament Tear Classification from Ultrasound Images. IEEE Journal of Biomedical and Health Informatics, 26(6), 2708-2719.
Roblot, V., Giret, Y., Bou Antoun, M., & Cotten, A. (2020). Artificial Intelligence to Diagnose Meniscus Tears on MRI. Diagnostic and Interventional Imaging, 101(3), 111-118. https://doi.org/10.1016/j.diii.2019.02.007
Tiulpin, A., Thevenot, J., Rahtu, E., & Saarakkala, S. (2019). A Novel Method for Automatic Localization and Detection of Traumatic Knee Injuries from MR Images using the Siamese Neural Network. International Journal of Computer Assisted Radiology and Surgery, 14(7), 1073-1082.
Tsai, C.H., Kiryati, N., Konen, E., & Eshed, I. (2022). Bounding Box Approach with Deep Learning for Automatic Meniscus Detection and Classification from Knee MRI Examinations. European Radiology, 32(3), 1620-1628.
Wang, L., Lin, Z.Q., Wong, A., & Chung, A.G. (2023). Hierarchical Deep Learning Framework for Multi-class Knee Abnormality Classification from MRI. Scientific Reports, 13(1), 1-15.
Zhang, K., Liu, X., Fang, Z., & Wu, G. (2020). 3D CNN with Attention Mechanism for Meniscal Tear Detection on Knee MRI. IEEE Transactions on Medical Imaging, 39(8), 2594-2604.
Zhang, Y., Gorriz, M., Burgos, N., & Lekadir, K. (2023). Transformer-Based Approach for Early Osteoarthritis Feature Detection from Knee Radiographs. Computers in Biology and Medicine, 156, 106629.