Abstract
Cancer is one of the most formidable diseases in the world today, with the number of new patients and deaths increasing annually. Lung cancer and breast cancer are widely acknowledged as two of the most prevalent forms of cancer, both falling under the carcinoma subtype. Early detection of these cancers is particularly crucial for patient outcomes. Because cancer has been a longstanding health challenge, comprehensive datasets containing essential information for cancer prediction and diagnosis are now available. Radiologists primarily rely on Computed Tomography (CT) scans for cancer diagnosis. However, the escalating demand for CT scans and their interpretation has led radiologists to prioritize throughput over the quality of readings. To address this challenge, Computer-Aided Diagnosis (CAD) systems have been introduced into medical practice. This research paper evaluates the accuracy of an implemented U-Net algorithm for segmentation and classification on two prominent medical datasets: the Curated Breast Imaging Subset of DDSM (CBIS-DDSM) and the Lung Image Database Consortium and Image Database Resource Initiative (LIDC-IDRI). The primary objective of the study is to assess the algorithm’s performance in accurately identifying and classifying Regions of Interest (ROIs) in breast and lung medical images.
Keywords
- U-Net
- segmentation
- classification
- breast images
- lung images
- CBIS-DDSM
- LIDC-IDRI
- medical dataset
- deep learning
- algorithm evaluation
1. Introduction
Early detection is key when it comes to survival in cases of lung, breast, or any other type of cancer. Cancer has been around for a long time, and unfortunately, the number of cases has only grown. Fifty years ago, diagnosing and predicting cancer relied heavily on a single medical expert running tests like mammograms, ultrasounds, and MRIs. As explained in Ref. [1], an ultrasound works by sending sound waves through the skin to check what is happening inside the body. A device called a transducer is placed on the skin, and it sends out sound waves that bounce off tissues, creating echoes. These echoes are then converted into grayscale images. The same paper [1] also discusses PET scans, which are often used to spot early signs of cancer, heart disease, or brain disorders by injecting radioactive tracers that help highlight diseased cells. For example, injecting 18F-fluorodeoxyglucose (FDG) allows doctors to see exactly where a tumor is located. Another tool, dynamic MRI, is used to examine blood vessels and is often linked to detecting metastasis, while mammography uses X-rays to get a detailed look at breast tissue.
As the years passed, diagnosing cancer became more complex. Doctors now have to sift through a wide range of tests and variables to figure out if cancer is benign or malignant, all while trying to avoid personal bias. Since the human mind is not perfect and can lack objectivity, the need for automation became clear. For instance, in mammography, doctors often have to analyze a large volume of images, which can lead to errors.
In recent years, a lot of research has focused on how machine learning can assist in cancer diagnosis and prediction. The reason this has gained attention is that machine learning, a branch of artificial intelligence, uses algorithms to sift through massive datasets, “learn” from the data, and produce results. We will explore different types of machine learning later, but overall, it is clear that machine learning is a major step forward in diagnostic medicine. In cancer prognosis, where there is often so much data to analyze, it can help reduce human error and speed up diagnoses. The aim is not to replace doctors but to support them, making the diagnostic process quicker and more accurate.
In this study, our primary objective is to explore the performance and accuracy of a U-Net model designed specifically for car image segmentation when it is applied to lung and breast cancer datasets. The algorithm used in this paper was built on the U-Net architecture for the Carvana dataset, where it performs car/vehicle segmentation on images. We want to see whether it is possible to reuse a solution built for one problem on a different one, lung cancer segmentation, and how this reused solution performs in comparison to algorithms developed specifically for lung cancer segmentation and classification, namely ProCAN, DeepLung, Local-Global, and NASLung.
Computer-Aided Diagnosis (CAD) systems have emerged as powerful tools in the field of medical imaging, aiding healthcare professionals in the detection and diagnosis of various diseases, including cancer. These systems leverage advanced algorithms and machine learning techniques to analyze medical images, extract relevant features, and provide insights that aid in accurate and efficient diagnosis.
2. Related work
To provide a more comprehensive evaluation, it is crucial to compare the U-Net architecture with other recent state-of-the-art deep learning models that have shown potential in medical imaging. Architectures like ResNet, DenseNet, and Transformer-based models have achieved notable results in tasks such as image segmentation and classification. These models offer enhanced feature extraction capabilities and deeper representations of image data, which can improve accuracy and precision in medical applications. While U-Net remains a powerful tool, especially for segmentation tasks, exploring these alternative architectures would help highlight its strengths and limitations. This comparison can further illustrate why U-Net was chosen for this study, especially given its simplicity and effectiveness in cases requiring precise localization of features, such as breast and lung imagery.
In previous studies on lung nodule classification using the algorithms considered in this paper, the authors address the challenging task of classifying diverse lung nodules as benign or malignant based on their shapes and sizes. The proposed methodology incorporates Residual Blocks for extracting local features, known for their efficacy in CNNs designed for nodule classification. Additionally, the authors employ Non-Local neural networks, incorporating Self-Attention layers, to capture global features by analyzing the density and structure of the nodules. The Non-Local Blocks utilize matrix multiplication to create non-linear interactions between spatial features, and the Softmax function generates attention masks for selective feature retention. The studies compare the proposed Local-Global Network with various baseline methods, including DenseNet121, ResNet50, ResNet18, BasicResnet, AllAtn, and AllAtnBig [2]. These baselines involve pretrained architectures and variations, allowing for a comprehensive analysis. The results demonstrate the effectiveness of the Local-Global Network, leveraging both local and global features for improved lung nodule classification. The inclusion of Non-Local Blocks proves to be crucial, as evidenced by comparisons with baseline methods, showcasing the significance of incorporating global feature extraction in enhancing the accuracy of malignancy predictions [3].
In addition to the methodology presented in this study, several other approaches have been proposed to tackle the complexities of lung nodule classification. In paper [4], a 3D Convolutional Neural Network (3D-CNN) that incorporates a hybrid loss function combining focal loss and Dice coefficient is introduced to address the imbalance between benign and malignant nodules in training datasets. This approach focuses on improving classification performance in datasets with high variability in nodule sizes and shapes, highlighting the importance of tailored loss functions in enhancing model robustness.
In addition to technical performance, it is critical to address the ethical considerations surrounding the use of AI in healthcare. One major concern is patient data privacy. Ensuring that AI models are developed and deployed while maintaining the confidentiality of sensitive health data is paramount. Additionally, biases in training datasets are another ethical challenge. If certain patient demographics are underrepresented, the model may not perform equally well across diverse populations, potentially leading to biased diagnoses. The integration of ethical frameworks that safeguard patient privacy and ensure the fairness of AI models is necessary for their widespread acceptance and effectiveness in clinical practice.
In paper [5], a Multi-scale Dense Convolutional Neural Network (MD-CNN) is proposed, integrating multi-scale feature extraction with dense connections to improve the classification of pulmonary nodules. This method captures both fine-grained and coarse-grained features, leading to better handling of nodules with complex textures. The study demonstrates that integrating multi-scale features significantly enhances classification accuracy, especially for small and irregular nodules. In paper [6], a Two-Stream Network architecture is developed that combines a standard CNN with a Transformer-based model to capture both local and contextual information in lung nodules. This approach leverages the strengths of CNNs in local feature extraction and Transformers in capturing long-range dependencies, leading to improved classification outcomes. This study underscores the potential of hybrid architectures in achieving state-of-the-art performance in medical image classification tasks. In paper [7], a deep learning approach specifically designed to address the variation in nodule shapes and textures is introduced. The model, based on a multi-crop pooling strategy, enhances the sensitivity of the network to nodule variations by allowing multiple views of the nodules during training. This technique improved the network’s ability to differentiate between benign and malignant nodules by focusing on the most informative regions within the images.
In paper [8], the effectiveness of multi-view CNNs for lung nodule classification is explored. By analyzing nodules from nine different views, this approach significantly improved the detection and classification of nodules with irregular shapes or those partially obscured by other lung structures. The multi-view strategy provided a more comprehensive analysis, allowing for better classification accuracy, particularly in complex cases where single-view analysis might fail. In paper [9], an end-to-end nodule detection and classification pipeline that utilizes a multi-task learning framework is proposed. By jointly learning the tasks of nodule detection and malignancy classification, the model achieved higher overall accuracy and robustness. The multi-task approach ensured that the features learned during nodule detection were directly applicable to classification, leading to a more cohesive and efficient model.
3. Research hypothesis
The central hypothesis of this research is that the algorithm initially created for the precise segmentation of vehicles in the Carvana dataset on the U-Net architecture holds the potential to be a transformative force in lung cancer detection. Our focal point is the intricate domain of CT scan images, where signs of lung cancer often linger undetected, eluding early discovery. By harnessing the power of innovative image processing techniques and leveraging statistical metrics such as the F1 score, AUC, IoU, and Dice score, we anticipate identifying algorithms that exhibit superior performance in detecting lung nodules and segmenting them accurately. Furthermore, by developing a user-friendly application that enables individuals to access and interpret their CT scan results conveniently, we aim to bridge the gap between sophisticated algorithms and practical healthcare applications, improving patient outcomes and reducing the burden of lung cancer.
3.1 Possible results
H0: The accuracy of this U-Net implementation is greater than 80%, that is, extremely high precision in nodule detection. If our research aligns with this scenario, our initial hypothesis is confirmed: the U-Net algorithm demonstrates exceptional precision in detecting nodules, marking a paradigm shift in medical imaging.
H1: Predictive capabilities with cautions—Algorithm usage with advisory or limited nodule detection: In this outcome, our research suggests that the algorithm possesses predictive capabilities, albeit with some nuances. While it may not achieve the unparalleled precision anticipated in H0, it could still serve as a valuable tool for preliminary nodule identification. This outcome underscores the importance of nuanced interpretation and collaboration between artificial intelligence tools and healthcare professionals.
4. Datasets
4.1 LIDC/IDRI
The Lung Image Database Consortium and Image Database Resource Initiative (LIDC/IDRI) dataset stands as the most popular dataset in lung cancer detection and research. It combines a diverse collection of Computed Tomography (CT) scans with expert annotations, and it has been instrumental in advancing the development of machine learning algorithms and Computer-Aided Diagnosis systems for the early detection of lung cancer. LIDC-IDRI contains 1018 low-dose lung CT scans from 1010 patients. These high-resolution scans capture the intricate details of lung structures and are drawn from a wide variety of patients, covering cases from benign to malignant lung nodules, which makes the dataset a robust resource for algorithm training and evaluation.

An important feature of the LIDC/IDRI dataset is the annotation process it undergoes. Expert radiologists review each CT scan, marking and annotating lung nodules and other abnormalities. These annotations often involve a consensus among multiple radiologists, ensuring the reliability of the ground truth data. This consensus-based approach contributes to the dataset’s credibility, making it an invaluable tool for algorithm development.

Despite its strengths, the LIDC/IDRI dataset presents various challenges. Lung nodules come in diverse shapes, sizes, and densities, reflecting the complexity of real-world clinical cases. This diversity is a double-edged sword: it provides many opportunities for algorithm development, but it also demands sophisticated solutions to manage the variability. Researchers must address these challenges to create algorithms that generalize effectively across different nodule characteristics.

One of the most significant advantages of the LIDC/IDRI dataset is its public availability. This accessibility has encouraged a wide range of research studies, resulting in numerous publications and innovations in the field of lung cancer detection. Researchers worldwide have leveraged this dataset to develop and benchmark machine learning models, contributing to advancements in early diagnosis and the overall management of lung cancer. Figure 1 shows an example of nodule segmentation on the LIDC/IDRI dataset.

Figure 1.
Segmentation of the LIDC/IDRI dataset [10].
4.2 CBIS/DDSM
The Curated Breast Imaging Subset of the Digital Database for Screening Mammography (CBIS-DDSM) dataset is one of the most widely utilized datasets in breast cancer detection and research. This dataset comprises a comprehensive collection of mammographic images, playing a crucial role in advancing the development of machine learning algorithms and Computer-Aided Diagnosis systems aimed at early breast cancer detection. With its extensive collection of annotated mammograms, the CBIS-DDSM dataset serves as a valuable resource for researchers and medical professionals working to enhance the accuracy and effectiveness of breast cancer diagnosis.
CBIS-DDSM contains a large number of mammogram images that are carefully curated to support the training and evaluation of machine learning models. This dataset is a subset of the original DDSM dataset, which was developed to provide researchers with a high-quality resource for mammographic image analysis. The images in CBIS-DDSM are captured using various mammography systems, ensuring diversity in image quality and presentation. This diversity is crucial for developing robust algorithms capable of generalizing across different imaging conditions and patient demographics.
One of the key features of the CBIS-DDSM dataset is the detailed annotation provided by expert radiologists. Each mammogram in the dataset is meticulously labeled, with annotations that include the location, size, and type of lesions, such as masses and calcifications. These annotations often come with corresponding BI-RADS (Breast Imaging-Reporting and Data System) scores, which provide standardized information on the likelihood of malignancy. The careful annotation process, often involving multiple expert reviews, ensures the reliability and accuracy of the data, making CBIS-DDSM an invaluable tool for developing and testing breast cancer detection algorithms.
The CBIS-DDSM dataset also presents various challenges and complexities, reflecting the real-world scenarios encountered in clinical practice. The mammograms in the dataset include a wide range of breast tissue densities, from fatty to extremely dense, which can affect the visibility of lesions. Additionally, the dataset includes both benign and malignant findings, adding to the complexity of developing algorithms that can accurately differentiate between the two. This variability in tissue density and lesion characteristics requires sophisticated image processing and machine learning techniques to ensure accurate detection and classification.
Despite these challenges, the CBIS-DDSM dataset is a powerful resource for the research community, particularly because of its public availability. The accessibility of this dataset has enabled a multitude of research studies, leading to significant advancements in the field of breast cancer detection. Researchers worldwide have utilized CBIS-DDSM to develop and benchmark machine learning models, contributing to improvements in early diagnosis and the overall management of breast cancer. The dataset’s widespread use in the scientific community has resulted in numerous publications and innovations, further solidifying its role as a cornerstone in breast cancer research.
5. Data preprocessing
In this study, a comprehensive preprocessing pipeline was set up to prepare the datasets for lung and breast cancer detection. This process involved several key steps to ensure the data was consistent and ready for training machine learning models, especially for tasks like segmentation and classification. The preprocessing for lung cancer data started with isolating the lung regions from the CT scans in the LIDC-IDRI dataset. The original DICOM images were converted into NumPy arrays, making them easier to work with. Using the metadata associated with each scan, we created binary masks to segment the lung nodules from the lung regions, which was crucial for focusing on the areas where cancer might be present. Alongside the segmentation masks, we also extracted diagnostic information—whether each nodule was malignant (cancerous) or benign (non-cancerous). This information was used to create binary labels for classification, with “true” indicating cancerous nodules and “false” for non-cancerous ones. A new metadata file was then generated to store this information, and a fresh dataset was created that included the processed images, segmentation masks, and classification labels. To keep things consistent with the CBIS-DDSM dataset (which does not include cases without masses), we excluded any “clean” cases from our lung cancer dataset—meaning patients who had no lung nodules were left out. By focusing only on cases with nodules, we ensured that our dataset was suited for binary classification, which was necessary for the models to work properly.
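To make this step concrete, the following is a minimal sketch of how a DICOM slice can be converted to a NumPy array, how an annotated nodule contour can be rasterized into a binary mask, and how a binary malignancy label can be derived. The helper names, the contour format, and the malignancy threshold are illustrative assumptions, not the exact code used in this study:

```python
import numpy as np
import pydicom
from skimage.draw import polygon


def dicom_to_array(dicom_path):
    """Load a DICOM slice and return its pixel data as a NumPy array."""
    scan = pydicom.dcmread(dicom_path)
    return scan.pixel_array.astype(np.float32)


def nodule_mask_from_contour(shape, contour_points):
    """Rasterize an annotated nodule contour into a binary mask.

    `contour_points` is assumed to be a list of (row, col) pixel
    coordinates taken from the scan's annotation metadata.
    """
    mask = np.zeros(shape, dtype=np.uint8)
    rows = [p[0] for p in contour_points]
    cols = [p[1] for p in contour_points]
    rr, cc = polygon(rows, cols, shape=shape)
    mask[rr, cc] = 1
    return mask


def malignancy_label(median_rating):
    """Map the radiologists' 1-5 malignancy rating to a binary label.

    Treating ratings above 3 as malignant is a common convention,
    assumed here for illustration.
    """
    return median_rating > 3
```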
For the CBIS-DDSM dataset, the preprocessing required similar steps. We started by creating a CSV file that mapped the file paths of the images and their corresponding masks. The dataset contained a mix of mammogram images, pre-generated binary masks, and DICOM images that only showed nodules. We carefully identified and categorized these files to make sure that the images and masks were paired correctly. Once everything was organized, we created a new dataset that matched the structure of our lung cancer dataset. During this process, we segmented the breast regions from the mammograms to remove any background noise. The mammograms were labeled with MLO and CC, which refer to different views or angles, and L and R, indicating the left or right breast. We kept these labels intact to preserve important information. Like with the lung cancer dataset, the images were converted into NumPy arrays for easier processing. We only included cases that had masses, leaving out those with calcifications, to keep things consistent with our lung cancer data. After creating both datasets, we normalized the images to ensure that the pixel values (which represent different colors or grayscale levels) were within the same range across all images. This step was important to improve the accuracy of the U-Net algorithm used for segmentation, as consistent input data leads to better model performance.
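The normalization step could look like the following minimal sketch, assuming min-max scaling into [0, 1] (the exact scheme is not stated in this chapter):

```python
import numpy as np


def normalize_image(img):
    """Scale pixel intensities into [0, 1] so that both datasets
    share the same value range."""
    img = img.astype(np.float32)
    lo, hi = img.min(), img.max()
    if hi > lo:
        return (img - lo) / (hi - lo)
    return np.zeros_like(img)  # guard against constant (blank) images
```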
We also checked the CBIS-DDSM images for size consistency. Unlike the lung cancer images, not all of the breast cancer images were the same size. To fix this, we resized all images to a uniform dimension. When resizing, we added a black background to keep the original shapes of the lungs, breasts, and nodules intact. Images that were smaller than the target size were scaled up, with the original image centered and the surrounding area filled with black, so we could maintain the accuracy of nodule segmentation. To segment the lung and breast regions, we used simple morphological operations to generate Region of Interest (ROI) masks. These masks focused the analysis on the key areas where nodules might be found. By zeroing in on these specific regions, we helped the segmentation algorithms perform more accurately, making it easier for the models to identify and classify the nodules.
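A rough sketch of the pad-and-center resizing and the morphological ROI extraction is given below; the target size, intensity threshold, and structuring-element sizes are assumptions for illustration:

```python
import numpy as np
import cv2
from scipy import ndimage

TARGET = 512  # assumed output size; the chapter does not state the exact value


def resize_with_black_padding(img, target=TARGET):
    """Scale the image to fit inside a target x target square while
    preserving its aspect ratio, then center it on a black canvas."""
    h, w = img.shape[:2]
    scale = min(target / h, target / w)
    new_h, new_w = int(round(h * scale)), int(round(w * scale))
    resized = cv2.resize(img, (new_w, new_h), interpolation=cv2.INTER_AREA)
    canvas = np.zeros((target, target), dtype=resized.dtype)
    top, left = (target - new_h) // 2, (target - new_w) // 2
    canvas[top:top + new_h, left:left + new_w] = resized
    return canvas


def roi_mask(img, threshold=0.1):
    """Extract a rough Region of Interest via thresholding followed by
    simple morphological operations (closing, opening, hole filling)."""
    binary = img > threshold
    binary = ndimage.binary_closing(binary, structure=np.ones((5, 5)))
    binary = ndimage.binary_opening(binary, structure=np.ones((5, 5)))
    return ndimage.binary_fill_holes(binary).astype(np.uint8)
```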
6. U-Net architecture
The U-Net architecture is composed of two main parts: a contracting path on the left and an expansive path on the right. The contracting path follows the structure of a typical convolutional network, where two 3 × 3 convolutions (without padding) are applied repeatedly, each followed by a rectified linear unit (ReLU). After each pair of convolutions, a 2 × 2 max-pooling operation with a stride of two is used to downsample the image. With each downsampling step, the number of feature channels is doubled. On the expansive path, the process is reversed. Each step starts with upsampling the feature map, followed by a 2 × 2 convolution (also known as “up-convolution”) that reduces the number of feature channels by half. The upsampled feature map is then concatenated with the corresponding cropped feature map from the contracting path. This is followed by two 3 × 3 convolutions, each accompanied by a ReLU activation. Cropping is necessary at this stage to account for the loss of border pixels that occurs during each convolution. At the final layer, a 1 × 1 convolution is applied to convert the 64-component feature vectors into the desired number of output classes. Altogether, the network contains 23 convolutional layers. To ensure that the output segmentation map can be tiled correctly, the input tile size must be chosen so that every 2 × 2 max-pooling operation is performed on a layer with even x and y dimensions. The architecture of the U-Net used in this paper is shown in Figure 2.

Figure 2.
U-Net architecture.
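To make the description above concrete, the following is a minimal PyTorch sketch of the U-Net topology. For simplicity it uses padded 3 × 3 convolutions, a common departure from the original unpadded design that makes the cropping step unnecessary; it illustrates the architecture and is not necessarily the exact implementation used in this chapter:

```python
import torch
import torch.nn as nn


class DoubleConv(nn.Module):
    """Two 3x3 convolutions, each followed by ReLU. Padding keeps the
    feature-map size, so no cropping is needed at the skip connections."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.block(x)


class UNet(nn.Module):
    """23 convolutional layers in total: 18 in the double-conv blocks,
    4 up-convolutions, and the final 1x1 output convolution."""
    def __init__(self, in_ch=1, n_classes=1):
        super().__init__()
        # Contracting path: feature channels double at every downsampling step.
        self.enc1, self.enc2 = DoubleConv(in_ch, 64), DoubleConv(64, 128)
        self.enc3, self.enc4 = DoubleConv(128, 256), DoubleConv(256, 512)
        self.bottom = DoubleConv(512, 1024)
        self.pool = nn.MaxPool2d(2)
        # Expansive path: 2x2 up-convolutions halve the channel count.
        self.up4 = nn.ConvTranspose2d(1024, 512, 2, stride=2)
        self.dec4 = DoubleConv(1024, 512)
        self.up3 = nn.ConvTranspose2d(512, 256, 2, stride=2)
        self.dec3 = DoubleConv(512, 256)
        self.up2 = nn.ConvTranspose2d(256, 128, 2, stride=2)
        self.dec2 = DoubleConv(256, 128)
        self.up1 = nn.ConvTranspose2d(128, 64, 2, stride=2)
        self.dec1 = DoubleConv(128, 64)
        self.head = nn.Conv2d(64, n_classes, 1)  # final 1x1 convolution

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        e3 = self.enc3(self.pool(e2))
        e4 = self.enc4(self.pool(e3))
        b = self.bottom(self.pool(e4))
        # Skip connections: concatenate encoder maps with upsampled maps.
        d4 = self.dec4(torch.cat([self.up4(b), e4], dim=1))
        d3 = self.dec3(torch.cat([self.up3(d4), e3], dim=1))
        d2 = self.dec2(torch.cat([self.up2(d3), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.head(d1)  # logits; apply sigmoid for binary masks
```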
7. Implementation
In this phase, we combined and processed the datasets to prepare them for training and testing the model. The approach involved selecting random samples from both datasets and organizing the data for model input.
We began by loading both the LIDC-IDRI and CBIS-DDSM datasets. From each dataset, we randomly selected 500 patient IDs. The LIDC-IDRI dataset includes multiple images per patient, so for each selected patient, we randomly chose one image. In contrast, the CBIS-DDSM dataset has only one image per patient, so each patient’s single image was used. This selection process resulted in a total of 1000 random images—500 from each dataset. Each image was paired with its corresponding mask (for segmentation) and label (for classification, where labels are binary: 0 for benign and 1 for malignant). This method ensured equal representation of both lung and breast cancer cases in the combined dataset. After selecting the images, we organized them into three arrays: images, masks, and labels. Each array was aligned so that the corresponding mask and label for an image were at the same index in their respective arrays. This setup allowed us to maintain the relationship between each image, its mask, and its classification label. To avoid any bias and ensure that the data was properly randomized, all three arrays—images, masks, and labels—were shuffled using the same randomization process. This ensured that the consistency between images, masks, and labels was preserved even after shuffling. Once the data was randomized, we divided the arrays into training, validation, and test sets. The training set was used to train the model, while the validation set helped fine-tune hyperparameters and prevent overfitting. The test set was used to assess the final model’s performance.
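The aligned shuffling and splitting can be sketched as follows; the split ratios and random seed are assumptions, since the chapter does not state them:

```python
import numpy as np


def shuffle_and_split(images, masks, labels, seed=42,
                      train_frac=0.7, val_frac=0.15):
    """Shuffle three aligned arrays with a single permutation, then
    split them into training, validation, and test sets."""
    rng = np.random.default_rng(seed)
    order = rng.permutation(len(images))  # one permutation for all arrays
    images, masks, labels = images[order], masks[order], labels[order]

    n_train = int(len(images) * train_frac)
    cut = n_train + int(len(images) * val_frac)
    train = (images[:n_train], masks[:n_train], labels[:n_train])
    val = (images[n_train:cut], masks[n_train:cut], labels[n_train:cut])
    test = (images[cut:], masks[cut:], labels[cut:])
    return train, val, test
```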
7.1 Model training and evaluation
The combined, normalized datasets were then fed into a U-Net model, designed for image segmentation and classification. The U-Net model, known for its effectiveness in medical image segmentation, was particularly suitable for this task. It was configured with mechanisms to prevent overfitting, such as dropout layers, and checkpoints to save the best-performing model during training. During the training process, the model learned to predict binary masks for segmentation (indicating the presence of nodules or masses) and to classify the images as benign or malignant. After training, the model was evaluated on the test set, where it returned the predicted binary masks and classification results, demonstrating its ability to accurately segment and classify both lung and breast cancer cases.
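A minimal sketch of such a training loop with best-model checkpointing is shown below. It assumes a single segmentation output trained with binary cross-entropy; the exact loss functions, optimizer settings, and dropout placement used in this study are not specified here, so the sketch is illustrative only:

```python
import torch


def train(model, train_loader, val_loader, epochs=50, lr=1e-4,
          ckpt_path="best_unet.pt"):
    """Train a segmentation model, saving the weights that achieve the
    lowest validation loss (a simple checkpointing strategy)."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = torch.nn.BCEWithLogitsLoss()  # expects raw logits
    best_val = float("inf")
    for epoch in range(epochs):
        model.train()
        for imgs, masks in train_loader:
            opt.zero_grad()
            loss = loss_fn(model(imgs), masks)
            loss.backward()
            opt.step()
        # Validation pass: keep only the best-performing checkpoint.
        model.eval()
        val_loss = 0.0
        with torch.no_grad():
            for imgs, masks in val_loader:
                val_loss += loss_fn(model(imgs), masks).item()
        if val_loss < best_val:
            best_val = val_loss
            torch.save(model.state_dict(), ckpt_path)
```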
8. Results and discussion
The U-Net model was tested on a combined dataset of lung and breast cancer images, and the results were mixed, highlighting both the potential and the challenges of this approach.
8.1 Segmentation performance
The model’s ability to accurately segment cancerous regions was far from ideal. Here is a closer look at the metrics:
Segmentation accuracy: The model achieved a segmentation accuracy of 0.0, the lowest possible value. This indicates that the model failed to correctly identify and segment the cancerous areas within the images.
Mean IoU and Dice coefficient: The mean IoU and Dice scores, which measure how well the predicted segmentations overlap with the actual nodule regions, were also extremely low. These scores indicate that the model was not able to effectively distinguish cancerous nodules from other structures in the images; it appears the U-Net model was confused by the varying shapes and densities in the lung and breast images, leading to inaccurate segmentations. Both metrics are defined in the sketch below.
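For reference, both overlap metrics can be computed from binary masks as in the following generic sketch (not the exact evaluation code used in this study):

```python
import numpy as np


def dice_and_iou(pred_mask, true_mask, eps=1e-7):
    """Dice = 2*|A & B| / (|A| + |B|); IoU = |A & B| / |A or B|.
    `eps` avoids division by zero when both masks are empty."""
    pred = pred_mask.astype(bool)
    true = true_mask.astype(bool)
    inter = np.logical_and(pred, true).sum()
    dice = (2.0 * inter + eps) / (pred.sum() + true.sum() + eps)
    iou = (inter + eps) / (np.logical_or(pred, true).sum() + eps)
    return dice, iou
```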
8.2 Classification performance
On the classification side, the model performed somewhat better, though it still has room for improvement:
Classification accuracy: The model correctly classified about 68.5% of the cases. While this is better than random guessing, it is not reliable enough for clinical use.
Confusion matrix: The confusion matrix shows that the model correctly identified 133 malignant cases, but it also mistakenly labeled 67 benign cases as malignant. This high number of false positives means that the model might cause unnecessary concern by flagging non-cancerous cases as cancer.
ROC-AUC score: The ROC-AUC score of 0.685 indicates that the model has moderate ability to differentiate between benign and malignant cases. It is decent, but not quite at the level needed for confident predictions in a clinical setting.
F1 score: The F1 score, which balances precision and recall, was 0.7987. While this suggests the model is reasonably balanced, the high false positive rate still points to significant issues, particularly in distinguishing between benign and malignant cases.
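These classification metrics can be reproduced from predicted malignancy probabilities with standard scikit-learn calls, as in this generic sketch (the 0.5 decision threshold is an assumption):

```python
import numpy as np
from sklearn.metrics import (accuracy_score, confusion_matrix,
                             f1_score, roc_auc_score)


def classification_report_dict(y_true, y_prob, threshold=0.5):
    """Compute accuracy, confusion matrix, ROC-AUC, and F1 from
    ground-truth labels and predicted probabilities."""
    y_pred = (np.asarray(y_prob) >= threshold).astype(int)
    return {
        "accuracy": accuracy_score(y_true, y_pred),
        "confusion_matrix": confusion_matrix(y_true, y_pred),
        "roc_auc": roc_auc_score(y_true, y_prob),  # uses probabilities
        "f1": f1_score(y_true, y_pred),
    }
```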
The results are displayed in Figure 3.

Figure 3.
Model implementation results.
These results highlight the difficulties in using a single U-Net model to handle both lung and breast cancer images. The model’s poor segmentation performance suggests that it has trouble generalizing across the very different image types—lung CT scans and breast mammograms—each with its own set of complexities. The model seems to misinterpret various structures as nodules or masses, leading to inaccurate segmentations.
On the classification front, while the model does better, it still struggles with a high rate of false positives. This suggests that the model, trained on a mixed dataset, might not be finely tuned to the specific features that differentiate benign from malignant cases in lung and breast images.
Overall, these findings suggest that expecting a single model to excel in segmenting and classifying both lung and breast cancer is overly ambitious. The model’s confusion between different types of nodules and masses could be addressed by developing specialized models for each type of cancer or by using more advanced techniques that can better handle the diversity in medical images.
Although U-Net has demonstrated great success in various imaging tasks, its performance can vary significantly depending on the imaging modality. For example, when applied to CT scans, U-Net may struggle to capture subtle differences in tissue density compared to mammography, where structural clarity is often higher. These limitations highlight the challenges of generalizing U-Net across different modalities, and its effectiveness can be reduced when tasked with differentiating between more complex anatomical structures. Understanding these limitations is essential for identifying areas where additional modifications or alternative architectures may be required to improve performance across a broader range of medical images.
If these challenges can be addressed, there is potential to integrate these models into web-based applications for early cancer detection, providing a useful tool for healthcare professionals and improving early diagnosis rates. However, as the current results show, there is still significant work to be done before such a system can be considered reliable and effective. To further validate the findings, it would be beneficial to apply cross-validation using external datasets. This ensures that the U-Net model generalizes well beyond the CBIS-DDSM and LIDC-IDRI datasets, increasing confidence in its robustness across different imaging sources. Moreover, expanding the evaluation to include more diverse patient demographics would provide insight into the model’s ability to perform consistently across various population groups. Including such diverse datasets can also help address potential biases, ensuring that the model is applicable to real-world scenarios involving patients from various ethnicities, ages, and medical histories.
9. Conclusion
This study explored the use of a U-Net model for segmenting and classifying both lung and breast cancer from a combined dataset. The results show that while the model has some ability to classify cancerous cases, its segmentation performance was poor, particularly when trying to handle both types of cancer at once.
These results suggest that more targeted approaches are needed. Future research could focus on developing separate models for lung and breast cancer or incorporating advanced methods like multi-task learning to improve accuracy. Enhancing the preprocessing steps, using better data augmentation, or incorporating additional clinical data might also help.
References
- 1. Islam MM, Haque MR, Iqbal H, Hasan MM, Hasan M, Kabir MN. Breast Cancer Prediction: A Comparative Study Using Machine Learning Techniques. Singapore: Springer Nature; 2020
- 2. Jacobs C, Murphy K, Prokop M, van Ginneken B. Computer-aided detection of pulmonary nodules: A comparative study using the public LIDC/IDRI database. European Radiology. Heidelberg, Germany: Springer; 2016
- 3. Al-Shabi M, Boon LL. Lung Nodule Classification Using Deep Local-Global Networks. Heidelberg, Germany: Springer; Sep 2020
- 4. Hanliang J, Fuhao S, Fei G. Learning Efficient, Explainable, and Discriminative Representations for Pulmonary Nodule Classification. arXiv, Cornell University; Jan 2021
- 5. Al-Shabi M, Hwee KL. Gated-Dilated Networks for Lung Nodule Classification. arXiv, Cornell University; 2019
- 6. Ray S. A quick review of machine learning algorithms. In: International Conference on Machine Learning, Big Data, Cloud and Parallel Computing. Sep 2019
- 7. Al-Shabi M, Shak K, Tan M. ProCAN: Progressive Growing Channel Attentive Non-Local Network for Lung Nodule Classification. arXiv, Cornell University; 2020
- 8. Al-Shabi M, Shak K. ProCAN: Progressive Growing Channel Attentive Non-Local Network for Lung Nodule Classification. Elsevier; 2021
- 9. Wang S, Zhou M. A Multi-View Deep Convolutional Neural Network for Lung Nodule Segmentation. PubMed. 2022
- 10. Nasrullah N, Sang J. Automated lung nodule detection and classification using deep learning combined with multiple strategies. Sensors. 2020