Deep learning; Dermatology; Differentiable rendering; Lesion detection; Lesion segmentation; Skin image analysis; Synthesis; 2D images; Dermatological images; Image analysis; Lighting conditions; Skin images; Humans; Imaging, Three-Dimensional/methods; Image Interpretation, Computer-Assisted/methods; Skin Diseases/diagnostic imaging; Radiological and Ultrasound Technology; Radiology, Nuclear Medicine and Imaging; Computer Vision and Pattern Recognition; Health Informatics; Computer Graphics and Computer-Aided Design
Abstract :
[en] In recent years, deep learning (DL) has shown great potential in the field of dermatological image analysis. However, existing datasets in this domain have significant limitations, including a small number of image samples, limited disease conditions, insufficient annotations, and non-standardized image acquisitions. To address these shortcomings, we propose a novel framework called DermSynth3D. DermSynth3D blends skin disease patterns onto 3D textured meshes of human subjects using a differentiable renderer and generates 2D images from various camera viewpoints under chosen lighting conditions in diverse background scenes. Our method adheres to top-down rules that constrain the blending and rendering process to create 2D images with skin conditions that mimic in-the-wild acquisitions, ensuring more meaningful results. The framework generates photo-realistic 2D dermatological images and the corresponding dense annotations for semantic segmentation of the skin, skin conditions, body parts, bounding boxes around lesions, depth maps, and other 3D scene parameters, such as camera position and lighting conditions. DermSynth3D allows for the creation of custom datasets for various dermatology tasks. We demonstrate the effectiveness of data generated using DermSynth3D by training DL models on synthetic data and evaluating them on various dermatology tasks using real 2D dermatological images. We make our code publicly available at https://github.com/sfu-mial/DermSynth3D.
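The core idea described above, blending a skin disease pattern into the UV texture map of a 3D human mesh before rendering 2D views, can be illustrated with a minimal NumPy sketch. This is a simplification for intuition only: the paper performs the blending through a differentiable renderer with constraints on lesion placement, whereas the function below (a hypothetical `blend_lesion` helper, with a hard alpha composite and an explicit paste location) just shows the compositing arithmetic.

```python
import numpy as np

def blend_lesion(texture, lesion, mask, top_left):
    """Alpha-blend a lesion patch into a skin texture image.

    texture:  (H, W, 3) float array in [0, 1], the mesh's UV texture map
    lesion:   (h, w, 3) float array in [0, 1], the lesion pattern to paste
    mask:     (h, w) float array in [0, 1], soft alpha weights for the lesion
    top_left: (row, col) paste location within the texture
    """
    out = texture.copy()
    r, c = top_left
    h, w = mask.shape
    region = out[r:r + h, c:c + w]
    alpha = mask[..., None]  # broadcast alpha over the RGB channels
    # Convex combination of lesion and underlying skin, weighted by the mask.
    out[r:r + h, c:c + w] = alpha * lesion + (1.0 - alpha) * region
    return out

# Toy example: paste a white 4x4 lesion into a mid-gray 8x8 texture.
texture = np.full((8, 8, 3), 0.5)
lesion = np.ones((4, 4, 3))
mask = np.ones((4, 4))
blended = blend_lesion(texture, lesion, mask, (2, 2))
```

Rendering the blended texture back to 2D images from chosen camera poses and lighting is handled in the paper by a differentiable renderer (PyTorch3D), which also produces the dense annotations (segmentation masks, depth maps) as by-products of rasterization.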
Disciplines :
Computer science
Author, co-author :
Sinha, Ashish ; Medical Image Analysis Lab, School of Computing Science, Simon Fraser University, Burnaby V5A 1S6, Canada
Kawahara, Jeremy ; Medical Image Analysis Lab, School of Computing Science, Simon Fraser University, Burnaby V5A 1S6, Canada
Pakzad, Arezou ; Medical Image Analysis Lab, School of Computing Science, Simon Fraser University, Burnaby V5A 1S6, Canada
Abhishek, Kumar ; Medical Image Analysis Lab, School of Computing Science, Simon Fraser University, Burnaby V5A 1S6, Canada
RUTHVEN, Matthieu ; University of Luxembourg > Interdisciplinary Centre for Security, Reliability and Trust (SnT) > CVI2 > Team Djamila AOUADA
GHORBEL, Enjie ; University of Luxembourg > Interdisciplinary Centre for Security, Reliability and Trust (SnT) > CVI2 > Team Djamila AOUADA
KACEM, Anis ; University of Luxembourg > Interdisciplinary Centre for Security, Reliability and Trust (SnT) > CVI2
AOUADA, Djamila ; University of Luxembourg > Interdisciplinary Centre for Security, Reliability and Trust (SnT) > CVI2
Hamarneh, Ghassan ; Medical Image Analysis Lab, School of Computing Science, Simon Fraser University, Burnaby V5A 1S6, Canada. Electronic address: hamarneh@sfu.ca
External co-authors :
yes
Language :
English
Title :
DermSynth3D: Synthesis of in-the-wild annotated dermatology images.
Funders :
Natural Sciences and Engineering Research Council of Canada; Fonds National de la Recherche Luxembourg; BC Cancer Foundation; NVIDIA; Alliance de recherche numérique du Canada
Funding text :
This research was enabled in part by support provided by the Natural Sciences and Engineering Research Council of Canada (NSERC) Discovery Grant RGPIN-2020-06752, the BC Cancer Foundation - BrainCare BC Fund, the Luxembourg National Research Fund (FNR) project BRIDGES2021/IS/16353350/FaKeDeTeR, and the computational resources provided by WestGrid (Cedar), the Digital Research Alliance of Canada, and NVIDIA Corporation. The authors are also grateful to Megan Andrews and Colin Li for their assistance with the data annotation efforts, which included the manual segmentations of non-skin regions in the texture images.