<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Publishing DTD v1.3 20210610//EN" "JATS-journalpublishing1-3.dtd">
<article article-type="research-article" dtd-version="1.3" xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xml:lang="ru"><front><journal-meta><journal-id journal-id-type="publisher-id">tuzsut</journal-id><journal-title-group><journal-title xml:lang="ru">Труды учебных заведений связи</journal-title><trans-title-group xml:lang="en"><trans-title>Proceedings of Telecommunication Universities</trans-title></trans-title-group></journal-title-group><issn pub-type="ppub">1813-324X</issn><issn pub-type="epub">2712-8830</issn><publisher><publisher-name>СПбГУТ</publisher-name></publisher></journal-meta><article-meta><article-id pub-id-type="doi">10.31854/1813-324X-2025-11-2-7-19</article-id><article-id custom-type="edn" pub-id-type="custom">TKAPTM</article-id><article-id custom-type="elpub" pub-id-type="custom">tuzsut-665</article-id><article-categories><subj-group subj-group-type="heading"><subject>Research Article</subject></subj-group><subj-group subj-group-type="section-heading" xml:lang="ru"><subject>ЭЛЕКТРОНИКА, ФОТОНИКА, ПРИБОРОСТРОЕНИЕ И СВЯЗЬ</subject></subj-group><subj-group subj-group-type="section-heading" xml:lang="en"><subject>ELECTRONICS, PHOTONICS, INSTRUMENTATION AND COMMUNICATIONS</subject></subj-group></article-categories><title-group><article-title>Гибридный метод локального контрастирования изображений с нейросетевой регулировкой параметров</article-title><trans-title-group xml:lang="en"><trans-title>A Hybrid Approach to Local Contrast Enhancement Using Adaptive Neural Network Parameter Control</trans-title></trans-title-group></title-group><contrib-group><contrib contrib-type="author" corresp="yes"><contrib-id contrib-id-type="orcid">https://orcid.org/0009-0007-4916-1816</contrib-id><name-alternatives><name name-style="eastern" xml:lang="ru"><surname>Грицкевич</surname><given-names>И.
Ю.</given-names></name><name name-style="western" xml:lang="en"><surname>Gritskevich</surname><given-names>I. Yu.</given-names></name></name-alternatives><bio xml:lang="ru"><p>аспирант кафедры телевидения и метрологии Санкт-Петербургского государственного университета телекоммуникаций им. проф. М.А. Бонч-Бруевича</p></bio><email xlink:type="simple">gritskevich.iu@sut.ru</email><xref ref-type="aff" rid="aff-1"/></contrib></contrib-group><aff-alternatives id="aff-1"><aff xml:lang="ru">Санкт-Петербургский государственный университет телекоммуникаций им. проф. М.А. Бонч-Бруевича<country>Россия</country></aff><aff xml:lang="en">The Bonch-Bruevich Saint Petersburg State University of Telecommunications<country>Russian Federation</country></aff></aff-alternatives><pub-date pub-type="collection"><year>2025</year></pub-date><pub-date pub-type="epub"><day>07</day><month>05</month><year>2025</year></pub-date><volume>11</volume><issue>2</issue><fpage>7</fpage><lpage>19</lpage><permissions><copyright-statement>Copyright &#x00A9; Грицкевич И.Ю., 2025</copyright-statement><copyright-year>2025</copyright-year><copyright-holder xml:lang="ru">Грицкевич И.Ю.</copyright-holder><copyright-holder xml:lang="en">Gritskevich I.Yu.</copyright-holder><license license-type="creative-commons-attribution" xlink:href="https://creativecommons.org/licenses/by/4.0/" xlink:type="simple"><license-p>This work is licensed under a Creative Commons Attribution 4.0 License.</license-p></license></permissions><self-uri xlink:href="https://tuzs.sut.ru/jour/article/view/665">https://tuzs.sut.ru/jour/article/view/665</self-uri><abstract><sec><title>Актуальность</title><p>Современные методы обработки изображений направлены на повышение их визуального качества, в частности, на адаптивное локальное контрастирование.
Для достижения высокой эффективности контрастирования ранее применялись классические алгоритмы, однако они не учитывали глобальный контекст сцены и могли приводить к усилению шумовых искажений. В связи с этим в данной работе предложен гибридный метод адаптивного локального контрастирования изображений с использованием нейросетевой регулировки параметров.</p><p>Целью статьи является разработка алгоритма, обеспечивающего оптимальное усиление контраста при минимизации шумовых артефактов и искажений, повышение контрастности и точности обнаружения объектов в режиме реального времени.</p></sec><sec><title>Сущность решения</title><p>Решение заключается в адаптивной настройке начальных параметров локального контрастирования с помощью сверточной нейронной сети, учитывающей яркостные и текстурные особенности изображения. Сверточная нейронная сеть динамически подбирает параметры обработки и размеры локальных областей для объектов и фона, улучшая видимость деталей и подавляя артефакты обработки (ореолы, блочность). Метод реализован в виде программно-аппаратного комплекса для компьютерного зрения, обработки аэрофотоснимков, видеонаблюдения и поиска пострадавших при катаклизмах.</p><p>Научная новизна работы заключается в разработке алгоритма, позволяющего автоматически регулировать параметры контрастирования на основе анализа глобального и локального контекста сцены с использованием искусственного интеллекта.</p><p>Теоретическая значимость работы состоит в гибридном подходе к адаптивной обработке изображений, основанном на применении сверточной нейронной сети для управления параметрами локального контрастирования. Управление параметрами осуществляется на основе анализа текстурных и частотных характеристик изображения, к которым нейронная сеть автоматически адаптируется.
Методика обеспечивает адаптацию к нестационарным условиям наблюдения и, как следствие, повышает устойчивость алгоритма в сложных условиях.</p><p>Практическая значимость разработанного алгоритма определяется реализацией повышения контраста объектов изображений, полученных в видимом и инфракрасном диапазонах спектра, и достоверностью их распознавания с использованием искусственного интеллекта.</p></sec></abstract><trans-abstract xml:lang="en"><p>Relevance. Modern image processing techniques are focused on enhancing visual quality, particularly through adaptive local contrast enhancement. Previously, classical algorithms were employed to achieve high contrast efficiency; however, these approaches failed to account for the global scene context and often led to noise amplification. This paper proposes a hybrid method for adaptive local image contrast enhancement utilizing neural network-based parameter adjustment.</p><p>The aim of this research is to develop an algorithm that provides optimal contrast enhancement while minimizing noise artifacts and distortions, thereby improving contrast and real-time object detection accuracy.</p><p>The essence of the proposed solution lies in employing a convolutional neural network for automatic configuration of local contrast parameters based on statistical brightness characteristics and textural image features. The proposed method incorporates image segmentation into local regions, analysis of their properties, and adaptive adjustment of processing parameters. This results in improved discernibility of low-contrast objects under various imaging conditions. The algorithm's operating principle is based on dynamically selecting local region dimensions and contrast parameters depending on background and target scene objects. The integration of a neural network module enables precise adjustment of processing parameters while minimizing undesirable artifacts such as halos and blockiness.
The methodology has been implemented as software and hardware for an optoelectronic system designed for computer vision applications, aerial image processing, video surveillance systems, and locating victims in various disaster scenarios.</p><p>The scientific novelty of this work lies in the development of an algorithm that automatically regulates contrast parameters based on analysis of both global and local scene context using artificial intelligence.</p><p>The theoretical significance of the work consists in the development of a contrast enhancement algorithm and image quality assessment method that accounts for contrast perception characteristics by both humans and AI systems under challenging observational conditions, such as fog, smoke, low illumination, etc.</p><p>The practical significance of the developed algorithm is determined by its implementation of contrast enhancement for objects in images acquired in both visible and infrared spectral ranges, and by the reliability of their recognition using artificial intelligence.</p></trans-abstract><kwd-group xml:lang="ru"><kwd>контраст</kwd><kwd>искажения</kwd><kwd>оценка качества изображений</kwd><kwd>обработка изображений</kwd><kwd>нейронные сети</kwd><kwd>частотные характеристики</kwd><kwd>локальная оценка</kwd><kwd>адаптация</kwd><kwd>локальные особенности</kwd></kwd-group><kwd-group xml:lang="en"><kwd>contrast</kwd><kwd>distortion</kwd><kwd>image quality assessment</kwd><kwd>image processing</kwd><kwd>neural networks</kwd><kwd>frequency characteristics</kwd><kwd>local evaluation</kwd><kwd>adaptation</kwd><kwd>local features</kwd></kwd-group></article-meta></front><back><ref-list><title>References</title><ref id="cit1"><label>1</label><citation-alternatives><mixed-citation xml:lang="ru">Jobson D.J., Rahman Z., Woodell G.A. Properties and performance of a center/surround retinex // IEEE Transactions on Image Processing. 1997. Vol. 6. Iss. 3. PP. 451‒462. 
DOI:10.1109/83.557356</mixed-citation><mixed-citation xml:lang="en">Jobson D.J., Rahman Z., Woodell G.A. Properties and performance of a center/surround retinex. IEEE Transactions on Image Processing. 1997;6(3):451‒462. DOI:10.1109/83.557356</mixed-citation></citation-alternatives></ref><ref id="cit2"><label>2</label><citation-alternatives><mixed-citation xml:lang="ru">Chen Y.S., Wang Y.C., Kao M.H., Chuang Y.Y. Deep Photo Enhancer: Unpaired Learning for Image Enhancement from Photographs with GANs // Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR, Salt Lake City, USA, 18‒23 June 2018). IEEE, 2018. DOI:10.1109/CVPR.2018.00660</mixed-citation><mixed-citation xml:lang="en">Chen Y.S., Wang Y.C., Kao M.H., Chuang Y.Y. Deep Photo Enhancer: Unpaired Learning for Image Enhancement from Photographs with GANs. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, CVPR, 18‒23 June 2018, Salt Lake City, USA. IEEE; 2018. DOI:10.1109/CVPR.2018.00660</mixed-citation></citation-alternatives></ref><ref id="cit3"><label>3</label><citation-alternatives><mixed-citation xml:lang="ru">Paris S., Hasinoff S.W., Kautz J. Local Laplacian filtering: edge-aware image processing with a Laplacian pyramid // Proceedings of the Conference on Special Interest Group on Computer Graphics and Interactive Techniques (SIGGRAPH '11, Vancouver, Canada, 7‒11 August 2011). New York: Association for Computing Machinery, 2011. URL: https://people.csail.mit.edu/sparis/publi/2011/siggraph/Paris_11_Local_Laplacian_Filters_lowres.pdf (Accessed 25.04.2025)</mixed-citation><mixed-citation xml:lang="en">Paris S., Hasinoff S.W., Kautz J. Local Laplacian filtering: edge-aware image processing with a Laplacian pyramid. Proceedings of the Conference on Special Interest Group on Computer Graphics and Interactive Techniques, SIGGRAPH '11, 7‒11 August 2011, Vancouver, Canada. New York: Association for Computing Machinery; 2011. 
URL: https://people.csail.mit.edu/sparis/publi/2011/siggraph/Paris_11_Local_Laplacian_Filters_lowres.pdf [Accessed 25.04.2025]</mixed-citation></citation-alternatives></ref><ref id="cit4"><label>4</label><citation-alternatives><mixed-citation xml:lang="ru">Грицкевич И.Ю., Гоголь А.А. Алгоритм безэталонной оценки качества изображений // Труды учебных заведений связи. 2024. Т. 10. № 2. С. 16‒23. DOI:10.31854/1813-324X-2024-10-2-16-23. EDN:TTPABW</mixed-citation><mixed-citation xml:lang="en">Gritskevich I., Gogol A. No-Reference Image Quality Assessment Algorithm. Proceedings of Telecommunication Universities. 2024;10(2):16‒23. (in Russ.) DOI:10.31854/1813-324X-2024-10-2-16-23. EDN:TTPABW</mixed-citation></citation-alternatives></ref><ref id="cit5"><label>5</label><citation-alternatives><mixed-citation xml:lang="ru">Rec. ITU-R BT.500-11. Methodology for subjective assessment of the quality of television pictures. ITU-R. 2002.</mixed-citation><mixed-citation xml:lang="en">Rec. ITU-R BT.500-11. Methodology for subjective assessment of the quality of television pictures. ITU-R. 2002.</mixed-citation></citation-alternatives></ref><ref id="cit6"><label>6</label><citation-alternatives><mixed-citation xml:lang="ru">Шелепин Ю.Е. Введение в нейроиконику. СПб.: Троицкий мост, 2017. 352 с. EDN:YNTJRJ</mixed-citation><mixed-citation xml:lang="en">Shelepin Yu.E. Introduction to Neuroiconics. St. Petersburg: Troickij most Publ.; 2017. 352 p. (in Russ.) EDN:YNTJRJ</mixed-citation></citation-alternatives></ref><ref id="cit7"><label>7</label><citation-alternatives><mixed-citation xml:lang="ru">Kim Y.T. Contrast enhancement using brightness preserving bi-histogram equalization // IEEE Transactions on Consumer Electronics. 1997. Vol. 43. Iss. 1. DOI:10.1109/30.580378</mixed-citation><mixed-citation xml:lang="en">Kim Y.T. Contrast enhancement using brightness preserving bi-histogram equalization. IEEE Transactions on Consumer Electronics. 1997;43(1).
DOI:10.1109/30.580378</mixed-citation></citation-alternatives></ref><ref id="cit8"><label>8</label><citation-alternatives><mixed-citation xml:lang="ru">Rahman Z., Jobson D.J., Woodell G.A. Multi-scale retinex for color image enhancement // Proceedings of the 3rd International Conference on Image Processing (Lausanne, Switzerland, 19 September 1996). IEEE, 1996. DOI:10.1109/ICIP.1996.560995</mixed-citation><mixed-citation xml:lang="en">Rahman Z., Jobson D.J., Woodell G.A. Multi-scale retinex for color image enhancement. Proceedings of 3rd IEEE International Conference on Image Processing, 19 September 1996, Lausanne, Switzerland. IEEE; 1996. DOI:10.1109/ICIP.1996.560995</mixed-citation></citation-alternatives></ref><ref id="cit9"><label>9</label><citation-alternatives><mixed-citation xml:lang="ru">Ying Z., Li G., Gao W. A Bio-Inspired Multi-Exposure Fusion Framework for Low-light Image Enhancement // arXiv preprint arXiv:1711.00591. 2017. DOI:10.48550/arXiv.1711.00591</mixed-citation><mixed-citation xml:lang="en">Ying Z., Li G., Gao W. A Bio-Inspired Multi-Exposure Fusion Framework for Low-light Image Enhancement. arXiv preprint arXiv:1711.00591. 2017. DOI:10.48550/arXiv.1711.00591</mixed-citation></citation-alternatives></ref><ref id="cit10"><label>10</label><citation-alternatives><mixed-citation xml:lang="ru">Vala H.J., Baxi A. A review on Otsu image segmentation algorithm // International Journal of Advanced Research in Computer Engineering &amp; Technology. 2013. Vol. 2. Iss. 2. PP. 387‒389.</mixed-citation><mixed-citation xml:lang="en">Vala H.J., Baxi A. A review on Otsu image segmentation algorithm. International Journal of Advanced Research in Computer Engineering &amp; Technology. 2013;2(2):387‒389.</mixed-citation></citation-alternatives></ref><ref id="cit11"><label>11</label><citation-alternatives><mixed-citation xml:lang="ru">Cybenko G. Approximation by superpositions of a sigmoidal function // Mathematics of Control, Signals and Systems. 1989. Vol. 2.
PP. 303–314. DOI:10.1007/bf02551274. EDN:OKSIPR</mixed-citation><mixed-citation xml:lang="en">Cybenko G. Approximation by superpositions of a sigmoidal function. Mathematics of Control, Signals and Systems. 1989;2:303–314. DOI:10.1007/bf02551274. EDN:OKSIPR</mixed-citation></citation-alternatives></ref><ref id="cit12"><label>12</label><citation-alternatives><mixed-citation xml:lang="ru">Wang R., Zhang Q., Fu C.W., Shen X., Zheng W.S., Jia J. Underexposed Photo Enhancement Using Deep Illumination Estimation // Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR, Long Beach, USA, 15‒20 June 2019). IEEE, 2019. DOI:10.1109/CVPR.2019.00701</mixed-citation><mixed-citation xml:lang="en">Wang R., Zhang Q., Fu C.W., Shen X., Zheng W.S., Jia J. Underexposed Photo Enhancement Using Deep Illumination Estimation. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR, 15‒20 June 2019, Long Beach, USA. IEEE; 2019. DOI:10.1109/CVPR.2019.00701</mixed-citation></citation-alternatives></ref><ref id="cit13"><label>13</label><citation-alternatives><mixed-citation xml:lang="ru">Han Y., Ye J.C. Framing U-Net via Deep Convolutional Framelets: Application to Sparse-View CT // IEEE Transactions on Medical Imaging. 2018. Vol. 37. Iss. 6. PP. 1418‒1429. DOI:10.1109/TMI.2018.2823768</mixed-citation><mixed-citation xml:lang="en">Han Y., Ye J.C. Framing U-Net via Deep Convolutional Framelets: Application to Sparse-View CT. IEEE Transactions on Medical Imaging. 2018;37(6):1418‒1429. DOI:10.1109/TMI.2018.2823768</mixed-citation></citation-alternatives></ref><ref id="cit14"><label>14</label><citation-alternatives><mixed-citation xml:lang="ru">What is Histogram Equalization and how it works? // Great Learning. 2024. URL: https://www.mygreatlearning.com/blog/histogram-equalization-explained (Accessed 25.04.2025)</mixed-citation><mixed-citation xml:lang="en">Great Learning. What is Histogram Equalization and how it works? 2024.
URL: https://www.mygreatlearning.com/blog/histogram-equalization-explained [Accessed 25.04.2025]</mixed-citation></citation-alternatives></ref><ref id="cit15"><label>15</label><citation-alternatives><mixed-citation xml:lang="ru">Liu J., Li D., Yuan C., Luo B., Wu G. A low-light image enhancement method with brightness balance and detail preservation // PLoS One. 2022. Vol. 17. Iss. 5. P. e0262478. DOI:10.1371/journal.pone.0262478. EDN:DFDSOY</mixed-citation><mixed-citation xml:lang="en">Liu J., Li D., Yuan C., Luo B., Wu G. A low-light image enhancement method with brightness balance and detail preservation. PLoS One. 2022;17(5):e0262478. DOI:10.1371/journal.pone.0262478. EDN:DFDSOY</mixed-citation></citation-alternatives></ref><ref id="cit16"><label>16</label><citation-alternatives><mixed-citation xml:lang="ru">Pizer S.M., Amburn E.P., Austin J.D., Cromartie R., Geselowitz A., Greer T., et al. Adaptive histogram equalization and its variations // Computer Vision, Graphics, and Image Processing. 1987. Vol. 39. Iss. 3. PP. 355‒368. DOI:10.1016/S0734-189X(87)80186-X</mixed-citation><mixed-citation xml:lang="en">Pizer S.M., Amburn E.P., Austin J.D., Cromartie R., Geselowitz A., Greer T., et al. Adaptive histogram equalization and its variations. Computer Vision, Graphics, and Image Processing. 1987;39(3):355‒368. DOI:10.1016/S0734-189X(87)80186-X</mixed-citation></citation-alternatives></ref><ref id="cit17"><label>17</label><citation-alternatives><mixed-citation xml:lang="ru">Kaur M., Kaur J., Kaur J. Survey of Contrast Enhancement Techniques based on Histogram Equalization // International Journal of Advanced Computer Science and Applications. 2011. Vol. 2. Iss. 7. DOI:10.14569/IJACSA.2011.020721</mixed-citation><mixed-citation xml:lang="en">Kaur M., Kaur J., Kaur J. Survey of Contrast Enhancement Techniques based on Histogram Equalization. International Journal of Advanced Computer Science and Applications. 2011;2(7).
DOI:10.14569/IJACSA.2011.020721</mixed-citation></citation-alternatives></ref><ref id="cit18"><label>18</label><citation-alternatives><mixed-citation xml:lang="ru">Zuiderveld K. VIII.5. ‒ Contrast Limited Adaptive Histogram Equalization // In: Heckbert P.S. (ed.) Graphics Gems IV. Academic Press, 1994. PP. 474‒485. DOI:10.1016/B978-0-12-336156-1.50061-6</mixed-citation><mixed-citation xml:lang="en">Zuiderveld K. VIII.5. ‒ Contrast Limited Adaptive Histogram Equalization. In: Heckbert P.S. (ed.) Graphics Gems IV. Academic Press; 1994. p. 474‒485. DOI:10.1016/B978-0-12-336156-1.50061-6</mixed-citation></citation-alternatives></ref><ref id="cit19"><label>19</label><citation-alternatives><mixed-citation xml:lang="ru">Lore K.G., Akintayo A., Sarkar S. LLNet: A Deep Autoencoder Approach to Natural Low-light Image Enhancement // Pattern Recognition. 2017. Vol. 61. PP. 650‒662. DOI:10.1016/j.patcog.2016.06.008</mixed-citation><mixed-citation xml:lang="en">Lore K.G., Akintayo A., Sarkar S. LLNet: A Deep Autoencoder Approach to Natural Low-light Image Enhancement. Pattern Recognition. 2017;61:650‒662. DOI:10.1016/j.patcog.2016.06.008</mixed-citation></citation-alternatives></ref><ref id="cit20"><label>20</label><citation-alternatives><mixed-citation xml:lang="ru">Wei C., Wang W., Yang W., Liu J. Deep Retinex Decomposition for Low-Light Enhancement. 2018. URL: http://39.96.165.147/Pub%20Files/2018/chen_bmvc18.pdf (Accessed 25.04.2025)</mixed-citation><mixed-citation xml:lang="en">Wei C., Wang W., Yang W., Liu J. Deep Retinex Decomposition for Low-Light Enhancement. 2018. URL: http://39.96.165.147/Pub%20Files/2018/chen_bmvc18.pdf [Accessed 25.04.2025]</mixed-citation></citation-alternatives></ref><ref id="cit21"><label>21</label><citation-alternatives><mixed-citation xml:lang="ru">Liu X., Ma Y., Shi Z., Chen J.
GridDehazeNet: Attention-Based Multi-Scale Network for Image Dehazing // Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV, Seoul, South Korea, 27 October ‒ 02 November 2019). IEEE, 2019. DOI:10.1109/ICCV.2019.00741</mixed-citation><mixed-citation xml:lang="en">Liu X., Ma Y., Shi Z., Chen J. GridDehazeNet: Attention-Based Multi-Scale Network for Image Dehazing. Proceedings of the IEEE/CVF International Conference on Computer Vision, ICCV, 27 October ‒ 02 November 2019, Seoul, South Korea. IEEE; 2019. DOI:10.1109/ICCV.2019.00741</mixed-citation></citation-alternatives></ref><ref id="cit22"><label>22</label><citation-alternatives><mixed-citation xml:lang="ru">Guo C., Li C., Guo J., Loy C.C., Hou J., Kwong S., Cong R. Zero-Reference Deep Curve Estimation for Low-Light Image Enhancement // Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR, Seattle, USA, 13‒19 June 2020). IEEE, 2020. DOI:10.1109/CVPR42600.2020.00185</mixed-citation><mixed-citation xml:lang="en">Guo C., Li C., Guo J., Loy C.C., Hou J., Kwong S., Cong R. Zero-Reference Deep Curve Estimation for Low-Light Image Enhancement. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR, 13‒19 June 2020, Seattle, USA. IEEE; 2020. DOI:10.1109/CVPR42600.2020.00185</mixed-citation></citation-alternatives></ref><ref id="cit23"><label>23</label><citation-alternatives><mixed-citation xml:lang="ru">Hossain F., Alsharif M.R. Image Enhancement Based on Logarithmic Transform Coefficient and Adaptive Histogram Equalization // Proceedings of the International Conference on Convergence Information Technology (ICCIT 2007, Gwangju, South Korea, 21‒23 November 2007). IEEE, 2007. DOI:10.1109/ICCIT.2007.4420457</mixed-citation><mixed-citation xml:lang="en">Hossain F., Alsharif M.R. Image Enhancement Based on Logarithmic Transform Coefficient and Adaptive Histogram Equalization. 
Proceedings of the 2007 International Conference on Convergence Information Technology, ICCIT 2007, 21‒23 November 2007, Gwangju, South Korea. IEEE; 2007. DOI:10.1109/ICCIT.2007.4420457</mixed-citation></citation-alternatives></ref><ref id="cit24"><label>24</label><citation-alternatives><mixed-citation xml:lang="ru">Stark J.A. Adaptive image contrast enhancement using generalizations of histogram equalization // IEEE Transactions on Image Processing. 2000. Vol. 9. Iss. 5. PP. 889‒896. DOI:10.1109/83.841534</mixed-citation><mixed-citation xml:lang="en">Stark J.A. Adaptive image contrast enhancement using generalizations of histogram equalization. IEEE Transactions on Image Processing. 2000;9(5):889‒896. DOI:10.1109/83.841534</mixed-citation></citation-alternatives></ref><ref id="cit25"><label>25</label><citation-alternatives><mixed-citation xml:lang="ru">Потапова А.А. Новейшие методы обработки изображений. М.: ФИЗМАТЛИТ, 2008. 496 с.</mixed-citation><mixed-citation xml:lang="en">Potapova A.A. The Latest Methods of Image Processing. Moscow: FIZMATLIT Publ.; 2008. 496 p. (in Russ.)</mixed-citation></citation-alternatives></ref></ref-list><fn-group><fn fn-type="conflict"><p>The author declares that there are no conflicts of interest.</p></fn></fn-group></back></article>
