EDITORIAL
INFORMATION TECHNOLOGIES AND TELECOMMUNICATION
Relevance. Signature analysis is currently the standard method for detecting signs of abnormal traffic behavior, but it has inherent limitations. Given these disadvantages, relying on signature analysis alone restricts the ability to detect and prevent new and unknown anomalies. The paper considers implementing a custom (non-signature) analysis in addition to signature analysis to provide more complete and reliable protection of information systems.
The aim of the study is to increase the efficiency of detecting signs of abnormal traffic behavior through the use of artificial intelligence methods.
Results. The following were developed: an algorithm for detecting network anomalies, a software tool, "Detection of Network Anomalies Based on Artificial Intelligence Methods", and a software test stand.
The novelty of the study lies in the fact that the software computes network traffic anomaly detection criteria in a shorter time than previously presented analogs and detects various anomalies without prior training on ready-made anomaly templates.
Practical significance. The results obtained can be used to classify network traffic anomalies in information systems and infrastructures.
Relevance. Current trends in information security research are driven by the growing number of threats associated with the leakage of speech information from premises via acoustic and vibroacoustic channels. Existing methods of passive and active protection, based on noise generation and libraries of pre-recorded audio signals, do not always achieve the required level of protection: they lack adaptation to the parameters of real speech and the acoustic environment of the premises, and they disregard regulatory requirements on indoor noise levels. This has necessitated the development of new algorithms for the active protection of office premises from leakage, particularly via acoustic-vibrational channels, based on the speech signals of the negotiating participants themselves.
The aim of the study is to ensure the required verbal intelligibility coefficient at the boundary of a controlled office area by improving active information protection systems through the development and application of an algorithm for generating speech-like interference that is adaptive to changes in the speech parameters of negotiators, the office environment, and noise level regulations. Information theory and digital signal processing methods were used to solve these problems.
Results. An algorithm for generating adaptive speech-like interference for use in active information protection systems in offices has been developed. The proposed algorithm generates the interference from the speech signals of the negotiators, and only when they are present, thereby enhancing its masking properties.
The novelty of the study lies in the introduction of multichannel procedures for generating adaptive speech-like interference.
The theoretical significance of this research lies in expanding our understanding of methods, models, and techniques for adaptive acoustic masking of speech information and developing an algorithm for generating speech-like interference based on speech parameter analysis.
Practical significance. The developed algorithm can be implemented using standard computing devices and acoustic systems. This makes it applicable to both current and future active speech protection systems.
In the context of the rapid development of the digital economy, the architectures of digital platforms have become a key subject of scientific and applied analysis. Most existing taxonomies and classifications of digital platforms focus on goals, functions, or business models, while architectural aspects often remain insufficiently structured. This issue is particularly relevant for analytical digital platforms, which combine the functionality of traditional digital systems with machine learning methods, thus requiring a comprehensive systems-based approach to their description and design.
The aim of the study is to systematize and analyze the architectural components of digital platforms from the standpoint of various approaches to system analysis, as well as to design a prototype of the functional architecture of a digital analytical platform using the example of the agricultural sector. The research employs methods of systems analysis, taxonomic modeling, comparative typology, and architectural design synthesis using functional, structural, object-oriented, cybernetic, network-based, evolutionary, and ontological approaches.
The result is a generalized model of the architecture of an analytical digital platform, identifying its subsystems, elements, relationships, boundaries, environment, and identifiers according to each of the seven systems analysis approaches. As a practical example, the architecture of a prototype platform for analyzing the profitability of agricultural organizations is developed, implementing a pipeline for data processing, analysis, forecasting, and visualization.
The novelty of the study lies in the comprehensive application of all major systems analysis approaches to the description of analytical platform architectures and in the formalization of an architecture that integrates data levels, models, scenarios, and ontological entity descriptions.
The practical significance of the work is the potential use of the proposed architectural model in the design of digital decision-support platforms in industries requiring advanced analytics.
ELECTRONICS, PHOTONICS, INSTRUMENTATION AND COMMUNICATIONS
Relevance. As sixth-generation (6G) wireless systems pursue extreme requirements in throughput, latency, reliability, and adaptability, the design of channel coding schemes becomes increasingly critical. This paper presents a comprehensive comparison between Low-Density Parity-Check (LDPC) codes and Polar codes, the two most promising channel coding candidates for 6G. We analyze their respective strengths across key metrics including data throughput, error-correction capability, decoding complexity, hardware implementation, and adaptability to dynamic communication scenarios. Furthermore, we explore recent advances in unified channel coding frameworks, including generalized LDPC with Polar-like components (GLDPC-PC) and artificial intelligence (AI)-assisted decoders, which aim to bridge the performance gap across diverse 6G scenarios.
Purpose. This paper aims to provide a systematic and measurable comparison of LDPC and Polar codes for 6G, while also examining the feasibility of unified coding frameworks to bridge their performance gaps.
Methods used. This study employs a systematic literature review. The analysis first evaluates LDPC and Polar codes against four key metrics: data throughput, error-correction capability, decoding complexity and hardware implementation, and flexibility. It then examines advancements in long- and short-block code design and unified frameworks. The comparison is substantiated by a quantitative analysis of documented performance data.
Results. LDPC codes demonstrate strong hardware scalability and parallelism, while Polar codes excel in short-packet error correction. Unified approaches integrate their advantages, enhancing adaptability to diverse scenarios.
Novelty. Unlike prior works with fragmented analyses, this study combines comparative evaluation with an exploration of unified frameworks, providing an integrated perspective.
Theoretical significance. The results enrich the theoretical understanding of 6G coding trade-offs. The paper offers guidance for researchers and standardization bodies in designing future coding strategies.
Practical significance. The practical significance of the work lies in the fact that the conducted comparative study of LDPC and Polar codes enables a well-founded selection of channel coding schemes for various 6G communication scenarios. The obtained results can be used in the design of 6G communication systems to optimize the choice between codes: Polar codes are suitable for short packets requiring low latency and high energy efficiency, while LDPC codes (particularly SC-LDPC) are ideal for long codes where hardware scalability and parallelism are critical. The results are also applicable to the development of unified decoders and adaptive systems capable of dynamically switching between schemes, which enhances the flexibility and efficiency of future telecommunication infrastructures.
This paper examines how the application-level delay and frame loss of an FPV video stream depend on the size of the transmitted frames compressed by a neural network codec, for information exchange channels between unmanned aerial vehicles and an external pilot station in the space segment of a hybrid orbital-ground communication network. Satellite channels built on the Starlink low-Earth-orbit constellation, as well as on the geostationary Yamal-402 and Yamal-601 satellites, are considered. The relevance of the work stems from the need to achieve a specified level of quality for FPV control services in satellite communication networks.
Methods used. Application delay and frame loss of the video stream produced by neural network codecs are measured using field testing. Video stream frames are segmented, transmitted via the UDP transport protocol, and reconstructed. The probability density of the delays is estimated using the Rosenblatt-Parzen kernel density estimation method with a Gaussian kernel.
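A compact sketch of the Rosenblatt-Parzen step described above may help the reader; this is illustrative only (the synthetic delay samples and the Silverman bandwidth rule are assumptions, not the authors' measured data or code):

```python
import numpy as np

def parzen_density(samples, grid, h):
    """Rosenblatt-Parzen estimate: f(x) = (1/(n*h)) * sum_i K((x - x_i)/h),
    with the Gaussian kernel K(u) = exp(-u^2 / 2) / sqrt(2*pi)."""
    u = (grid[:, None] - samples[None, :]) / h
    return np.exp(-0.5 * u**2).sum(axis=1) / (len(samples) * h * np.sqrt(2 * np.pi))

# Synthetic stand-in for measured one-way delays, in milliseconds.
rng = np.random.default_rng(0)
delays = rng.normal(40.0, 5.0, size=500)
h = 1.06 * delays.std() * len(delays) ** (-0.2)          # Silverman's rule of thumb
grid = np.linspace(delays.min() - 3 * h, delays.max() + 3 * h, 400)
density = parzen_density(delays, grid, h)
area = density.sum() * (grid[1] - grid[0])               # should integrate to ~1
```

The bandwidth h controls the smoothing trade-off: too small reproduces sampling noise, too large blurs multimodal delay behavior, which is why the choice matters for satellite-link delay data.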
Results. Average transmission delays and frame losses of a video stream (compressed by a neural network codec) via satellite communication systems in low-Earth and geostationary orbits are obtained. The dependence of the video stream delay distribution on the payload size is reconstructed, and the nature of the delay distribution for the neural-network-compressed stream is determined. The novelty of the results lies in studying the nature of video stream delays when the FPV control service is implemented through various space segments of a hybrid orbital-ground communication network using neural network video compression codecs.
Practical significance. The results can be used in modeling applied satellite information exchange channels for implementing the FPV control service in order to form an optimal configuration of the neural network codecs used.
Relevance. Adaptive signal processing is a key technology in modern satellite systems. Its use significantly improves the efficiency of radio engineering systems by improving interference immunity and increasing the operating range. Iterative adaptation algorithms are used to implement spatial filtering in real time. An analysis of existing developments shows that the vast majority of solutions are based on least mean squares (LMS) and recursive least squares (RLS) algorithms. The popularity of these methods is due to their relative simplicity of implementation and their optimal characteristics in a stationary electromagnetic environment. However, in a dynamically changing signal-to-noise environment their effectiveness decreases sharply, and in these conditions non-stationary algorithms based on the Kalman filter are used, the best known of which are the constant modulus algorithm based on the unscented Kalman filter (UKF-CMA) and the minimum variance distortionless response algorithm based on the extended Kalman filter (EKF-MVDR).
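For readers unfamiliar with the stationary baseline mentioned above, the complex-valued LMS update can be sketched in a few lines. This is a toy illustration, not the paper's model: the 4-element array geometry, pilot signal, noise level, and step size are all assumptions.

```python
import numpy as np

def lms_filter(x, d, mu):
    """Complex LMS: y = w^H x, e = d - y, w <- w + mu * conj(e) * x."""
    n, m = x.shape
    w = np.zeros(m, dtype=complex)
    err = np.empty(n, dtype=complex)
    for k in range(n):
        e = d[k] - np.vdot(w, x[k])      # np.vdot conjugates its first argument
        err[k] = e
        w = w + mu * np.conj(e) * x[k]
    return w, err

# Toy scenario: a 4-element array receives a BPSK pilot through a fixed
# steering vector plus white noise; LMS learns weights that recover the pilot.
rng = np.random.default_rng(1)
steer = np.exp(1j * np.pi * np.arange(4) * np.sin(0.3))
pilot = np.sign(rng.standard_normal(2000)).astype(complex)
noise = 0.1 * (rng.standard_normal((2000, 4)) + 1j * rng.standard_normal((2000, 4)))
x = np.outer(pilot, steer) + noise
w, err = lms_filter(x, pilot, mu=0.01)
early = np.mean(np.abs(err[:100]) ** 2)   # error power before convergence
late = np.mean(np.abs(err[-100:]) ** 2)   # error power after convergence
```

The sharp drop in error power over the run illustrates why LMS works well in a stationary environment; when the steering vector drifts during the run, this fixed-step update lags, which motivates the Kalman-filter-based alternatives discussed in the abstract.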
The aim of the study was to improve the signal-to-noise ratio by using adaptive signal processing algorithms in geostationary satellite communication systems.
The work used methods of mathematical modeling of adaptive spatial filtering algorithms for satellite communication channels in the MATLAB environment.
In the course of solving the scientific problem, the stability of both stationary algorithms (LMS and RLS) and non-stationary algorithms based on Kalman filtering (UKF-CMA, EKF-MVDR, UKF-MVDR) was analyzed in a geostationary satellite communication system for various environments, such as urban, suburban, and rural areas. The computational complexity, convergence speed, and signal-to-noise ratio gain of the algorithms under study were also analyzed in stationary and non-stationary signal-to-noise conditions.
The scientific novelty of this work lies in proposing a modification of the EKF-MVDR algorithm based on an unscented Kalman filter (UKF-MVDR) to improve the algorithm's stability in non-stationary signal-to-noise conditions as applied to adaptive signal processing tasks.
The theoretical significance of this work lies in the use of spatial signal processing algorithms in geostationary satellite communication systems to ensure stable operation in stationary and dynamic signal-to-noise environments.
Effective radio resource scheduling at the Medium Access Control (MAC) layer is critically important for ensuring quality of service in mobile networks. The use of machine learning and artificial intelligence for MAC-layer scheduling is becoming a promising direction. Existing general-purpose simulators (MATLAB, ns-3, OMNeT++) are insufficiently optimized for in-depth research of resource scheduling algorithms and have limitations in their integration.
The purpose of this article is to develop a specialized simulation model for LTE (Long Term Evolution) network resource scheduling at the MAC layer for investigating both classical and intelligent scheduling algorithms.
The core of the proposed solution lies in creating a modular simulation model that incorporates different user mobility models, radio propagation models, traffic generation models, and classical scheduling algorithms (Round Robin, Proportional Fair, Best CQI). The model specializes in detailed simulation of MAC-layer processes. The system is implemented in Python with modular architecture enabling integration of machine learning and artificial intelligence-based algorithms. The source code is hosted in an open GitHub repository.
Experiments were conducted for an infinite buffer simulation scenario with three users from different mobility classes in an urban environment. Three classical scheduling algorithms were tested with evaluation of throughput, Jain's fairness index, and spectral efficiency.
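The evaluation metrics and one of the tested schedulers mentioned above are compact enough to state directly. The sketch below is illustrative only (the rate values are made-up numbers, not experimental results from the paper): it shows Jain's fairness index and the per-TTI selection rule of a Proportional Fair scheduler.

```python
def jain_index(throughputs):
    """Jain's fairness index: (sum x)^2 / (n * sum x^2); equals 1 when all users
    receive identical throughput, and approaches 1/n under maximal unfairness."""
    s = sum(throughputs)
    return s * s / (len(throughputs) * sum(t * t for t in throughputs))

def pf_choice(inst_rates, avg_rates):
    """Proportional Fair: schedule the user maximizing the ratio of its current
    achievable rate to its long-term average rate."""
    return max(range(len(inst_rates)), key=lambda i: inst_rates[i] / avg_rates[i])

equal = jain_index([5.0, 5.0, 5.0])         # perfectly fair allocation
skewed = jain_index([9.0, 0.5, 0.5])        # one user dominates
user = pf_choice([10.0, 8.0], [5.0, 2.0])   # ratios 2.0 vs 4.0: user 1 wins
```

Round Robin maximizes Jain's index at the cost of throughput, Best CQI does the opposite, and Proportional Fair trades between the two, which is why the three make a natural baseline set for the experiments described.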
The scientific novelty of the solution lies in creating a specialized simulation model optimized for investigating MAC-layer scheduling algorithms with the capability to integrate machine learning methods and providing flexibility in configuring various simulation scenarios.
The theoretical significance consists in expanding the toolkit for studying mobile network resource scheduling algorithms and establishing a foundation for developing intelligent schedulers.
The practical significance is providing researchers with a specialized tool for developing, testing, and comparing scheduling algorithms, as well as the ability to adapt the model for 5G/6G networks and integrate quality-of-service-aware schedulers.
Beamforming technologies in 5G/6G and fractional lambda switching are impossible without fast (sub-nanosecond) packet switching. Existing microresonator devices and the like are oriented toward low-photon signals and are not effective on traditional G.703 / IEEE 802.3ba fiber-optic lines. Methods and devices for fast switching of optical packets are therefore relevant.
The purpose of the work is to create a new non-relational method for fast switching of signals and packets in all-optical networks based on chirped pulses. The scientific task is to develop a multi-port interference wavelength separation device with a small step.
Methods used: numerical modeling in the HFSS package, methods of probability theory.
In the course of solving the scientific problem, an interference pattern was obtained in the working area of the device, a spectrally selective output mirror was designed, and the refractive index gradient was refined.
Novelty: a method of fast optical switching, a two-resonator separation device with a developed output mirror structure and a refined refractive index is proposed.
Practical significance: the device is designed for packet 5G / 6G networks without buffering.
The results of the work are of interest for the design of new generations of optical switches.
The practical implementation of the device improves the performance of packet-switched networks.
Relevance. The growing number of terminals and the increasing intensity of connections in satellite communication networks with a "star" topology make the choice of an effective mechanism for accessing a shared radio channel a pressing problem. The well-known deterministic and random access approaches have significant limitations. At the same time, there are no clear analytical criteria for choosing between the mechanisms depending on the load, which makes it difficult to optimize network performance.
Purpose. To compare the effectiveness of two mechanisms for entering satellite terminals into a network with a "star" topology: dedicated-slot access and random access. The assessment aims to identify the conditions under which one mechanism is superior to the other in key performance indicators.
Methods. The solution of the problem is based on a combination of analytical and simulation modeling. To evaluate the effectiveness of random access, a strict combinatorial derivation of the mathematical expectation formula for the number of slots selected by exactly one terminal was carried out. The verification of the analytical model was performed using stochastic modeling in Python.
Result. A validated analytical model has been obtained that makes it possible to accurately predict the effectiveness of the random access mechanism. The data obtained are applicable to the design of satellite communication networks for optimizing terminal entry time and channel resource allocation. The elements of novelty are the rigorous analytical derivation and verification of the formula for the mathematical expectation of the number of successfully occupied slots under random access, which makes it possible to predict performance accurately without large-scale simulation. The novelty also includes the establishment of a quantitative criterion for choosing an access mechanism. The proposed model takes into account the real conditions of terminal competition for channel resources and is applicable to the analysis of protocols such as ALOHA and TDMA.
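One plausible form of the expectation described above, under the assumption that each of k terminals independently picks one of N slots uniformly at random (the abstract does not spell out the protocol details), is E = k(1 - 1/N)^(k-1): a slot is "successfully occupied" when exactly one terminal chose it. The sketch below mirrors the abstract's approach of checking the analytic formula against stochastic modeling in Python.

```python
import random

def expected_singletons(n_slots, k):
    """E[# slots chosen by exactly one of k terminals] = k * (1 - 1/N)**(k - 1).
    Derivation: a given terminal's slot stays a singleton iff all other k-1
    terminals avoid it, each with probability (1 - 1/N)."""
    return k * (1 - 1 / n_slots) ** (k - 1)

def simulate_singletons(n_slots, k, trials=20000, seed=7):
    """Monte Carlo check: count slots hit exactly once, averaged over trials."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        counts = [0] * n_slots
        for _ in range(k):
            counts[rng.randrange(n_slots)] += 1
        total += sum(1 for c in counts if c == 1)
    return total / trials

analytic = expected_singletons(20, 10)    # about 6.30 successful slots on average
empirical = simulate_singletons(20, 10)
```

With the formula in closed form, the load-dependent crossover against a deterministic scheme (which always yields min(k, N) successful entries) can be located without simulation, which is the kind of selection criterion the abstract describes.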
Practical significance. The presented solution is proposed to be used in the design and adaptive management of the MAC layer in VSAT satellite networks, IoT systems and telemetry networks. The obtained criteria for selecting an access mechanism can be implemented as dynamic reconfiguration algorithms in software-configurable networks, allowing automatic switching between modes depending on the current load. This will ensure optimal use of bandwidth, minimize delays, and increase overall network stability.
This article discusses the concept of the Computing Power Network (CPN), a new paradigm of distributed computing designed to distribute, manage, and optimally use computing resources on user demand, similar to the distribution of electrical energy in power systems.
The relevance of the study is due to the fact that, with the development of the digital society, more and more applications require not only high computing power but also low latency, making computing and communication networks tightly integrated. In contrast to cloud, edge, and fog computing technologies, a new paradigm for organizing geographically distributed computing is required that can provide more flexible, efficient, and high-quality provision of computing power on user demand to support a variety of promising applications (artificial intelligence / machine learning, big data analysis, the industrial Internet of Things, smart manufacturing, unmanned transport, etc.). By analogy with the distribution of electrical energy in power systems, a new model for distributing computing resources was recently proposed: the CPN. It provides computing power as "computing energy" that can be transmitted, accumulated, and consumed in a distributed network of nodes, similar to how electrical energy is distributed between generators, substations, and consumers in power grids.
The aim of this study is to examine the architectural and functional features of computing power networks and to analyze the current state of international standardization of this technology.
Methods include analysis of the scientific and regulatory literature and an assessment of the current state of international standardization of computing power network technologies.
Results. The study analyzed the general principles of construction, the structure, and the functional architecture of the computing power network, and determined that the full functioning of a CPN requires a developed network infrastructure based primarily on software-defined networking (SDN) technologies and network management platforms using artificial intelligence.
Scientific novelty. The study is the first attempt at a systematic analysis of the computing power network concept in the Russian-language scientific literature. The work fills an existing gap in domestic science, offering a comprehensive view of how a computing power network can be built and operated using the technologies of existing and prospective communication networks.
The theoretical significance of the work lies in creating a basis for studying and integrating prospective fixed and mobile 5G / 6G communication networks with cloud and edge computing to implement the concept of a network of computing power.
ISSN 2712-8830 (Online)