ELECTRONICS, PHOTONICS, INSTRUMENTATION AND COMMUNICATIONS
Relevance. Direct spread spectrum signals are now widely used in navigation and communication systems. They prevail in modern satellite navigation systems and, in particular, are used in various communication systems with code division multiplexing. In this regard, the task of building demodulators for direct spread spectrum signals is of key importance. Of much importance in the construction of such demodulators is the problem of chip rate variability.
The purpose of the study is to propose a demodulator structure focused on solving this problem.
Methods. The research is based on computer modeling methods.
Solution. The paper proposes an approach to the construction of demodulators for direct spread spectrum signals based on modern methods of digital signal processing. It is shown that the main advantage of the proposed approach is the possibility of retuning demodulators to a variable chip rate. Based on the results obtained, a scheme for a direct spread spectrum signal demodulator using resampling methods is proposed. Resampling, in turn, is implemented on the basis of polynomial interpolation using Lagrange polynomials. A resampler structure is proposed that is similar to the structure of an interpolating filter with a finite impulse response. The presented simulation results show the effectiveness of the proposed approach.
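As an illustration of the resampling idea, the sketch below implements a 4-tap (cubic) Lagrange interpolator that evaluates the input stream at arbitrary fractional sample positions, the core operation of a chip-rate-agnostic demodulator. The function names, the polynomial order, and the test signal are our assumptions, not the paper's exact filter structure.

```python
import numpy as np

def lagrange_coeffs(mu: float) -> np.ndarray:
    """Coefficients of a 4-tap (cubic) Lagrange interpolator for
    fractional delay mu in [0, 1); taps sit at offsets -1, 0, 1, 2."""
    offsets = np.array([-1.0, 0.0, 1.0, 2.0])
    coeffs = np.ones(4)
    for k in range(4):
        for m in range(4):
            if m != k:
                coeffs[k] *= (mu - offsets[m]) / (offsets[k] - offsets[m])
    return coeffs

def resample(x: np.ndarray, ratio: float) -> np.ndarray:
    """Resample x by the given input/output rate ratio using cubic
    Lagrange interpolation (a simple FIR-like resampler)."""
    out = []
    t = 1.0                       # current position, in input samples
    while t < len(x) - 2:
        n, mu = int(t), t - int(t)
        out.append(lagrange_coeffs(mu) @ x[n - 1:n + 3])
        t += ratio                # step by the (possibly fractional) ratio
    return np.array(out)

# Example: re-tune a sampled chip stream from 8 to 7.5 samples per chip
x = np.sin(2 * np.pi * 0.05 * np.arange(200))
y = resample(x, 8.0 / 7.5)
```

Because the fractional step `ratio` is a runtime parameter, the same structure serves any chip rate, which is the retuning advantage the abstract points to.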
Novelty. The currently common methods of implementing delay synchronization in direct spread spectrum signal demodulators do not appear to sufficiently meet modern requirements. The implementation of delay synchronization schemes based on resampling is practically not discussed in well-known works. At the same time, modern methods and devices of digital signal processing make it possible to ensure an efficient hardware implementation of the scheme in question. In this context, the approach to the construction of demodulators proposed in the paper appears highly relevant.
Significance. The results of the work can be used in the construction of direct spread spectrum signal demodulators for a wide range of communication and navigation systems. The synchronous sampling structure proposed in this paper is very promising, especially for variable chip rate demodulators.
The article is devoted to assessing the possibility of forming an information leakage channel from an optical fiber defect created by thermal exposure. The properties of optical fiber inhomogeneities caused by such exposure have hardly been studied to date, which determines the relevance of this research. In view of the above, the purpose of the study is to determine the characteristics of optical fiber inhomogeneities caused by thermal exposure.
The methods used. The paper calculates the radiation power losses introduced by a defect caused by high-temperature thermal action, as well as the radiation power coupled out of the optical fiber through the defect. During the studies, the characteristics of optical fiber inhomogeneities caused by thermal action were also estimated using reflectograms.
The result. The work shows that local thermal exposure makes it possible to form a defect in an optical fiber that allows part of the optical radiation to be emitted beyond the fiber, i.e. to create a channel for unauthorized data retrieval. The insertion loss of radiation power on the created defect increased with the duration of thermal exposure of the optical fiber. When the exposure time was less than 1 s, it was not possible to form a defect with significant insertion loss; when the exposure time was more than 10 s, the insertion loss on the defect exceeded 20 dB, at which point data transmission over zonal and trunk fiber-optic communication lines ceases. It is shown that as the wavelength of the optical radiation propagating along the fiber increases, the power loss on the defect formed by thermal exposure increases. It has been established that, for the same power loss on a defect formed by thermal action, the optical radiation power extracted from such a defect is greatest when using G652 optical fiber and least when using G657 fiber.
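For reference, the insertion loss quoted above relates input and output power by the standard logarithmic ratio; a minimal sketch (our illustration of the textbook formula, not a computation from the paper):

```python
import math

def insertion_loss_db(p_in_mw: float, p_out_mw: float) -> float:
    """Insertion loss of a fiber defect: IL = 10*log10(P_in / P_out)."""
    return 10.0 * math.log10(p_in_mw / p_out_mw)

# A 20 dB loss (the level at which, per the abstract, transmission over
# zonal and trunk lines ceases) means only 1 % of the power gets through.
print(insertion_loss_db(1.0, 0.01))  # -> 20.0
```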
The scientific novelty of the work consists in the study of previously unexplored properties of optical fiber inhomogeneities caused by thermal exposure.
Practical Significance. The results presented in the article can be used in the design of systems for protecting information transmitted over fiber-optic communication lines.
Relevance. Research on ring resonator structures is of interest to developers of microwave devices: certain features of ring elliptical resonators give rise to unique properties of a device's transmission characteristics. The method of excitation of a CER is particularly important, since under certain conditions a traveling wave mode can be obtained in these structures. This paper is devoted to the synthesis of microwave devices, as well as to the study of methods for exciting the resonator and evaluating the mode of the wave process in the structure.
Purpose. The purpose of the study is to analyze and organize information about the development of microwave devices using ring elliptical resonators (CERs). The authors also want to test the results of using dual CERs.
Methods. In this work, the authors conducted an analytical review of recent scientific publications and performed computer modeling of microstrip ring elliptical resonators operating in the ultrahigh frequency range. The paper also includes the results of our experiments, verified by various researchers, including work supported by grants from the Russian Foundation for Basic Research.
Result. The article explores the unique characteristics of loop strip filters and highlights the limitations of conventional strip resonators. It describes the design of a ring elliptical resonator (CER) and suggests its potential as an alternative to microstrip resonators. The paper presents the results of numerous experiments on the synthesis of microwave devices based on CERs, including single and double resonators, preselective filters, and amplifiers and oscillators based on a ring resonator and an active bipolar element. Additionally, the issue of coupling the resonator to the main transmission line is addressed. The results of modeling several devices that limit the direction of propagation of an electromagnetic wave in an annular resonator are presented.
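As background for such designs, a circular microstrip ring resonates when its mean circumference holds an integer number of guided wavelengths; the sketch below evaluates this textbook condition. The paper's resonators are elliptical, so this is only a design starting point, and the parameter values are our assumptions.

```python
import math

C0 = 299_792_458.0  # speed of light in vacuum, m/s

def ring_resonance_ghz(mean_radius_mm: float, eps_eff: float, n: int = 1) -> float:
    """n-th resonance of a circular microstrip ring resonator.
    Resonance condition: 2*pi*r = n * lambda_g = n * c / (f * sqrt(eps_eff)),
    hence f_n = n * c / (2 * pi * r * sqrt(eps_eff))."""
    r = mean_radius_mm * 1e-3
    return n * C0 / (2 * math.pi * r * math.sqrt(eps_eff)) / 1e9

# Example: 10 mm mean radius, effective permittivity ~6.7 (assumed substrate)
print(ring_resonance_ghz(10.0, 6.7))  # ~1.84 GHz fundamental
```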
Scientific novelty. This article introduces a new design of a double elliptical resonator based on a microstrip line. It also describes the results of an experiment showing that the resonator topology can achieve a filter rejection level of over 70 dB. The authors also discuss the problem of selecting a feed method and ensuring the desired wave propagation mode in the CER.
Practical significance. The results obtained in the course of this work can be used to create a traveling wave resonator on a microstrip line or in other planar or volumetric configurations. The results of the study also serve as the basis for the creation of a generalized theory of synthesis of ring resonators in the microwave wavelength range.
Relevance. The quite low power of the useful information signals of global satellite navigation systems near the Earth's surface, along with the noticeable ongoing increase in the number of easily available and efficient portable wideband jammers, makes the problem of improving the antijamming capability of satellite radio navigation devices especially relevant from both practical and scientific points of view. Therefore, the goal of this research was to increase the antijamming capability of global satellite navigation systems by processing the input signals of the corresponding receiving apparatus with special spatial filters. To achieve this goal, the scientific task of improving antijamming capability in radio navigation devices by means of space-frequency signal processing was addressed.
The methods used. During the research, various spatial signal processing algorithms were considered, including both those that operate without any information about the interference environment external to the receiving radio navigation system and those that use knowledge of the number and relative positions of the jamming sources. Additionally, different methods for estimating the number and angular directions of interference sources were studied, as well as modern cost-function optimization algorithms used for determining the locations of signal sources.
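One representative of the first, "blind" class (requiring no knowledge of the interference environment) is the power-inversion spatial filter; the numpy sketch below uses our own naming and is not necessarily one of the algorithms tested in the paper.

```python
import numpy as np

def power_inversion_weights(snapshots: np.ndarray) -> np.ndarray:
    """Power-inversion (minimum-power) spatial filter for an N-element
    array: w = R^-1 e / (e^H R^-1 e), with the first element as the
    reference.  Strong wideband jammers dominate the covariance R, so
    minimizing output power places nulls on them while keeping unit
    gain on the reference channel; GNSS signals, which sit below the
    noise floor, are barely affected."""
    # snapshots: complex array of shape (N_antennas, K_samples)
    R = snapshots @ snapshots.conj().T / snapshots.shape[1]
    e = np.zeros(R.shape[0], dtype=complex)
    e[0] = 1.0
    w = np.linalg.solve(R, e)          # R^-1 e
    return w / (e.conj() @ w)          # normalize reference-channel gain

# Usage: the filtered array output is y[k] = w^H x[k]
# x = ...  snapshots recorded from an N-element antenna array
# y = power_inversion_weights(x).conj() @ x
```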
The scientific novelty of this work lies in the use of new algorithms that implement individual signal processing stages and supply the filtering algorithms with the necessary information during the solution, as well as in combining known methods with new approaches to their design.
The results. In the course of solving the scientific task, the performance quality metrics of all the considered algorithms were compared by computer modeling using recordings of real satellite navigation signals with the addition of a varying number of uncorrelated wideband interference sources. As a result of the modeling, performance quality measures were obtained for all the investigated algorithms, a comparative analysis was conducted, and the methods with the best characteristics were selected.
The significance of the results lies in the possibility of using the considered algorithms in the design of real antijamming satellite navigation devices.
INFORMATION TECHNOLOGIES AND TELECOMMUNICATION
Relevance. The use of digital electrocardiographs and cardiac monitors with built-in algorithms for automatic processing, analysis, and interpretation of electrocardiograms allows a doctor to effectively diagnose cardiac arrhythmias. It is known that, to provide emergency care to a patient, the duration of arrhythmia diagnostics should not exceed several tens of seconds, which calls for new real-time algorithms for detecting informative features that indicate arrhythmia. The need to introduce new and effective technologies for diagnosing cardiovascular diseases is also reflected in public health development programmes.
Research goal. Development and quality analysis of an algorithm for detecting reference points on a digital electrocardiogram that carry informative features for the arrhythmia diagnosis procedure.
The methods used. The study is based on an analysis of existing approaches to the problem of reference point detection on a digital electrocardiogram, as well as on testing the proposed algorithms by mathematical modelling. The quality indicators of the algorithms were defined in accordance with the principles of signal detection theory and diagnostic testing, at the junction of which the task of electrocardiogram reference point detection lies. The proposed algorithm was tested on the MIT-BIH Arrhythmia Database, which is widely used for verification and validation of real-time digital electrocardiogram signal processing algorithms.
The results. The study proposes an algorithm for detecting reference points on a digital electrocardiogram that carry informative features for the arrhythmia diagnosis procedure. The proposed algorithm is based on digital signal filtering and uses a decision rule built on a three-step two-threshold comparison of pre-processed electrocardiogram signal values over a sliding window. An experiment on the open, verified MIT-BIH Arrhythmia Database showed that the quality of the proposed reference point detection algorithm is higher than that of the algorithms used in modern digital electrocardiographs and cardiac monitors. The proposed algorithm, based on digital filtering and the three-step two-threshold decision rule, has elements of scientific novelty.
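The paper's three-step rule is not reproduced here, but the sketch below shows the general shape of a two-threshold sliding-window detector on a pre-processed ECG; all constants (window lengths, threshold fractions, refractory period) are illustrative assumptions, not the paper's values.

```python
import numpy as np

def detect_r_peaks(ecg: np.ndarray, fs: int) -> list[int]:
    """Toy two-threshold sliding-window R-peak detector (illustrative
    only).  Steps: difference (crude high-pass), square, moving-window
    integrate, then accept local maxima above a high threshold, and
    fall back to a low threshold when no beat appears for a while."""
    d = np.diff(ecg, prepend=ecg[0])            # crude high-pass
    s = d * d                                   # squaring
    w = int(0.15 * fs)                          # 150 ms integration window
    m = np.convolve(s, np.ones(w) / w, mode="same")
    hi, lo = 0.6 * m.max(), 0.25 * m.max()      # two decision thresholds
    refractory = int(0.2 * fs)                  # 200 ms blanking
    peaks, last = [], -refractory
    for i in range(1, len(m) - 1):
        if m[i] >= m[i - 1] and m[i] >= m[i + 1] and i - last > refractory:
            if m[i] > hi or (m[i] > lo and i - last > int(1.5 * fs)):
                peaks.append(i)
                last = i
    return peaks
```

Such a detector can be benchmarked against MIT-BIH annotations by counting true/false detections within a tolerance window around each annotated beat.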
The significance. The results of this work can be used in the development of digital heart rate monitors and cardiac devices, and for automatic processing, analysis, and real-time computer-assisted interpretation of digital electrocardiogram signals.
Relevance. Nowadays, with ubiquitous technology, artificial intelligence, and generally available Internet access, penetration into the systems of banks, institutions, and social networks has become a well-studied practice accessible to all groups and ages. One of the main tasks has therefore become protecting confidential information from hackers while keeping user authentication and identification simple. Biometric systems have come to the fore, including mouse movement dynamics and keystroke dynamics, which reveal each person's typing style and mouse behavior. Soft biometrics is an interesting and inexpensive biometric method that does not require additional equipment: the system identifies a person based on the information they enter into a special field. Typing-hand identification falls into the category of behavioral soft biometrics; that is, the user's patterns reflect the individual program of actions they follow when using the site.
The goal of this work is to improve the security level by creating a function that strengthens the authentication system.
Research methods. The work used methods of analysis and synthesis, the theory of algorithms, the laws of kinematics, neural networks, keystroke dynamics, and soft biometrics.
Results. A method for extracting dynamic characteristics of keystrokes is described. A neural network is created, and a threshold value is determined for identifying the typing hand.
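For orientation, keystroke-dynamics features are commonly the dwell time of each key and the flight time between consecutive keys; the sketch below extracts these and applies a decision threshold to a network output. The event format, names, and the 0.5 threshold are our assumptions, not the paper's exact feature set.

```python
import numpy as np

# Each event: (key, press_time_ms, release_time_ms) -- format assumed
def keystroke_features(events):
    """Dwell times (hold duration per key) and flight times
    (release-to-press gaps between consecutive keys), the usual
    keystroke-dynamics features."""
    dwell = [r - p for _, p, r in events]
    flight = [events[i + 1][1] - events[i][2] for i in range(len(events) - 1)]
    return np.array(dwell, float), np.array(flight, float)

def classify_hand(score: float, threshold: float = 0.5) -> str:
    """Map a network output in [0, 1] to a typing-hand label via a
    decision threshold like the one determined in the paper."""
    return "right" if score >= threshold else "left"

events = [("a", 0, 95), ("s", 140, 230), ("d", 300, 410)]
dwell, flight = keystroke_features(events)
print(dwell, flight, classify_hand(0.72))
```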
Scientific novelty. Unlike known authentication methods, the proposed method determines the typing hand on the keyboard with a neural network, using the laws of kinematics, soft biometrics, and extraction of keystroke dynamics, and evaluates the accuracy of determining the typing hand.
Significance. The proposed solution makes it possible to increase the security of user authentication, speed up implementation, and reduce cost. The results obtained in the work are positive and can be used in the near future. Moreover, soft biometric measurements depend on human behavioral patterns, which complicates impersonation of a user. It is difficult to imitate typing behavior, since it is ballistic (semi-autonomous), which makes behavioral information valuable as a soft biometric method.
Relevance. One of the problems that must be solved when creating RFID systems is the reader's multiple access to a group of tags located in a limited space: the interrogation signal triggers a simultaneous response from many tags, which leads to collisions (conflicts) of the response signals. This problem has not been solved for passive chipless tags based on surface acoustic wave (SAW) technology, whose code is set during manufacture and cannot be changed during operation.
The purpose of the study is to develop algorithms for synthesizing groups of codes that provide a controlled level of pairwise correlation of the selected tag signals and thereby ensure the specified accuracy of tag identification. The proposed algorithms are based on code concatenation procedures and on inductive construction of code groups of a given size and correlation level. For the algorithm that forms a group of codes with the required correlation coefficient and for the algorithm that combines code groups into complete and maximal groups, properties have been proven that confirm their applicability to preparing groups of SAW tags matching the number of objects requiring identification and the required identification accuracy, taking into account the number of tags in the group, the radio propagation conditions in the reader's coverage area, the number of repeated readings of the tag codes, and the algorithms for joint processing of all the received data.
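The selection criterion at the heart of such constructions can be illustrated by a greedy procedure that admits a candidate code only if its pairwise correlation with every code already in the group stays below a bound. The sketch below (zero-lag correlation only, exhaustive candidate scan) is our simplification, not the paper's concatenation-based algorithm.

```python
import numpy as np
from itertools import product

def corr(a: np.ndarray, b: np.ndarray) -> float:
    """Normalized zero-lag correlation of two +/-1 code sequences."""
    return abs(int(a @ b)) / len(a)

def build_group(length: int, rho_max: float, size: int):
    """Greedy group construction: scan all +/-1 codes of the given
    length and keep those whose correlation with every code already
    accepted does not exceed rho_max."""
    group = []
    for bits in product((-1, 1), repeat=length):
        c = np.array(bits)
        if all(corr(c, g) <= rho_max for g in group):
            group.append(c)
            if len(group) == size:
                break
    return group

codes = build_group(length=10, rho_max=0.2, size=8)
print(len(codes))  # how many codes with pairwise correlation <= 0.2 were found
```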
The methods used. Methods of coding theory and correlation analysis.
Result. The developed algorithm is a tool for creating modern coding systems for SAW tags.
The scientific novelty. Well-known multiple access algorithms for RFID systems are proposed in the GEN1 and GEN2 EPC Global standards and assume that the tag has a chip and a power supply, which makes it possible to implement protocols in which the reader controls the tag using special commands. The proposed multiple access algorithm is applicable to passive SAW tags, including those moving at high speed and/or located in aggressive environments, since, unlike active RFID tags, these tags do not use silicon technology.
Practical significance. The use of the proposed set of algorithms will increase the efficiency of marking systems by reducing the identification time of objects located in a confined space.
Relevance. The current paper is the second part of the paper “Advance in Applied Cryptography Theory: Survey and Some New Results. Part 1. Key Cryptography” published in the journal PTU, no. 4, 2024. It is devoted to such a specific area of applied cryptography as keyless cryptography (KC). The relevance of the current paper lies in the fact that the methods considered in it make it possible to provide confidentiality of information transmission over public communication channels either without any prior encryption, by exploiting natural properties of the communication channels, or with conventional key cryptography but using keys established beforehand by means of KC.
These natural properties of communication channels can be the following: additive noise, multipath wave propagation, MIMO technology, and the existence of a feedback channel.
Our paper starts with a consideration of Wyner's wiretap channel concept and the corresponding encoding and decoding methods that provide very reliable information transmission over the main channel and a negligible amount of information leakage to eavesdroppers over the wiretap channel. Next, a scenario with commutative encryption (CE) is investigated, together with the corresponding message exchange protocol over an ordinary noiseless public channel, which provides security of the encrypted information without any key exchange between users in advance. It is established which of the well-known symmetric and asymmetric ciphers are commutative and which are not. The next model concerns fading channels under the Dean-Goldsmith protocol within MIMO technology. We prove that this protocol is secure if, and only if, the number of eavesdropper antennas is less than the number of antennas at the legitimate users. The next scenario employs variable directional antennas (VDA), and we establish under which conditions on the locations of legitimate users and eavesdroppers such an approach is secure, given that the number of propagation rays is at least two. We show in the next section that there is an attack compromising the recently proposed EVESkey cryptosystem, and hence it is not secure despite the claim of its authors.
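To recall what commutativity buys, the sketch below implements the classical three-pass message exchange over a public channel using a commutative cipher (modular exponentiation, Pohlig-Hellman style); it illustrates the CE idea in general, not the specific protocol analyzed in the paper, and the modulus choice is ours.

```python
# Three-pass protocol with a commutative cipher (textbook illustration).
# E_a(E_b(m)) == E_b(E_a(m)) because exponents commute modulo p-1.
import secrets
from math import gcd

P = 2**127 - 1  # a well-known Mersenne prime (illustrative choice)

def keypair():
    """Random exponent e coprime to P-1 and its inverse d mod P-1."""
    while True:
        e = secrets.randbelow(P - 3) + 2
        if gcd(e, P - 1) == 1:
            return e, pow(e, -1, P - 1)

def enc(m, x):  # encryption and decryption are both m^x mod P
    return pow(m, x, P)

ea, da = keypair()          # Alice's lock / unlock exponents
eb, db = keypair()          # Bob's lock / unlock exponents
m = 123456789
c1 = enc(m, ea)             # Alice -> Bob: locked by Alice
c2 = enc(c1, eb)            # Bob -> Alice: locked by both
c3 = enc(c2, da)            # Alice removes her lock (commutativity!)
assert enc(c3, db) == m     # Bob removes his lock and reads m
```

No key is ever shared in advance: each party uses only its own exponents, which is exactly the property the CE scenario in the paper relies on.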
Finally, we investigate several protocols intended for key sharing over noiseless constant public channels (such as the Internet) and establish that they are mostly insecure because they have zero secrecy capacity. Only one protocol, based on matrix channel exchange, is able to provide security of key sharing, though only in terms of the required breaking complexity. Thus, this approach can be used only when the legitimate users have low security requirements.
At the end of the paper we formulate several fundamental problems of applied cryptography whose solutions could be very useful for practice.
Relevance. Currently, the quality of interfaces often plays a decisive role in how well a person solves problems using information services. To evaluate interfaces, the efficiency concept was previously introduced, consisting of the following indicators: 1) effectiveness, a conditional measure of the number of errors made when working with an information system; 2) efficiency, the speed of the user's work with the information system to obtain the desired result; 3) resource efficiency, the degree of the user's psycho-emotional stress when entering and processing data. Nevertheless, the previously obtained model requires not only the use of a mathematical apparatus to determine such indicators, but also knowledge of the atomic (i.e. individual, isolated) efficiencies of graphic elements, associated with the features of the user's interaction with them.
The purpose of the article is to improve the efficiency of information service interfaces, which requires calculating the atomic efficiencies of individual graphic elements.
The essence of the proposed solution is a visual system for statistical measurement of the atomic efficiency of six graphic elements (text field, drop-down list, classic button and checkbox, bidirectional counter, and slider) based on the results of users performing various tasks with them. For example, the efficiency of a text field compared to a drop-down list will be higher for short words, since typing them is faster than selecting from the list, but lower for long sentences, since in that case it is faster for a person to select the desired item than to type it correctly. The measurement principle of the proposed system is based on sequentially displaying graphic forms with different types (and, in some cases, numbers) of elements, presenting a task to the user, and measuring the correctness and duration of data entry. To reduce subjectivity, various techniques are used, such as timers of different durations. A special survey at the end of each group of element tests is used to assess the psycho-emotional load. The system is implemented as a Web site in PHP; individual Web pages and their interpretation are given in the article. Experiments with 50 users allowed the desired atomic efficiencies of all elements to be obtained.
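One possible aggregation of such trial logs into the three indicators is sketched below; the equal-weight averages and the stress normalization are our assumptions, since the paper's exact formulas are not reproduced here.

```python
from dataclasses import dataclass

@dataclass
class Trial:
    correct: bool   # was the entered value correct?
    seconds: float  # how long data entry took
    stress: int     # survey-based load estimate, 1 (low) .. 5 (high)

def atomic_efficiency(trials: list[Trial]) -> dict:
    """Aggregate per-element trial logs into the three indicators
    named in the abstract (illustrative equal-weight averages)."""
    n = len(trials)
    return {
        "effectiveness": sum(t.correct for t in trials) / n,
        "efficiency": n / sum(t.seconds for t in trials),  # tasks per second
        "resource_efficiency": 1 - (sum(t.stress for t in trials) / n - 1) / 4,
    }

print(atomic_efficiency([Trial(True, 3.2, 2), Trial(False, 5.0, 3)]))
```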
The scientific novelty of the solution lies in the possibility of obtaining estimates of the efficiency of interface elements in a completely formal way, taking into account only the features of user interaction with them, as well as the specifics of the data (size, type).
The theoretical significance lies in expanding the class of methods for assessing the efficiency of graphical interfaces through assessment of the elements that make them up.
The practical significance lies in the possibility of directly using the obtained atomic efficiency graphs to compare interfaces and optimize them.
The problem of reducing the dimensionality of initial data arrays to improve the efficiency of mobile application traffic processing is considered. The relevance of the study stems from the need to optimize the volume of transmitted and stored data when working under limited computing resources, as well as to increase the speed and quality of analytical operations. To solve this problem, multilayer autoencoders are used, capable of forming compact representations of the source data with minimal loss of informativeness. The approach is based on the idea of training neural network models that extract the most significant features from the source arrays and are able to restore them with a given level of accuracy.
Methods used. During the experiments, various architectures of multilayer autoencoders were used, differing in the number of layers and the dimensions of the hidden representations. The research was conducted on real data sets collected from mobile applications with a wide range of functionality. The analysis was carried out by varying the internal parameters of the networks and evaluating the results through an integral statistical indicator reflecting the degree of compression. This indicator reveals how much the spread of attributes changes when data pass through the autoencoder.
Results. To evaluate the filtering properties of multilayer autoencoders, an integral compression indicator is proposed that characterizes the change in the spread of mobile application attributes when they pass through an autoencoder of a given structure. The indicator is calculated as the ratio of the standard deviation of the attributes at the input to that at the output, which makes it possible to assess the degree of data compression and the degree of information preservation after processing. It is shown that an increase in the integral compression indicator corresponds to stronger compression of the initial data. It was found that filtering is practically independent of the type of application and lies within 10-20 % for three-layer autoencoders, whereas for five-layer autoencoders preference is given to encoders with a minimum dimension of the inner layer. The main novelty of the work lies in the development of an integral statistical indicator that not only reflects the degree of compression of mobile application data but also takes into account the preservation of the original information structure. Unlike existing approaches, this indicator allows a systematic comparison of various autoencoder architectures, taking into account not only the reduction in dimensionality but also the quality of recovery of the original information. This creates the basis for a more objective assessment of the effectiveness of multilayer autoencoders in specific application conditions.
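A minimal sketch of the indicator as described, the input/output standard deviation ratio (averaging over attributes is our assumption, as is the stand-in "autoencoder output" used for the demonstration):

```python
import numpy as np

def integral_compression_index(x_in: np.ndarray, x_out: np.ndarray) -> float:
    """Integral compression indicator per the abstract: the ratio of
    the standard deviation of the attributes at the autoencoder input
    to that at its output, averaged over attributes."""
    return float(np.mean(x_in.std(axis=0)) / np.mean(x_out.std(axis=0)))

# x_in:  (samples, attributes) matrix of raw mobile-app attributes
# x_out: the same samples after passing through the autoencoder
rng = np.random.default_rng(0)
x_in = rng.normal(size=(1000, 32))
x_out = 0.85 * x_in + 0.05 * rng.normal(size=x_in.shape)  # stand-in output
print(integral_compression_index(x_in, x_out))  # > 1 indicates compression
```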
Practical significance. The proposed methodology may be useful for developers and researchers working on optimizing systems for collecting, storing, and processing mobile application data. Under the limited computing resources typical of mobile devices and embedded systems, the use of multilayer autoencoders aimed at achieving a given balance between compression and preservation of information provides a significant reduction in the volume of transmitted data. The results of the study can be integrated into existing analytical platforms and systems for monitoring and classifying mobile applications.