                • Peking University core journal (source journal of A Guide to the Core Journals of China)
                • China science and technology core journal (statistical source journal for Chinese scientific and technical papers)
                • Indexed by the JST (Japan Science and Technology Agency) database (Japan)




                Articles just accepted have been peer-reviewed and accepted; they are not yet assigned to volumes/issues, but are citable by Digital Object Identifier (DOI).
                Overview of quantum computing simulation platforms
                WEI Lu, MA Zhong, LIU Qianyu
                2022, 39(11): 1-10.   doi: 10.19304/J.ISSN1000-7180.2021.0309

                Quantum computing simulation platforms running on classical computers provide quantum computing functionality in software, and are an effective way to advance quantum computing software, algorithms and hardware while real quantum computers remain immature. This paper classifies simulation platforms by user type: quantum cloud-service simulation platforms, locally run quantum software platforms, and cloud platforms backed by real quantum computers. Several typical platforms of each kind are analyzed; the characteristics and development trends of quantum computing simulation platforms are summarized, along with the problems still to be solved. Suggestions for selecting a quantum computing platform are also provided, as guidance for researchers interested in quantum computing and as help for developing quantum computing applications.

                Improved firefly algorithm optimizes twin support vector machine parameters
                GU Jiaxin, HE Xingshi, YANG Xinshe
                2022, 39(11): 11-18.   doi: 10.19304/J.ISSN1000-7180.2022.0230

                To address the original Firefly Algorithm's (FA) tendency to fall into local optima, its low solution accuracy, and the difficulty of selecting parameters for the twin support vector machine (TWSVM), a TWSVM model based on an improved firefly algorithm (DEFA-TWSVM) is proposed. First, the original firefly algorithm is improved to obtain the DEFA algorithm: a dynamic inertia weight is incorporated into the firefly position-update formula and the step-size control factor is adjusted adaptively, so that global and local optima can be found quickly; after each movement, a Differential Evolution (DE) strategy is applied to the firefly population to preserve iterative diversity. Simulations on benchmark test functions show that the improved algorithm has strong global optimization ability and rarely falls into local optima. Second, DEFA is used to optimize the TWSVM parameters. Finally, the classification accuracy of DEFA-TWSVM and other models is measured on UCI data sets. The comparison shows that DEFA determines the TWSVM parameters automatically during training, eliminating blind parameter selection, and that average classification accuracy improves by 2 to 5 percentage points over the other models.
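The position-update modifications described above (a dynamic inertia weight plus an adaptively shrinking step-size factor) can be sketched as follows. The abstract does not give the exact DEFA update formula, so the linear weight schedule and the constants here are illustrative assumptions, not the paper's values.

```python
import math
import random

def firefly_step(xi, xj, fi, fj, t, t_max, gamma=1.0, beta0=1.0, alpha0=0.5):
    """One position update of firefly i toward firefly j (minimisation:
    lower objective value = brighter firefly)."""
    if fj >= fi:                    # j is not brighter: no attraction move
        return list(xi)
    r2 = sum((a - b) ** 2 for a, b in zip(xi, xj))
    beta = beta0 * math.exp(-gamma * r2)   # attractiveness decays with distance
    w = 0.9 - 0.5 * t / t_max              # dynamic inertia weight (assumed linear decay)
    alpha = alpha0 * (1.0 - t / t_max)     # adaptive step-size control factor
    return [w * a + beta * (b - a) + alpha * (random.random() - 0.5)
            for a, b in zip(xi, xj)]
```

Early in the run the large inertia weight and step factor favour global exploration; both shrink with the iteration counter `t`, tightening the search around local optima.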

                Task offloading optimization algorithm based on Lyapunov optimization
                MA Lili, ZHANG Wendong, LI Zhiwei, MO Wanghao
                2022, 39(11): 19-26.   doi: 10.19304/J.ISSN1000-7180.2022.0242

                To optimize the average delay and energy consumption caused by dynamically arriving tasks and uncertain channel conditions during task offloading in edge computing, a task offloading optimization algorithm based on Lyapunov optimization is proposed. First, the Lyapunov method transforms the original problem into a deterministic optimization problem, and the task queue to be offloaded at the terminal device is transmitted to the cache base station according to priority; the queue length during offloading is constrained and modeled so that it remains controllable and the system stays stable. Second, combining the positive feedback and fast convergence of the genetic algorithm with the fast global search and high precision of the ant colony algorithm, an approximately optimal offloading path is found for the actual state of the cache base station under the priority-constrained queue, so that tasks can be offloaded efficiently to a suitable mobile edge computing server. Finally, a heuristic global task offloading algorithm is built from the queue constraints and the optimized offloading path. Simulations comparing the proposed algorithm with the existing EEDOA method show that, by reasonably constraining the queue length and choosing an approximately optimal path, it effectively reduces the energy consumed by task offloading.
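The drift-plus-penalty machinery that underlies this style of Lyapunov offloading control can be sketched minimally. The queue recursion is the standard one; the decision set (service-rate/energy pairs) and the trade-off parameter `V` below are hypothetical placeholders, not the paper's actual model.

```python
def queue_update(q, arrivals, served):
    """Standard Lyapunov queue recursion: Q(t+1) = max(Q(t) - b(t), 0) + a(t)."""
    return max(q - served, 0.0) + arrivals

def drift_plus_penalty_choice(q, options, V):
    """Pick the offloading option minimising V*energy - Q*service_rate,
    the usual drift-plus-penalty trade-off: V weights energy saving against
    queue (delay) growth. `options` is a list of (service_rate, energy) pairs."""
    return min(options, key=lambda o: V * o[1] - q * o[0])
```

When the queue backlog `q` is large, high-service options win even if they cost more energy; a larger `V` shifts the balance toward energy saving at the cost of delay.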

                Application research of CT image enhancement based on improved SRGAN network
                ZHANG Yue, ZHAO Zhe, ZHAO Guohua, WU Qingxia, LIN Yusong
                2022, 39(11): 27-36.   doi: 10.19304/J.ISSN1000-7180.2022.0055

                Intervertebral disc disease can currently be diagnosed from CT or MRI images. Compared with MRI, CT is cheaper and faster to produce, but suffers from low contrast, blurred disc lesion areas and unclear edges. To address these problems, an improved CT image enhancement method based on the SRGAN network is proposed. The method uses adaptive segmentation and fusion for preprocessing, removes the BN layers from the SRGAN generator, and introduces an attention mechanism so that the feature map produced by each residual block receives a corresponding weight. A boundary loss term is also added to make the reconstructed lesion area clearer and its edges more distinct. The method is evaluated on real head-and-neck CT and MRI images provided by Henan People's Hospital. Classical and recent image enhancement algorithms are compared to evaluate the enhanced CT images objectively, and two clinicians score them subjectively on a 5-point image-quality scale. The results show that the method significantly improves the SSIM, PSNR, information entropy, edge intensity and mean gradient of the CT images, making lesion areas clearer and edges more distinct, which aids reading and diagnosis and has strong practical value.

                Research on grasp detection method based on angle constraint and gaussian quality map
                WANG Wenjun, HAN Huiyan, GUO Lei, HAN Xie, LI Yufeng, WU Weizhou
                2022, 39(11): 37-44.   doi: 10.19304/J.ISSN1000-7180.2022.0171

                To address unstable selection of the optimal grasp point and inaccurate grasp angles in dynamic grasping environments, a grasp detection method based on angle constraints and a Gaussian quality map is proposed. First, grasp angles are divided into several classes by angle value, and the range of values within each class is constrained to remedy the pixel-level annotation loss caused by dense annotation; a morphological opening operation filters out the debris produced in the angle map by stacked annotations, yielding a grasp-angle map with stronger annotation consistency. Second, a Gaussian function is used to refine the grasp quality map, emphasizing the center of the grasp region and stabilizing selection of the optimal grasp point. Finally, attention mechanisms for grasp point and grasp direction are introduced on top of a fully convolutional network, giving an Attentive Generative Grasping Detection Network (AGGDN). On the Jacquard simulation dataset the method reaches a detection accuracy of 94.4% with a single-detection time of 11 ms, effectively improving detection of complex objects with good real-time performance. In real-world experiments grasping irregular targets in different poses, the grasp success rate reaches 88.8%; the method generalizes well to targets never seen in training and can be applied to robotic grasping tasks.
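The Gaussian refinement of the quality map amounts to weighting grasp quality by distance from the grasp-region centre, so the peak of the map coincides with the centre pixel. A minimal sketch (the value of `sigma` is an arbitrary choice here, not the paper's):

```python
import numpy as np

def gaussian_quality_map(h, w, cy, cx, sigma=2.0):
    """Gaussian-weighted grasp-quality map: quality is 1.0 at the grasp-region
    centre (cy, cx) and decays with squared distance from it."""
    ys, xs = np.mgrid[0:h, 0:w]
    return np.exp(-((ys - cy) ** 2 + (xs - cx) ** 2) / (2.0 * sigma ** 2))
```

Taking the argmax of such a map always returns the region centre, which is the stabilisation effect the abstract describes.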

                Visual gaze target tracking method based on spatiotemporal attention mechanism and joint attention
                WANG Zhijie, REN Jian, LIAO Lei
                2022, 39(11): 45-53.   doi: 10.19304/J.ISSN1000-7180.2022.0148

                Current visual tracking techniques tend to ignore the connection between a person and the scene image, and lack analysis and detection of joint attention, leading to unsatisfactory detection performance. To address these problems, this paper proposes a visual gaze-target tracking method based on a spatiotemporal attention mechanism and joint attention. For any given image, the method extracts a person's head features with a deep neural network, then adds extra interaction between the scene and the head to enhance image saliency; the enhanced attention module filters out much of the interfering information in depth and field of view. In addition, the attention of the other people in the scene is incorporated into the region of interest to improve the standard saliency model. With the spatiotemporal attention mechanism, the constraints on candidate targets, gaze direction and time-frame number are combined to identify shared gaze locations, and the saliency information improves the detection and localization of joint attention. Finally, the result is visualized as a heat map. Experiments show that the model effectively infers dynamic attention and joint attention in videos with good results.

                Research on path dynamic redundancy strategy in Time Sensitive Network
                HU Feng, LIU Zexiang, XU Danni
                2022, 39(11): 54-61.   doi: 10.19304/J.ISSN1000-7180.2021.1314

                To meet the high-reliability requirements of Time-Sensitive Networks, path dynamic redundancy is studied to address the inability of existing redundancy techniques to guarantee reliability under discontinuous multiple single-node failures. Assuming two disjoint redundant paths exist in the network for seamless transmission, a path dynamic redundancy strategy is designed on the basis of the IEEE 802.1CB and IEEE 802.1Qcc protocols. First, the dynamic variable-length sliding-window algorithm VariableVectoryRecovery improves the sequence recovery function used for frame replication; node failures are detected and reported to the centralized network configuration management node. Second, a multi-objective BackupReroute optimization model repairs the path with respect to delay and bandwidth, and is solved with a genetic algorithm to ensure network reliability while minimizing network load and dynamically rebuilding the redundant system. Theoretical analysis shows that the path dynamic redundancy strategy provides a high-reliability guarantee under discontinuous multi-node failures, and the BackupReroute model outperforms other routing algorithms overall. Finally, simulations of the VariableVectoryRecovery algorithm on OMNeT++ show that it effectively reduces the packet loss rate.

                A strong real-time traffic scheduling and automatic configuration method for TSN based on NeSTiNg
                WANG Bo, GAO Wenwei, XU Danni, HE Xinle
                2022, 39(11): 62-68.   doi: 10.19304/J.ISSN1000-7180.2022.0218

                Traffic scheduling is the core of Time-Sensitive Networking (TSN) technology. In large-scale industrial embedded systems, the design and simulation stages of TSN traffic scheduling require complex, demanding configuration of scheduling information such as links, flows and gating. Existing simulation-framework-based configuration methods are mostly manual, which limits the applicability of these frameworks to large-scale TSN networks. Based on TSN's strong real-time traffic transmission mechanism and the NeSTiNg (Network Simulator for Time-Sensitive Networking) framework, this paper proposes a strong real-time traffic scheduling and automatic configuration method suitable for large networks. First, unlike traditional manual configuration input, the method automatically configures flow offsets, gate states and routing tables. Second, it extracts the end-to-end delay of each flow, verifying the strong real-time performance of TSN transmission in large networks. Finally, the method's suitability for dynamic topologies is evaluated by measuring the time needed to compute a new path after a topology change. The results show that the method guarantees strong real-time transmission for large numbers of flows in a large network and, when the topology changes, requires less computing time than static routing.

                Text detection method based on text enhancement and multi-branch convolution
                TU Chengli, CHEN Zhangjin, QIAO Dong
                2022, 39(11): 69-77.   doi: 10.19304/J.ISSN1000-7180.2022.0239

                Text detection in natural scenes is a prerequisite for many industrial applications, yet the accuracy of common detection methods is limited. This paper therefore proposes a neural network method based on text enhancement and multi-branch convolution for detecting text in natural-scene images. First, a text-region reinforcement structure is added in front of the backbone network, increasing the feature response of text regions in the shallow layers so as to strengthen the network's learning of text features and suppress the expression of background features. Second, since scene text varies greatly in aspect ratio, a multi-branch convolution module is designed whose kernels approximate the shape of text to provide differentiated receptive fields, and a lightweight attention mechanism, whose parameter count is only six times the number of channels, supplements the network's learning of channel importance. Finally, the classification and detection-box terms of the loss function are improved: text pixels are weighted, and the smallest rectangle covering the prediction box and the label box is introduced to express their overlap, improving the effectiveness of training on text data sets. Ablation and comparison experiments show that each improvement is effective; the method achieves F-measures of 83.3% on ICDAR2015 and 82.4% on MSRA-TD500, and performs well on difficult samples such as blurred text, text with light spots, and dense text.
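The "smallest rectangle covering prediction box and label box" is the enclosing box used in generalised IoU. Assuming the paper's overlap term works along those lines (the abstract does not give the formula), a sketch:

```python
def giou(box_a, box_b):
    """Generalised IoU: IoU minus the fraction of the smallest enclosing
    rectangle not covered by the union of the two boxes.
    Boxes are (x1, y1, x2, y2) with x1 < x2, y1 < y2."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))   # intersection width
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))   # intersection height
    inter = iw * ih
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    union = area_a + area_b - inter
    # smallest rectangle enclosing both boxes
    cw = max(ax2, bx2) - min(ax1, bx1)
    ch = max(ay2, by2) - min(ay1, by1)
    c_area = cw * ch
    return inter / union - (c_area - union) / c_area
```

Unlike plain IoU, this stays informative (negative but finite) even when the boxes are disjoint, which gives the regression loss a gradient for non-overlapping predictions.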

                High-speed correlation tracking algorithm based on linear kernel function
                LIU Xinchang, FENG Lu, LI Jidong, MA Zhong, BI Ruixing
                2022, 39(11): 78-84.   doi: 10.19304/J.ISSN1000-7180.2021.1348

                Existing research on visual target tracking focuses mainly on improving tracking performance; the resulting computational load is generally too large to run in real time on resource-limited embedded platforms, which seriously restricts practical use of tracking algorithms. This paper analyzes existing trackers and proposes an improved high-speed kernelized correlation tracking algorithm. On one hand, a linear kernel function eliminates the heavy kernel computation in the correlation operation; on the other, the algorithm flow is reorganized so that several Fourier transforms are moved into the initialization stage, avoiding repeated Fourier transform computation during tracking. Together these measures reduce the main tracking loop from ten Fourier transforms (FFTs) to three. Quantitative experiments verify that the proposed algorithm runs 4-5 times faster than the original tracker while tracking performance is essentially unchanged. The method dramatically reduces the computational complexity of high-performance tracking algorithms and has good application prospects on embedded platforms with limited computing power.
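With a linear kernel, the kernel correlation collapses to an elementwise product in the Fourier domain, which is what allows the template spectrum and the filter coefficients to be computed once at initialization and reused every frame. A minimal 1-D sketch (shapes and normalisation simplified; not the paper's exact pipeline):

```python
import numpy as np

def linear_correlation_response(alpha_f, x_f, z):
    """Detection response of a correlation filter with a linear kernel.
    alpha_f: FFT of the filter coefficients (precomputed at initialisation).
    x_f:     FFT of the template (precomputed at initialisation).
    z:       the new search-window signal (only this needs an FFT per frame).
    """
    z_f = np.fft.fft(z)
    # linear kernel => kernel correlation is just conj(X) * Z in Fourier domain
    return np.real(np.fft.ifft(alpha_f * np.conj(x_f) * z_f))
```

Only one forward FFT and one inverse FFT are needed per detection, instead of recomputing the template and coefficient spectra every frame; the response peak gives the target's circular shift.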

                Research on fault diagnosis method of analog circuit based on improved VMD and SVM
                LIU Peilin, LIU Meirong, HE Yigang, ZHAO Rui
                2022, 39(11): 85-94.   doi: 10.19304/J.ISSN1000-7180.2022.0167

                As the integration and complexity of analog circuits grow, extracting characteristic information from their responses becomes increasingly difficult. To solve this fault-information extraction problem, an algorithm combining variational mode decomposition (VMD) and composite multiscale permutation entropy (CMPE) is proposed to construct fault feature vectors, and a support vector machine optimized by the sparrow search algorithm (SSA-SVM) completes fault classification. First, the original signal at the moment of failure is collected in PSPICE and decomposed by VMD into several IMF components carrying the characteristics of the original signal. Second, the CMPE values of the first three IMF components are computed and normalized to form the fault feature vector. Finally, the feature vectors are used to train and test the classifier. Simulations show that the scheme reaches a final diagnostic accuracy of 99.67%; compared with other schemes, it effectively improves fault-diagnosis accuracy and is a feasible approach to analog circuit fault diagnosis.
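CMPE builds on ordinary permutation entropy, which measures the diversity of ordinal patterns in a signal. A minimal single-scale sketch (the composite multi-scale coarse-graining that distinguishes CMPE is omitted for brevity):

```python
import math
from collections import Counter

def permutation_entropy(signal, m=3, delay=1):
    """Permutation entropy: Shannon entropy of length-m ordinal patterns,
    normalised by log(m!) so the result lies in [0, 1]."""
    patterns = Counter()
    n = len(signal)
    for i in range(n - (m - 1) * delay):
        window = [signal[i + j * delay] for j in range(m)]
        # ordinal pattern = ranking of the m samples in the window
        patterns[tuple(sorted(range(m), key=window.__getitem__))] += 1
    total = sum(patterns.values())
    probs = [c / total for c in patterns.values()]
    return -sum(p * math.log(p) for p in probs) / math.log(math.factorial(m))
```

A monotone signal has a single ordinal pattern and entropy 0, while noisy or complex signals spread probability over many patterns, which is what makes the measure useful as a fault feature.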

                An improved instrumentation amplifier with low input bias current
                ZENG Huili, XIAO Xiao, YOU Lu, WEI Hailong
                2022, 39(11): 95-101.   doi: 10.19304/J.ISSN1000-7180.2021.1195

                This article presents a low-input-bias-current instrumentation amplifier that uses super-β NPN input devices together with an improved bias-current self-compensation structure: the PJFET device in the bias-current compensation network is replaced by a lateral PNP device, preventing the gate leakage current of the PJFET from affecting the circuit at high temperature, and achieving an extremely low input bias current on the order of pA with a bias-current temperature drift coefficient on the order of pA/℃. Fabricated in a 40 V single-aluminum bipolar process compatible with metal-film resistors, the circuit offers high input impedance while ensuring low gain error, low offset voltage and a high common-mode rejection ratio, making it suitable for small-signal extraction and amplification in data acquisition systems. Test results show that at room temperature the improved amplifier's input bias current is 0.69 nA, its input offset current is 0.12 nA, and its input offset voltage is 21.54 μV; at gain G = 100, the gain error is 0.05% and the common-mode rejection ratio is 119.41 dB.

                A 50~64Gb/s DSP used in SERDES receiver
                LIU Min, ZHENG Xuqiang, LI Weijie, LIU Chaoyang, XU Hua, ZHANG Qiuyue, LIU Xinyu
                2022, 39(11): 102-109.   doi: 10.19304/J.ISSN1000-7180.2022.0261

                This paper introduces a dedicated digital signal processor (DSP) for a SerDes receiver based on 4-level pulse amplitude modulation (PAM4), aimed at recovering data in high-speed serial interfaces at ultra-high transmission rates of 50~64 Gb/s over channels with 20~30 dB of attenuation. The DSP's 32-channel parallel architecture enables the system to process 50~64 Gb/s data streams. A 16-tap feed-forward equalizer (FFE) solves the data recovery problem under 20~30 dB channel attenuation, and an adaptive algorithm based on least mean squares (LMS) is combined with the FFE so that it can adaptively find the best high-frequency compensation under different channel attenuations, eliminating the attenuation and inter-symbol interference (ISI) introduced by the transmission channel. To relieve the timing pressure in the feedback loop of a conventional parallel decision feedback equalizer (DFE), a DFE with an improved pre-decision structure is cascaded after the FFE to cancel the remaining ISI and decide the correct data, so that the FFE and DFE together equalize and recover the original data signal. After simulation verification, the DSP architecture was fabricated in a 28 nm CMOS process. Simulation and test results show a good equalization effect at a 50 Gb/s transmission rate with 20~30 dB channel attenuation. The final DSP chip area is 2.02 mm2, and the bit error rate is as low as 5.21e-9.
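The LMS-adapted FFE described above nudges each tap weight along the negative error gradient after every decision. A scalar-per-symbol sketch (the tap count and step size here are illustrative, not the chip's values):

```python
def lms_ffe_step(taps, x_window, target, mu=0.01):
    """One LMS update of FFE tap weights.
    taps:     current tap weights
    x_window: received samples aligned with the taps
    target:   decided/reference symbol
    mu:       LMS step size
    Returns (updated taps, equalizer output, error)."""
    y = sum(w * x for w, x in zip(taps, x_window))   # FFE output (dot product)
    e = target - y                                    # error against decision
    new_taps = [w + mu * e * x for w, x in zip(taps, x_window)]
    return new_taps, y, e
```

Run symbol by symbol, the taps converge toward the inverse of the channel's high-frequency loss, which is the adaptive compensation behaviour the abstract describes.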

                Research on OCR model compression scheme for AIoT chips
                GAN Zhiying, XU Dawen
                2022, 39(11): 110-117.   doi: 10.19304/J.ISSN1000-7180.2022.0241

                Deep-learning-based OCR models usually consist of a CNN plus an RNN/LSTM; they are computationally intensive and carry many weight parameters, so inference that meets performance requirements on edge devices demands substantial computing resources. General-purpose processors such as CPUs and GPUs cannot satisfy both processing-speed and power requirements and are very costly. With the spread of deep learning, neural processing units (NPUs) with high-throughput matrix computing power have become common in embedded and edge devices to handle the matrix operations of neural networks. Taking a CRNN-based OCR model as an example, this paper gives a solution for AIoT chips that reduces network parameter redundancy through two compression algorithms, pruning and quantization, cutting computational overhead while still obtaining a compressed model with high accuracy and robustness, so that the model can be deployed on NPUs. Experimental results show that quantizing the pruned and fine-tuned model reduces accuracy by no more than 3% at 78% sparsity and compresses the model from 15.87 MB to 3.13 MB. Deployed on the NPU, the compressed model achieves 28.87x and 6.1x lower latency than the CPU and GPU implementations, respectively.
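The two compression steps can be sketched generically: unstructured magnitude pruning followed by symmetric int8 quantization. The sparsity target, per-tensor scale and int8 scheme below are illustrative assumptions, not necessarily the paper's exact method.

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """Zero out the smallest-magnitude weights until the requested
    fraction of entries is (at least) zero."""
    flat = np.abs(weights).ravel()
    k = int(len(flat) * sparsity)
    if k == 0:
        return weights.copy()
    threshold = np.partition(flat, k - 1)[k - 1]   # k-th smallest magnitude
    return np.where(np.abs(weights) <= threshold, 0.0, weights)

def quantize_int8(weights):
    """Symmetric linear quantization to int8 with a per-tensor scale."""
    scale = max(float(np.abs(weights).max()) / 127.0, 1e-12)
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale
```

Pruning is usually followed by fine-tuning (as in the abstract) to recover accuracy before quantization; together the two steps give both a smaller model file and integer arithmetic the NPU can execute efficiently.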

                A configuration circuit design for Flash-based FPGA
                CAO Zhengzhou, LIU Guozhu, SHAN Yueer, SHEN Guangzhen, TU Bo, XU Yuting
                2022, 39(11): 118-128.   doi: 10.19304/J.ISSN1000-7180.2022.0285

                To provide stable erase, program and read operating voltages for the flash switch cells in a flash-based FPGA, a configuration circuit was designed in a 0.11 μm 2P8M flash process. According to the operating conditions of the flash cell and the characteristics of flash-based FPGAs, the configuration circuit comprises a hierarchical word-line circuit, a bit-line circuit with verify function, a low-ripple charge pump, multi-level level-shifting circuits, a flexible substrate-voltage circuit and configuration control logic, which form the basis for implementing the configuration algorithm flow. It supplies precise, stable operating voltages to the flash cells during FPGA configuration, ensures consistent threshold-voltage distributions after erase and program, and lets the flash-based FPGA reach its full performance. Simulation results show that the word-line drive capacity is 1.2 mA with an output voltage of -10.5 V, an error under ±0.1 V and a settling time of 11.2 μs; the bit-line drive capacity is 1.2 mA with an output voltage of 8.8 V, an error under ±0.1 V and a settling time of 7.5 μs. In programming, the word line drives 1.2 mA at 9.8 V with an error under ±0.1 V and a settling time of 2.1 μs; the bit line drives 4.4 mA at -8.0 V with an error under ±0.1 V and a settling time of 2.3 μs. The design meets the flash cell's operating conditions and finally realizes configuration of a 3.5-million-gate flash-based FPGA with a 26 836 992-bit (2 912 BL × 9 216 WL) bitstream.