Most Downloaded Articles


Most Downloaded in Recent Year
Novel linear search for support vector machine parameter selection
Hong-xia Pang, Wen-de Dong, Zhi-hai Xu, Hua-jun Feng, Qi Li, Yue-ting Chen
Front. Inform. Technol. Electron. Eng.    2011, 12 (11): 885-896.   DOI: 10.1631/jzus.C1100006
Abstract   PDF (959KB) ( 1965 )  
Selecting the optimal parameters for a support vector machine (SVM) has long been a hot research topic. Aiming at support vector classification/regression (SVC/SVR) with the radial basis function (RBF) kernel, we summarize a rough line rule for the penalty parameter and the kernel width, and propose a novel linear search method to obtain these two optimal parameters. We use a direct-setting method with thresholds to set the epsilon parameter of SVR. The proposed method directly locates the right search field, which greatly saves computing time and achieves stable, high accuracy. The method is competitive for both SVC and SVR: it is easy to use and, since it has no parameters that must be set by the user, it can be applied to a new data set without any adjustment.
Cited: WebOfScience(3)
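The abstract does not spell out the line rule itself; as a rough illustration, a linear search along a presumed line in (log2 C, log2 gamma) space might look like the sketch below, where the line, the search range, and the toy scoring function (standing in for cross-validation accuracy) are all hypothetical.

```python
import numpy as np

def linear_search(score_fn, line, t_range=(-10.0, 10.0), step=0.5):
    """1-D linear search along a line in (log2 C, log2 gamma) space.

    line(t) maps the scalar search parameter t to a (C, gamma) pair;
    score_fn(C, gamma) returns a validation score (higher is better).
    Both the line and the range are illustrative, not the paper's rule.
    """
    best_t, best_score = None, -np.inf
    for t in np.arange(t_range[0], t_range[1] + 1e-9, step):
        s = score_fn(*line(t))
        if s > best_score:
            best_t, best_score = t, s
    return line(best_t), best_score

# Hypothetical line rule: log2(gamma) = -log2(C) - 2.
line = lambda t: (2.0 ** t, 2.0 ** (-t - 2.0))

# Toy unimodal objective standing in for cross-validation accuracy,
# peaking at C = 2^3, gamma = 2^-5.
score = lambda C, g: -((np.log2(C) - 3.0) ** 2 + (np.log2(g) + 5.0) ** 2)

(C_best, g_best), _ = linear_search(score, line)
```

Restricting the search to a line is what distinguishes this from an exhaustive grid search over both parameters.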
Proactive worm propagation modeling and analysis in unstructured peer-to-peer networks
Xiao-song ZHANG, Ting CHEN, Jiong ZHENG, Hua LI
Front. Inform. Technol. Electron. Eng.    2010, 11 (2): 119-129.   DOI: 10.1631/jzus.C0910488
Abstract   PDF (423KB) ( 2426 )  
It is universally acknowledged by network security experts that proactive peer-to-peer (P2P) worms may soon engender serious threats to the Internet infrastructure. These latent threats stimulate activities of modeling and analysis of proactive P2P worm propagation. Based on the classical two-factor model, in this paper, we propose a novel proactive worm propagation model in unstructured P2P networks (called the four-factor model) by considering four factors: (1) network topology, (2) countermeasures taken by Internet service providers (ISPs) and users, (3) configuration diversity of nodes in the P2P network, and (4) attack and defense strategies. Simulations and experiments show that proactive P2P worms can be slowed down in two ways: improving the configuration diversity of the P2P network, and using powerful rules to protect the most connected nodes from being compromised. The four-factor model provides a better description and prediction of proactive P2P worm propagation.
Cited: WebOfScience(8)
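As a toy illustration of why configuration diversity slows a proactive worm (this is not the four-factor model itself), a mean-field discrete-time simulation can be sketched; the contact rate and patch rate below are made-up parameters.

```python
def simulate_worm(n=10000, vulnerable_frac=1.0, contacts=5.0,
                  patch_rate=0.01, steps=200):
    """Toy mean-field worm spread in a P2P overlay (illustrative only).

    vulnerable_frac models configuration diversity (factor 3 of the
    abstract): only this fraction of nodes runs an exploitable
    configuration. patch_rate stands in for ISP/user countermeasures
    (factor 2). Topology and attack/defense strategies are not modeled.
    """
    vulnerable = n * vulnerable_frac
    infected, removed = 1.0, 0.0
    history = []
    for _ in range(steps):
        susceptible = max(vulnerable - infected - removed, 0.0)
        new_infections = min(infected * contacts * susceptible / n, susceptible)
        patched = infected * patch_rate
        infected += new_infections - patched
        removed += patched
        history.append(infected)
    return history

peak_uniform = max(simulate_worm(vulnerable_frac=1.0))
peak_diverse = max(simulate_worm(vulnerable_frac=0.5))
```

Halving the fraction of exploitable configurations caps the outbreak at half the network, which is the qualitative effect the paper's simulations report.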
Online detection of bursty events and their evolution in news streams
Wei Chen, Chun Chen, Li-jun Zhang, Can Wang, Jia-jun Bu
Front. Inform. Technol. Electron. Eng.    2010, 11 (5): 340-355.   DOI: 10.1631/jzus.C0910245
Abstract   PDF (250KB) ( 1319 )  
Online monitoring of temporally-sequenced news streams for interesting patterns and trends has gained popularity in the last decade. In this paper, we study a particular news stream monitoring task: timely detection of bursty events which have happened recently and discovery of their evolutionary patterns along the timeline. Here, a news stream is represented as feature streams of tens of thousands of features (i.e., keywords; each news story consists of a set of keywords). A bursty event is therefore composed of a group of bursty features, which show bursty rises in frequency as the related event emerges. In this paper, we give a formal definition of the above problem and present a solution with the following steps: (1) applying an online multi-resolution burst detection method to identify bursty features with different bursty durations within a recent time period; (2) clustering bursty features to form bursty events and associating each event with a power value which reflects its bursty level; (3) applying an information retrieval method based on cosine similarity to discover the event's evolution (i.e., highly related bursty events in history) along the timeline. We extensively evaluate the proposed methods on the Reuters Corpus Volume 1. Experimental results show that our methods can detect bursty events in a timely way and effectively discover their evolution. The power values used in our model not only measure an event's bursty level, or relative importance, well at a certain time point, but also show the relative strengths of events along the same evolution.
Cited: WebOfScience(5)
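A minimal single-resolution stand-in for steps (1) and (3) above might look like the following sketch; the z-score burst test and its thresholds are illustrative substitutes for the paper's multi-resolution detector, while the cosine measure is the one named in the abstract.

```python
import math

def bursty_features(freq_series, window=3, z_thresh=3.0):
    """Flag features whose recent frequency rises sharply above history.

    freq_series maps a feature (keyword) to its per-time-step counts.
    A feature is bursty when its mean count over the last `window` steps
    exceeds the historical mean by z_thresh standard deviations; the
    z-score doubles as the feature's power value. This is a plain
    single-resolution stand-in for the paper's multi-resolution method.
    """
    bursty = {}
    for feat, counts in freq_series.items():
        hist, recent = counts[:-window], counts[-window:]
        mu = sum(hist) / len(hist)
        sd = math.sqrt(sum((c - mu) ** 2 for c in hist) / len(hist)) or 1.0
        power = (sum(recent) / window - mu) / sd
        if power >= z_thresh:
            bursty[feat] = power
    return bursty

def cosine(event_a, event_b):
    """Cosine similarity between two events (feature -> power vectors),
    used to link a new bursty event to related events in history."""
    dot = sum(w * event_b.get(f, 0.0) for f, w in event_a.items())
    na = math.sqrt(sum(w * w for w in event_a.values()))
    nb = math.sqrt(sum(w * w for w in event_b.values()))
    return dot / (na * nb) if na and nb else 0.0

streams = {"earthquake": [1, 1, 1, 1, 1, 9, 10, 11],
           "the":        [50, 51, 49, 50, 50, 50, 51, 49]}
events = bursty_features(streams)
```

A frequent but steady word such as "the" is not flagged, while a rare word whose frequency jumps is, which is the behaviour the burst-detection step relies on.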
A sparse matrix model-based optical proximity correction algorithm with model-based mapping between segments and control sites
Bin Lin, Xiao-lang Yan, Zheng Shi, Yi-wei Yang
Front. Inform. Technol. Electron. Eng.    2011, 12 (5): 436-442.   DOI: 10.1631/jzus.C1000219
Abstract   PDF (469KB) ( 2095 )  
Optical proximity correction (OPC) is a key step in modern integrated circuit (IC) manufacturing. The quality of model-based OPC (MB-OPC) is directly determined by segment offsets after OPC processing. However, in conventional MB-OPC, the intensity of a control site is adjusted only by the movement of its corresponding segment; this scheme is no longer accurate enough as the lithography process advances. On the other hand, matrix MB-OPC is too time-consuming to become practical. In this paper, we propose a new sparse matrix MB-OPC algorithm with model-based mapping between segments and control sites. We put forward the concept of ‘sensitive area’. When the Jacobian matrix used in the matrix MB-OPC is evaluated, only the elements that correspond to the segments in the sensitive area of every control site need to be calculated, while the others can be set to 0. The new algorithm can effectively improve the sparsity of the Jacobian matrix, and hence reduce the computations. Both theoretical analysis and experiments show that the sparse matrix MB-OPC with model-based mapping is more accurate than conventional MB-OPC, and much faster than matrix MB-OPC while maintaining high accuracy.
Cited: WebOfScience(4)
Efficient implementation of a cubic-convolution based image scaling engine
Xiang Wang, Yong Ding, Ming-yu Liu, Xiao-lang Yan
Front. Inform. Technol. Electron. Eng.    2011, 12 (9): 743-753.   DOI: 10.1631/jzus.C1100040
Abstract   PDF (844KB) ( 2578 )  
In video applications, real-time image scaling techniques are often required. In this paper, an efficient implementation of a scaling engine based on 4×4 cubic convolution is proposed. The cubic convolution has a better performance than other traditional interpolation kernels and can also be realized on hardware. The engine is designed to perform arbitrary scaling ratios with an image resolution smaller than 2560×1920 pixels and can scale up or down, in horizontal or vertical direction. It is composed of four functional units and five line buffers, which makes it more competitive than conventional architectures. A strict fixed-point strategy is applied to minimize the quantization errors of hardware realization. Experimental results show that the engine provides a better image quality and a comparatively lower hardware cost than reference implementations.
Cited: WebOfScience(1)
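The 4-tap cubic convolution the engine is built on is the standard Keys kernel; a software sketch of the interpolation (without the paper's fixed-point quantization strategy or hardware line buffers) is shown below.

```python
import numpy as np

def keys_kernel(x, a=-0.5):
    """Keys cubic convolution kernel (a = -0.5 is the common choice)."""
    x = np.abs(x)
    return np.where(
        x < 1, (a + 2) * x**3 - (a + 3) * x**2 + 1,
        np.where(x < 2, a * (x**3 - 5 * x**2 + 8 * x - 4), 0.0))

def resample_1d(signal, ratio):
    """Resample a 1-D signal by `ratio` using 4 neighbouring taps,
    the software equivalent of one pass of a separable 4x4 scaler."""
    n_out = int(len(signal) * ratio)
    out = np.empty(n_out)
    for i in range(n_out):
        src = i / ratio                      # source-space coordinate
        k = int(np.floor(src))
        acc = 0.0
        for j in range(k - 1, k + 3):        # the 4 taps around src
            s = signal[min(max(j, 0), len(signal) - 1)]  # clamp at borders
            acc += s * keys_kernel(src - j)
        out[i] = acc
    return out

signal = np.arange(10, dtype=float) * 2.0    # a linear test signal
upscaled = resample_1d(signal, 2.0)
```

Because the kernel weights sum to one and the kernel reproduces linear signals exactly, upscaling a ramp yields the same ramp at finer spacing, a quick sanity check for any scaler implementation.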
Automatic actor-based program partitioning
Omid BUSHEHRIAN
Front. Inform. Technol. Electron. Eng.    2010, 11 (1): 45-55.   DOI: 10.1631/jzus.C0910096
Abstract   PDF (303KB) ( 1956 )  
Software reverse engineering techniques are applied most often to reconstruct the architecture of a program with respect to quality constraints, or non-functional requirements such as maintainability or reusability. In this paper, AOPR, a novel actor-oriented program reverse engineering approach, is proposed to reconstruct an object-oriented program architecture based on a high performance model such as an actor model. Reconstructing the program architecture based on this model results in the concurrent execution of the program invocations and consequently increases the overall performance of the program provided enough processors are available. The proposed reverse engineering approach applies a hill climbing clustering algorithm to find actors.
Cited: WebOfScience(1)
Robust lossless data hiding scheme
Xian-ting ZENG, Xue-zeng PAN, Ling-di PING, Zhuo LI
Front. Inform. Technol. Electron. Eng.    2010, 11 (2): 101-110.   DOI: 10.1631/jzus.C0910177
Abstract   PDF (549KB) ( 2660 )  
This paper presents a robust lossless data hiding scheme. The original cover image can be recovered without any distortion after data extraction if the stego-image remains intact; conversely, the hidden data can still be extracted correctly if the stego-image undergoes JPEG compression to some extent. A cover image is divided into a number of non-overlapping blocks, and the arithmetic difference of each block is calculated. By shifting the arithmetic difference value, we can embed bits into the blocks. The shift quantity and shifting rule are fixed for all blocks, and reversibility is achieved. Furthermore, because the bit-0 and bit-1 zones are separated, and because of the particular distribution of the arithmetic differences, minor changes to the stego-image caused by non-malicious attacks such as JPEG compression will not make the bit-0 and bit-1 zones overlap, and robustness is achieved. The new embedding mechanism can enhance embedding capacity, and the addition of a threshold makes the algorithm more robust. Experimental results showed that, compared with previous schemes, the performance of the proposed scheme is significantly improved.
Cited: WebOfScience(1)
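The zone-shifting idea can be sketched on the per-block arithmetic differences alone (the full image pipeline, block layout, and the paper's exact shift rule are omitted; the threshold T and shift S here are illustrative).

```python
def embed(diffs, bits, T=10, S=None):
    """Embed bits by shifting per-block arithmetic differences.

    Differences are assumed to lie in the bit-0 zone [-T, T] (typical
    for natural-image blocks). A bit 1 shifts the difference by S into
    a disjoint bit-1 zone; since the two zones do not overlap and the
    shift is trivially invertible, reversibility is achieved.
    Illustrative sketch, not the paper's exact embedding rule.
    """
    S = S or (2 * T + 1)
    assert all(-T <= d <= T for d in diffs)
    return [d + S if b else d for d, b in zip(diffs, bits)]

def extract(stego_diffs, T=10, S=None):
    """Recover the bits and the original differences."""
    S = S or (2 * T + 1)
    bits = [1 if d > T else 0 for d in stego_diffs]
    orig = [d - S if b else d for d, b in zip(stego_diffs, bits)]
    return bits, orig

diffs = [0, 5, -7, 10, -10]          # one arithmetic difference per block
bits = [1, 0, 1, 1, 0]
stego = embed(diffs, bits)
got_bits, got_diffs = extract(stego)
```

Robustness against mild JPEG compression comes from the gap between the zones: a difference perturbed by a few units still falls on the correct side of the threshold.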
Review of the current and future technologies for video compression
Lu YU, Jian-peng WANG
Front. Inform. Technol. Electron. Eng.    2010, 11 (1): 1-13.   DOI: 10.1631/jzus.C0910684
Abstract   PDF (339KB) ( 2739 )  
Many important developments in video compression technologies have occurred during the past two decades. The block-based discrete cosine transform with motion compensation hybrid coding scheme has been widely employed by most available video coding standards, notably the ITU-T H.26x and ISO/IEC MPEG-x families and the video part of the China audio video coding standard (AVS). The objective of this paper is to provide a review of the developments of the four basic building blocks of the hybrid coding scheme, namely predictive coding, transform coding, quantization, and entropy coding, and to give theoretical analyses and summaries of the technological advancements. We further analyze the development trends and perspectives of video compression, highlighting problems and research directions.
Cited: WebOfScience(3)
Modeling of hydraulic turbine systems based on a Bayesian-Gaussian neural network driven by sliding window data
Yi-jian LIU, Yan-jun FANG, Xue-mei ZHU
Front. Inform. Technol. Electron. Eng.    2010, 11 (1): 56-62.   DOI: 10.1631/jzus.C0910176
Abstract   PDF (337KB) ( 1947 )  
In this paper, a novel Bayesian-Gaussian neural network (BGNN) is proposed and applied to on-line modeling of a hydraulic turbine system (HTS). The new BGNN takes account of the complex nonlinear characteristics of HTS. Two redefined training procedures of the BGNN include the off-line training of the threshold matrix parameters, optimized by swarm optimization algorithms, and the on-line BGNN predictive application driven by the sliding window data method. The characteristics models of an HTS are identified using the new BGNN method and simulation results are presented which show the effectiveness of the BGNN in addressing modeling problems of HTS.
Cited: WebOfScience(2)
Computer vision based eyewear selector
Oscar DÉNIZ, Modesto CASTRILLÓN, Javier LORENZO, Luis ANTÓN, Mario HERNANDEZ, Gloria BUENO
Front. Inform. Technol. Electron. Eng.    2010, 11 (2): 79-91.   DOI: 10.1631/jzus.C0910377
Abstract   PDF (1850KB) ( 3355 )  
The widespread availability of portable computing power and inexpensive digital cameras is opening up new possibilities for retailers in some markets. One example is in optical shops, where a number of systems exist that facilitate eyeglasses selection. These systems are now more necessary as the market is saturated with an increasingly complex array of lenses, frames, coatings, tints, photochromic and polarizing treatments, etc. Research challenges encompass Computer Vision, Multimedia and Human-Computer Interaction. Cost factors are also of importance for widespread product acceptance. This paper describes a low-cost system that allows the user to visualize different glasses models in live video. The user can also move the glasses to adjust their position on the face. The system, which runs at 9.5 frames/s on general-purpose hardware, has a homeostatic module that keeps image parameters controlled. This is achieved by using a camera with motorized zoom, iris, white balance, etc. This feature can be especially useful in environments with changing illumination and shadows, such as an optical shop. The system also includes a face and eye detection module and a glasses management module.
Cited: WebOfScience(3)
Design of a novel low power 8-transistor 1-bit full adder cell
Yi Wei, Ji-zhong Shen
Front. Inform. Technol. Electron. Eng.    2011, 12 (7): 604-607.   DOI: 10.1631/jzus.C1000372
Abstract   PDF (124KB) ( 5319 )  
Addition is a fundamental arithmetic operation that is used extensively in many very large-scale integration (VLSI) systems, such as application-specific digital signal processing (DSP) and microprocessors. The adder determines the overall performance of the circuits in most of these systems. In this paper we propose a novel 1-bit full adder cell which uses only eight transistors. In this design, three multiplexers and one inverter are applied to minimize the transistor count and reduce power consumption. The power dissipation, propagation delay, and power-delay product obtained with the new design are analyzed and compared with those of other designs using HSPICE simulations. The results show that the proposed adder has both lower power consumption and a lower power-delay product (PDP) value. The low power and low transistor count make the novel 8T full adder cell a candidate for power-efficient applications.
Cited: WebOfScience(2)
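One common multiplexer-plus-inverter decomposition of a full adder is sketched below at the behavioural level; the paper's 8-transistor cell realises an equivalent function, though not necessarily this exact topology.

```python
def mux(sel, in0, in1):
    """2-to-1 multiplexer: the basic building block of mux-based adders."""
    return in1 if sel else in0

def full_adder(a, b, cin):
    """Mux-based 1-bit full adder logic (behavioural sketch only).

    x = a XOR b is built from one mux and the inverter (1 - a);
    sum = x XOR cin reuses the same trick; the carry needs no extra
    gates because cout = b when a == b, and cout = cin otherwise.
    """
    x = mux(b, a, 1 - a)        # x = a XOR b
    s = mux(cin, x, 1 - x)      # sum = x XOR cin
    cout = mux(x, b, cin)       # carry select
    return s, cout
```

Checking all eight input combinations against a + b + cin confirms the decomposition is functionally a full adder.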
Joint bandwidth allocation and power control with interference constraints in multi-hop cognitive radio networks
Guang-xi ZHU, Xue-bing PEI, Dai-ming QU, Jian LIU, Qing-ping WANG, Gang SU
Front. Inform. Technol. Electron. Eng.    2010, 11 (2): 139-150.   DOI: 10.1631/jzus.C0910070
Abstract   PDF (470KB) ( 2055 )  
We investigate bandwidth allocation and power control schemes in orthogonal frequency division multiplexing (OFDM) based multi-hop cognitive radio networks, where the color-sensitive graph coloring (CSGC) model is viewed as an efficient solution to the spectrum assignment problem. We extend the model by taking into account a power control strategy to avoid interference among secondary users and to adapt to dynamic topology. We formulate an optimization problem encompassing channel allocation and power control, with interference constrained below a tolerable limit. The optimization objective, with two different optimization strategies, focuses on routes rather than links as in traditional approaches. A heuristic solution to this nondeterministic polynomial (NP)-hard problem is presented. It iteratively allocates channels according to the lowest transmission power that guarantees link connection, reusing channels as much as possible; the transmission power of each link is then maximized to improve channel capacity by gradually raising the power level from the lowest transmission power until co-channel links can no longer satisfy the interference constraints. Numerical results show that our proposed strategies outperform existing spectrum assignment algorithms in both total network bandwidth and the minimum route bandwidth over all routes, while saving transmission power.
Removal of baseline wander from ECG signal based on a statistical weighted moving average filter
Xiao Hu, Zhong Xiao, Ni Zhang
Front. Inform. Technol. Electron. Eng.    2011, 12 (5): 397-403.   DOI: 10.1631/jzus.C1010311
Abstract   PDF (276KB) ( 3024 )  
Baseline wander is a common noise in electrocardiogram (ECG) results. To effectively correct the baseline and to preserve more of the underlying components of an ECG signal, we propose a simple and novel filtering method based on a statistical weighted moving average filter. Suppose a and b are the minimum and maximum of all sample values within a moving window, respectively. First, the whole region [a, b] is divided into M equal sub-regions without overlap. Second, the three sub-regions with the largest sample distribution probabilities are chosen (unless M<3) and merged into one region, denoted [a0, b0] for simplicity. Third, for every sample point in the moving window, its weight is set to 1 if its value falls in [a0, b0]; otherwise, its weight is 0. Last, all sample points with weight 1 are averaged to estimate the baseline. The algorithm was tested on a simulated ECG signal and a real ECG signal from www.physionet.org. The results showed that the proposed filter extracts baseline wander from an ECG signal more effectively, and affects the morphological features of the ECG signal considerably less, than both the traditional moving average filter and the wavelet packet transform.
Cited: WebOfScience(5)
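The four steps above translate almost directly into code; the sketch below estimates the baseline for one moving window, with one simplification: rather than merging the three chosen sub-regions into a single interval [a0, b0], it keeps every sample falling in any of the three most-populated sub-regions.

```python
def window_baseline(samples, M=5):
    """Statistical weighted moving average for one moving window.

    Steps: split [a, b] into M equal sub-regions, pick the three with
    the most samples, give weight 1 to samples inside them and weight 0
    to the rest, then average the weight-1 samples.
    """
    a, b = min(samples), max(samples)
    if a == b:                                   # flat window
        return a
    width = (b - a) / M
    bin_of = lambda v: min(int((v - a) / width), M - 1)
    counts = [0] * M
    for v in samples:
        counts[bin_of(v)] += 1
    top = set(sorted(range(M), key=counts.__getitem__, reverse=True)[:3])
    kept = [v for v in samples if bin_of(v) in top]   # weight-1 samples
    return sum(kept) / len(kept)

# A window of samples clustered around the baseline plus one QRS-like
# spike: the spike lands in a sparsely populated sub-region and gets
# weight 0, so it does not drag the estimate upward.
spiky = [10.0, 30.0, 50.0] * 30 + [100.0]
est = window_baseline(spiky)          # 30.0: spike carries weight 0
naive = sum(spiky) / len(spiky)       # plain moving average: spike leaks in
```

Sliding this window along the signal traces the baseline wander, which is then subtracted from the ECG.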
Dr. Hadoop: an infinite scalable metadata management for Hadoop—How the baby elephant becomes immortal
Dipayan DEV, Ripon PATGIRI
Front. Inform. Technol. Electron. Eng.    2016, 17 (1): 15-31.   DOI: 10.1631/FITEE.1500015
Abstract   HTML PDF (1278KB) ( 1168 )  

In this exabyte-scale era, data increase at an exponential rate. This in turn generates a massive amount of metadata in the file system. Hadoop is the most widely used framework for dealing with big data. Due to this huge growth of metadata, however, the efficiency of Hadoop has been questioned numerous times by many researchers. Therefore, it is essential to create an efficient and scalable metadata management system for Hadoop. Hash-based mapping and subtree partitioning are suitable distributed metadata management schemes. Subtree partitioning does not uniformly distribute the workload among the metadata servers, and metadata needs to be migrated to keep the load roughly balanced. Hash-based mapping suffers from a constraint on the locality of metadata, though it uniformly distributes the load among NameNodes, which are the metadata servers of Hadoop. In this paper, we present a circular metadata management mechanism named dynamic circular metadata splitting (DCMS). DCMS preserves metadata locality using consistent hashing and locality-preserving hashing, keeps replicated metadata for excellent reliability, and dynamically distributes metadata among the NameNodes to keep the load balanced. The NameNode is the centralized heart of Hadoop: it keeps the directory tree of all files, and its failure constitutes a single point of failure (SPOF). DCMS removes Hadoop's SPOF and provides efficient and scalable metadata management. The new framework is named 'Dr. Hadoop' after the names of the authors.

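The consistent-hashing core of such a circular scheme can be sketched in a few lines; DCMS layers locality-preserving hashing and replication on top, which this minimal ring omits.

```python
import bisect
import hashlib

class HashRing:
    """Minimal consistent-hashing ring in the spirit of DCMS: adding or
    removing a NameNode remaps only the keys in its own arc of the
    circle, so the rest of the metadata never migrates."""

    def __init__(self, nodes, vnodes=64):
        self.ring = []                       # sorted (hash, node) points
        for n in nodes:
            for v in range(vnodes):          # virtual nodes smooth the load
                self.ring.append((self._h(f"{n}#{v}"), n))
        self.ring.sort()

    @staticmethod
    def _h(key):
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def lookup(self, key):
        """Map a metadata key (e.g., a path) to its NameNode: the first
        ring point clockwise from the key's hash."""
        i = bisect.bisect(self.ring, (self._h(key), ""))
        return self.ring[i % len(self.ring)][1]

ring = HashRing(["nn1", "nn2", "nn3"])
shrunk = HashRing(["nn1", "nn2"])            # nn3 decommissioned
moved = [k for k in (f"/user/file{i}" for i in range(300))
         if ring.lookup(k) != "nn3" and shrunk.lookup(k) != ring.lookup(k)]
```

Every key that was not hosted on the removed NameNode resolves to the same server as before, which is the property that keeps metadata migration minimal.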
Image driven shape deformation using styles
Guang-hua TAN, Wei CHEN, Li-gang LIU
Front. Inform. Technol. Electron. Eng.    2010, 11 (1): 27-35.   DOI: 10.1631/jzus.C0910089
Abstract   PDF (452KB) ( 1654 )  
In this paper, we propose an image driven shape deformation approach for stylizing a 3D mesh using styles learned from existing 2D illustrations. Our approach models a 2D illustration as a planar mesh and represents the shape styles with four components: the object contour, the context curves, user-specified features and local shape details. After the correspondence between the input model and the 2D illustration is established, shape stylization is formulated as a style-constrained differential mesh editing problem. A distinguishing feature of our approach is that it allows users to directly transfer styles from hand-drawn 2D illustrations with individual perception and cognition, which are difficult to identify and create with 3D modeling and editing approaches. We present a sequence of challenging examples including unrealistic and exaggerated paintings to illustrate the effectiveness of our approach.
Cited: WebOfScience(2)
Design of ternary D flip-flop with pre-set and pre-reset functions based on resonant tunneling diode literal circuit
Mi Lin, Wei-feng Lv, Ling-ling Sun
Front. Inform. Technol. Electron. Eng.    2011, 12 (6): 507-514.   DOI: 10.1631/jzus.C1000222
Abstract   PDF (177KB) ( 2418 )  
The problems existing in the binary logic system and the advantages of multiple-valued logic (MVL) are introduced. A literal circuit with a three-track-output structure is created based on resonant tunneling diodes (RTDs); it has the most basic memory function. A ternary RTD D flip-flop with pre-set and pre-reset functions is also designed, the key module of which is the RTD literal circuit. Two types of output structure of the ternary RTD D flip-flop are optional: one is three-track and the other is single-track; the two structures can be converted conveniently by merely adding tri-valued RTD NAND, NOR, and inverter units after the three-track output. The design is verified by simulation. The ternary flip-flop is built around an RTD literal circuit; it is not only easy to understand and implement, but also provides a solution for the algebraic interface between multiple-valued logic and binary logic. The method can also be used to design other types of multiple-valued RTD flip-flop circuits.
Cited: WebOfScience(2)
Improving the efficiency of magnetic coupling energy transfer by etching fractal patterns in the shielding metals
Qing-feng LI, Shao-bo CHEN, Wei-ming WANG, Hong-wei HAO, Lu-ming LI
Front. Inform. Technol. Electron. Eng.    2016, 17 (1): 74-82.   DOI: 10.1631/FITEE.1500114
Abstract   HTML PDF (905KB) ( 745 )  

Thin metal sheets are often located in the coupling paths of magnetic coupling energy transfer (MCET) systems. Eddy currents in the metals reduce the energy transfer efficiency and can even present safety risks. This paper describes the use of etched fractal patterns in the metals to suppress the eddy currents and improve the efficiency. Simulation and experimental results show that this approach is very effective. The fractal patterns should satisfy three features, namely, breaking the metal edge, etching in the high-intensity magnetic field region, and etching through the metal in the thickness direction. Different fractal patterns lead to different results. By altering the eddy current distribution, the fractal pattern slots reduce the eddy current losses when the metals show resistance effects and suppress the induced magnetic field in the metals when the metals show inductance effects. Fractal pattern slots in multilayer high conductivity metals (e.g., Cu) reduce the induced magnetic field intensity significantly. Furthermore, transfer power, transfer efficiency, receiving efficiency, and eddy current losses all increase with the number of etched layers. These results enable MCET systems to transfer energy efficiently and operate safely in metal-shielded equipment.

Improved switching based filter for protecting thin lines of color images
Chang-cheng WU, Chun-yu ZHAO, Da-yue CHEN
Front. Inform. Technol. Electron. Eng.    2010, 11 (1): 36-44.   DOI: 10.1631/jzus.C0910145
Abstract   PDF (581KB) ( 1904 )  
The classical vector median filter (VMF) has been widely used to remove impulse noise from color images. However, since the VMF cannot identify thin lines during the denoising process, many thin lines may be removed as noise. This serious problem can be solved by the newly proposed filter, which uses a noise detector to find these thin lines and keep them unchanged. In this new approach, the noise detection scheme applied to the currently processed pixel counts the close pixels among its eight neighbors and in an expanded window, to determine whether the current pixel is corrupted by impulse noise. Based on previous outputs, our algorithm can improve the performance of detecting and canceling impulse noise. Extensive experiments indicate that this approach can remove impulse noise from a color image without distorting the useful information.
Cited: WebOfScience(1)
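The two ingredients, the classical vector median and a neighbour-counting switching rule, can be sketched as follows; the tolerance and count thresholds are illustrative values, not the paper's.

```python
def vector_median(window):
    """Vector median filter: the output is the pixel (an (R,G,B) tuple)
    minimising the sum of L1 distances to every pixel in the window."""
    def total_distance(p):
        return sum(sum(abs(a - b) for a, b in zip(p, q)) for q in window)
    return min(window, key=total_distance)

def looks_like_noise(center, neighbours, tol=20, min_close=2):
    """Switching rule sketch: count neighbours whose channels are all
    within tol of the centre pixel; with fewer than min_close such
    neighbours the centre is treated as impulse noise and filtered,
    otherwise it is kept unchanged -- which is what protects thin
    lines, whose pixels always have a few close neighbours along the
    line. (Thresholds are illustrative, not the paper's values.)"""
    close = sum(1 for q in neighbours
                if all(abs(a - b) <= tol for a, b in zip(center, q)))
    return close < min_close

flat = [(10, 10, 10)] * 8          # a uniform 8-neighbourhood
```

An isolated outlier in a flat neighbourhood is flagged and replaced by the vector median; a pixel close to its neighbours is passed through untouched.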
Automatic pectoral muscle boundary detection in mammograms based on Markov chain and active contour model
Lei WANG, Miao-liang ZHU, Li-ping DENG, Xin YUAN
Front. Inform. Technol. Electron. Eng.    2010, 11 (2): 111-118.   DOI: 10.1631/jzus.C0910025
Abstract   PDF (609KB) ( 2551 )  
Automatic pectoral muscle removal on the medio-lateral oblique (MLO) view of a mammogram is an essential step for many mammographic processing algorithms. However, it is still a very difficult task, since the sizes, shapes, and intensity contrasts of pectoral muscles vary greatly from one MLO view to another. In this paper, we propose a novel method based on a discrete time Markov chain (DTMC) and an active contour model to automatically detect the pectoral muscle boundary. The DTMC is used to model two important characteristics of the pectoral muscle edge, i.e., continuity and uncertainty. After a rough boundary is obtained, an active contour model is applied to refine the detection results. Experimental results on images from the Digital Database for Screening Mammography (DDSM) showed that our method can overcome many limitations of existing algorithms. The false positive (FP) and false negative (FN) pixel percentages are less than 5% in 77.5% of mammograms. The detection precision of 91% meets the clinical requirement.
Cited: WebOfScience(9)
Congestion avoidance, detection and alleviation in wireless sensor networks
Wei-wei FANG, Ji-ming CHEN, Lei SHU, Tian-shu CHU, De-pei QIAN
Front. Inform. Technol. Electron. Eng.    2010, 11 (1): 63-73.   DOI: 10.1631/jzus.C0910204
Abstract   PDF (638KB) ( 4229 )  
Congestion in wireless sensor networks (WSNs) not only causes severe information loss but also leads to excessive energy consumption. To address this problem, a novel scheme for congestion avoidance, detection and alleviation (CADA) in WSNs is proposed in this paper. By exploiting data characteristics, a small number of representative nodes are chosen from those in the event area as data sources, so that the source traffic can be suppressed proactively to avoid potential congestion. Once congestion occurs inevitably due to traffic mergence, it will be detected in a timely way by the hotspot node based on a combination of buffer occupancy and channel utilization. Congestion is then alleviated reactively by either dynamic traffic multiplexing or source rate regulation in accordance with the specific hotspot scenarios. Extensive simulation results under typical congestion scenarios are presented to illuminate the distinguished performance of the proposed scheme.
Cited: WebOfScience(13)
Adaptive fuzzy integral sliding mode velocity control for the cutting system of a trench cutter
Qi-yan TIAN, Jian-hua WEI, Jin-hui FANG, Kai GUO
Front. Inform. Technol. Electron. Eng.    2016, 17 (1): 55-66.   DOI: 10.1631/FITEE.15a0160
Abstract   HTML PDF (1392KB) ( 591 )  

This paper presents a velocity controller for the cutting system of a trench cutter (TC). The cutting velocity of a cutting system is affected by the unknown load characteristics of rock and soil. In addition, geological conditions vary with time. Due to the complex load characteristics of rock and soil, the cutting load torque of a cutter is related to the geological conditions and the feeding velocity of the cutter. Moreover, a cutter's dynamic model is subject to uncertainties with unknown effects on its function. In this study, to deal with the particular characteristics of a cutting system, a novel adaptive fuzzy integral sliding mode control (AFISMC) is designed for controlling cutting velocity. The model combines the robust characteristics of an integral sliding mode controller with the adaptive adjusting characteristics of an adaptive fuzzy controller. The AFISMC cutting velocity controller is synthesized using the backstepping technique. The stability of the whole system, including the fuzzy inference system, the integral sliding mode controller, and the cutting system, is proven using Lyapunov theory. Experiments have been conducted on a TC test bench with the AFISMC under different operating conditions. The experimental results demonstrate that the proposed AFISMC cutting velocity controller gives a superior and robust velocity tracking performance.

Antenna-in-package system integrated with meander line antenna based on LTCC technology
Gang DONG, Wei XIONG, Zhao-yao WU, Yin-tang YANG
Front. Inform. Technol. Electron. Eng.    2016, 17 (1): 67-73.   DOI: 10.1631/FITEE.1500167
Abstract   HTML PDF (2719KB) ( 720 )  

We present an antenna-in-package system integrated with a meander line antenna based on low temperature co-fired ceramic (LTCC) technology. The proposed system employs a meander line patch antenna, a packaging layer, and a laminated multi-chip module (MCM) for integration of integrated circuit (IC) bare chips. A microstrip feed line is used to reduce the interaction between patch and package. To decrease electromagnetic coupling, a via hole structure is designed and analyzed. The meander line antenna achieved a bandwidth of 220 MHz with the center frequency at 2.4 GHz, a maximum gain of 2.2 dB, and a radiation efficiency of about 90% over its operational frequency. The whole system, with a small size of 20.2 mm × 6.1 mm × 2.6 mm, can be easily realized by a standard LTCC process. This antenna-in-package system integrated with a meander line antenna was fabricated, and the experimental results agreed well with simulations.

Image compression based on spatial redundancy removal and image inpainting
Vahid BASTANI, Mohammad Sadegh HELFROUSH, Keyvan KASIRI
Front. Inform. Technol. Electron. Eng.    2010, 11 (2): 92-100.   DOI: 10.1631/jzus.C0910182
Abstract   PDF (606KB) ( 4184 )  
We present an algorithm for image compression based on an image inpainting method. First, the image regions that can be accurately recovered are located. Then, to reduce the data, the information of such regions is removed. The remaining data, together with the essential details needed to recover the removed regions, are encoded to produce the output data. At the decoder, an inpainting method is applied to retrieve the removed regions using the information extracted at the encoder. The image inpainting technique utilizes partial differential equations (PDEs) to recover information, and is designed to achieve high performance in terms of image compression criteria. The algorithm was examined on various images. A high compression ratio of 1:40 was achieved at acceptable quality. Experimental results showed visible quality improvement at a high compression ratio compared with JPEG.
Cited: WebOfScience(3)
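The decoder side of such a scheme can be illustrated with the simplest member of the PDE family, harmonic (heat-equation) inpainting; the paper's actual PDE and its encoder-side region selection are not reproduced here.

```python
import numpy as np

def inpaint(channel, mask, iters=400):
    """Harmonic (heat-equation) inpainting sketch: missing pixels
    repeatedly take the average of their four neighbours until the
    hole is smoothly filled in from its boundary. mask==True marks the
    pixels removed at the encoder; known pixels are never updated."""
    out = channel.astype(float).copy()
    out[mask] = out[~mask].mean()            # crude initial guess
    for _ in range(iters):
        avg = (np.roll(out, 1, 0) + np.roll(out, -1, 0) +
               np.roll(out, 1, 1) + np.roll(out, -1, 1)) / 4.0
        out[mask] = avg[mask]                # relax only the hole
    return out

img = np.full((16, 16), 7.0)                 # a flat region is trivially
mask = np.zeros_like(img, dtype=bool)        # recoverable, so an encoder
mask[4:12, 4:12] = True                      # could drop it entirely
restored = inpaint(img, mask)
```

This is exactly why the compression works: regions the PDE can regenerate from their surroundings need not be transmitted at all.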
Extracting hand articulations from monocular depth images using curvature scale space descriptors
Shao-fan WANG, Chun LI, De-hui KONG, Bao-cai YIN
Front. Inform. Technol. Electron. Eng.    2016, 17 (1): 41-54.   DOI: 10.1631/FITEE.1500126
Abstract   HTML PDF (2840KB) ( 494 )  

We propose a framework of hand articulation detection from a monocular depth image using curvature scale space (CSS) descriptors. We extract the hand contour from an input depth image, and obtain the fingertips and finger-valleys of the contour using the local extrema of a modified CSS map of the contour. Then we recover the undetected fingertips according to the local change of depths of points in the interior of the contour. Compared with traditional appearance-based approaches using either angle detectors or convex hull detectors, the modified CSS descriptor extracts the fingertips and finger-valleys more precisely since it is more robust to noisy or corrupted data; moreover, the local extrema of depths recover the fingertips of bending fingers well while traditional appearance-based approaches hardly work without matching models of hands. Experimental results show that our method captures the hand articulations more precisely compared with three state-of-the-art appearance-based approaches.
Efficient dynamic pruning on largest scores first (LSF) retrieval
Kun JIANG, Yue-xiang YANG
Front. Inform. Technol. Electron. Eng.    2016, 17 (1): 1-14.   DOI: 10.1631/FITEE.1500190
Abstract   HTML PDF (451KB) ( 606 )  

Inverted index traversal techniques have been studied to address the query processing performance challenges of web search engines, but still leave much room for improvement. In this paper, we focus on inverted index traversal over document-sorted indexes and the optimization technique called dynamic pruning, which can substantially reduce the computational resources required. We propose a novel exhaustive index traversal scheme called largest scores first (LSF) retrieval, in which candidates are first selected from the posting list of the most important query term, the one with the largest upper-bound score, and are then fully scored with the contributions of the remaining query terms. The scheme effectively reduces the memory consumption of existing term-at-a-time (TAAT) retrieval and the candidate selection cost of existing document-at-a-time (DAAT) retrieval, at the expense of revisiting the posting lists of the remaining query terms. Preliminary analysis and implementation show performance comparable to these two well-known baselines. To further reduce the number of postings that need to be revisited, we present efficient rank-safe dynamic pruning techniques based on LSF, including two important optimizations called list omitting (LSF_LO) and partial scoring (LSF_PS), which make full use of query term importance. Finally, experimental results on the TREC GOV2 collection show that our new index traversal approaches reduce query latency by almost 27% over the WAND baseline and produce slightly better results than the MaxScore baseline, while returning the same results as exhaustive evaluation.
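The LSF candidate-selection idea can be sketched in a few lines. This toy in-memory version (the `postings`/`ub` dictionaries are illustrative, not the paper's index format, and no dynamic pruning is applied) selects candidates from the lead term's posting list and then completes their scores from the remaining lists:

```python
import heapq

def lsf_topk(postings, ub, k=2):
    """Largest-scores-first sketch: pick candidate documents from the posting
    list of the term with the highest score upper bound, then complete each
    candidate's score by revisiting the remaining terms' lists.
    postings: {term: {doc_id: score}}; ub: {term: upper-bound score}."""
    lead = max(postings, key=lambda t: ub[t])      # most important query term
    rest = [t for t in postings if t != lead]
    scores = dict(postings[lead])                  # partial scores of candidates
    for t in rest:                                 # revisit the other lists
        for doc, s in postings[t].items():
            if doc in scores:                      # only the lead's candidates
                scores[doc] += s
    return heapq.nlargest(k, scores.items(), key=lambda kv: kv[1])

# Term 'a' has the largest upper bound, so its postings supply the candidates.
postings = {'a': {1: 5.0, 2: 1.0}, 'b': {1: 0.5, 3: 0.9}}
top = lsf_topk(postings, {'a': 5.0, 'b': 0.9}, k=2)
```

The LSF_LO and LSF_PS optimizations of the paper then cut down exactly the inner revisiting loop shown here.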
Non-uniform B-spline curves with multiple shape parameters
Juan Cao, Guo-zhao Wang
Front. Inform. Technol. Electron. Eng.    2011, 12 (10): 800-808.   DOI: 10.1631/jzus.C1000381
Abstract   PDF (554KB) ( 1921 )  
We introduce a class of shape-adjustable spline curves defined over a non-uniform knot sequence. These curves not only share many of the valuable properties of ordinary non-uniform B-spline curves, but are also shape-adjustable under a fixed control polygon. Our method is based on the degree elevation of B-spline curves, in which the maximum number of degrees of freedom is added to a curve parameterized in terms of a non-uniform B-spline. We also discuss the geometric effect of adjusting the shape parameters and propose practical shape modification algorithms, which are indispensable from the user's perspective.
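Evaluation over a non-uniform knot sequence rests on the Cox-de Boor recursion. The minimal sketch below (plain Python; the paper's shape parameters and degree-elevation construction are not reproduced) evaluates the basis functions and checks partition of unity on a clamped non-uniform knot vector:

```python
def bspline_basis(i, p, u, knots):
    """Cox-de Boor recursion for the i-th B-spline basis function of
    degree p over a (possibly non-uniform) knot sequence."""
    if p == 0:
        return 1.0 if knots[i] <= u < knots[i + 1] else 0.0
    left = right = 0.0
    if knots[i + p] > knots[i]:
        left = ((u - knots[i]) / (knots[i + p] - knots[i])
                * bspline_basis(i, p - 1, u, knots))
    if knots[i + p + 1] > knots[i + 1]:
        right = ((knots[i + p + 1] - u) / (knots[i + p + 1] - knots[i + 1])
                 * bspline_basis(i + 1, p - 1, u, knots))
    return left + right

# Clamped non-uniform cubic knot vector; on the interior of the domain
# the basis functions must sum to 1 (partition of unity).
knots = [0, 0, 0, 0, 0.3, 1.2, 2, 2, 2, 2]
n = len(knots) - 3 - 1                     # number of cubic basis functions
total = sum(bspline_basis(i, 3, 0.7, knots) for i in range(n))
```

Partition of unity is one of the "valuable properties" any shape-adjustable generalization needs to preserve, since it guarantees affine invariance of the curve.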
Cited: WebOfScience(1)
Image meshing via hierarchical optimization
Hao XIE, Ruo-feng TONG
Front. Inform. Technol. Electron. Eng.    2016, 17 (1): 32-40.   DOI: 10.1631/FITEE.1500171
Abstract   HTML PDF (1027KB) ( 551 )  

Vector graphics, as a geometric representation of raster images, have many advantages, e.g., resolution independence and ease of editing. A popular way to convert raster images into vector graphics is image meshing, which aims to find a mesh that represents an image as faithfully as possible. For traditional meshing algorithms, the crux of the problem lies mainly in the high non-linearity and non-smoothness of the objective, which makes it difficult to find a desirable optimal solution. To ameliorate this situation, we present a hierarchical optimization algorithm that solves the problem from coarser levels to finer ones, initializing each level with the solution from the coarser level above it. To further simplify the problem, the original non-convex problem is converted into a linear least squares one, and thus becomes convex and much easier to solve. A dictionary learning framework is used to combine geometry and topology elegantly, and an alternating scheme is employed to solve both parts. Experiments show that our algorithm runs fast and achieves better results than existing ones for most images.
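The coarse-to-fine warm-start idea can be illustrated on a 1D convex least-squares surrogate. This sketch uses a hypothetical toy smoothing objective, not the paper's meshing energy: it solves the problem at a coarse level, upsamples the result, and uses it to initialize the fine level:

```python
import numpy as np

def grad(x, target, lam):
    """Gradient of ||x - target||^2 + lam * sum((x[i+1]-x[i])^2),
    a convex linear least-squares surrogate objective."""
    g = 2.0 * (x - target)
    d = np.diff(x)
    g[1:] += 2.0 * lam * d
    g[:-1] -= 2.0 * lam * d
    return g

def solve_level(target, x0, lam=0.1, iters=300, step=0.1):
    """One hierarchy level: gradient descent warm-started from x0."""
    x = x0.copy()
    for _ in range(iters):
        x -= step * grad(x, target, lam)
    return x

target = np.sin(np.linspace(0, np.pi, 64))
coarse = solve_level(target[::2], target[::2] * 0.0)   # coarse level, cold start
x = solve_level(target, np.repeat(coarse, 2))          # fine level, warm-started
```

In the paper the same pattern is applied per level of the mesh hierarchy, with the convexified meshing objective in place of this toy energy.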
A new protocol of wide use for e-mail with perfect forward secrecy
Tzung-her CHEN, Yan-ting WU
Front. Inform. Technol. Electron. Eng.    2010, 11 (1): 74-78.   DOI: 10.1631/jzus.A0910126
Abstract   PDF (219KB) ( 1807 )  
Recently, Sun et al. (2005) highlighted the essential property of perfect forward secrecy (PFS) for e-mail protocols when a higher security level is desired. However, the protocols of Sun et al. (2005) take only a single e-mail server into account, whereas it is much more common for the sender and the recipient to be registered at different e-mail servers. Compared with existing protocols, the protocol proposed in this paper covers the scenario in which the sender and the recipient are registered at different servers. The proposed protocol is carefully designed to achieve PFS and end-to-end security, and to satisfy the requirements of confidentiality, origin authentication, integrity, and easy key management. A comparison in terms of functionality and computational efficiency demonstrates the superiority of the proposed scheme.
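PFS itself comes from deriving session keys out of ephemeral, per-session secrets. The toy Diffie-Hellman sketch below (an illustrative prime and generator, no authentication; not the protocol of this paper) shows why discarding the exponents after each e-mail session protects past traffic even if long-term keys later leak:

```python
import secrets

P = 2**127 - 1   # a prime (toy group only; real systems use vetted DH groups)
G = 3

def ephemeral():
    """Fresh per-session exponent; in a PFS protocol it is erased after use,
    so no later key compromise can reconstruct this session's key."""
    x = secrets.randbelow(P - 2) + 1
    return x, pow(G, x, P)

a, A = ephemeral()               # sender's side
b, B = ephemeral()               # recipient's side (possibly another server)
k_sender = pow(B, a, P)          # both sides derive the same session key
k_recipient = pow(A, b, P)
```

A deployable e-mail protocol must additionally authenticate `A` and `B` (e.g. with the parties' long-term keys) to prevent man-in-the-middle attacks, which is part of what the proposed protocol's design addresses.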
Cited: WebOfScience(1)
Blind carrier frequency offset estimation for constant modulus signaling based OFDM systems: algorithm, identifiability, and performance analysis
Wei-yang XU, Bo LU, Xing-bo HU, Zhi-liang HONG
Front. Inform. Technol. Electron. Eng.    2010, 11 (1): 14-26.   DOI: 10.1631/jzus.C0910150
Abstract   PDF (445KB) ( 2273 )  
Carrier frequency offset (CFO) estimation is critical for orthogonal frequency-division multiplexing (OFDM) based transmissions. In this paper, we present a low-complexity, blind CFO estimator for OFDM systems with constant modulus (CM) signaling. Both single-input single-output (SISO) and multiple-input multiple-output (MIMO) systems are considered. Under the assumption that the channel remains constant during estimation, we prove that the CFO can be estimated uniquely and exactly by minimizing the power difference of the received data on the same subcarriers between two consecutive OFDM symbols; thus, identifiability is assured. Inspired by the sinusoid-like cost function, curve fitting is utilized to simplify the algorithm. Performance analysis reveals that the proposed estimator is asymptotically unbiased and that the mean square error (MSE) exhibits no error floor. We show that this blind scheme can also be applied to MIMO systems. Numerical simulations show that the proposed estimator provides excellent performance compared with existing blind methods.
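The power-difference criterion can be illustrated numerically. This NumPy sketch uses a grid search in place of the paper's curve fitting, an ideal flat channel, and no noise or cyclic prefix; it recovers the fractional CFO as the minimizer of the inter-symbol subcarrier power difference:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64
# QPSK data: constant modulus, so per-subcarrier power is 1 on both symbols.
X = np.exp(1j * 0.5 * np.pi * rng.integers(0, 4, (2, N)))
s = np.fft.ifft(X, axis=1).ravel()          # two consecutive OFDM symbols
eps_true = 0.18                             # CFO in subcarrier spacings
n = np.arange(2 * N)
r = s * np.exp(2j * np.pi * eps_true * n / N)   # flat, constant channel assumed

def cost(eps):
    """Power difference on the same subcarriers between the two symbols,
    after compensating a candidate CFO eps; zero only at the true CFO."""
    y = (r * np.exp(-2j * np.pi * eps * n / N)).reshape(2, N)
    P = np.abs(np.fft.fft(y, axis=1))**2
    return np.sum((P[0] - P[1])**2)

grid = np.round(np.arange(-0.5, 0.5, 0.01), 2)
est = float(grid[np.argmin([cost(e) for e in grid])])
```

Because the cost is sinusoid-like in the candidate CFO, the paper replaces this brute-force grid with a few evaluations plus curve fitting, which is where the low complexity comes from.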
Cited: WebOfScience(2)
An efficient hardware design for HDTV H.264/AVC encoder
Liang Wei, Dan-dan Ding, Juan Du, Bin-bin Yu, Lu Yu
Front. Inform. Technol. Electron. Eng.    2011, 12 (6): 499-506.   DOI: 10.1631/jzus.C1000201
Abstract   PDF (187KB) ( 2467 )  
This paper presents a hardware-efficient high-definition television (HDTV) encoder for H.264/AVC. We use a two-level mode decision (MD) mechanism to reduce complexity while maintaining performance, and design a sharable architecture for normal-mode fractional motion estimation (NFME), special-mode fractional motion estimation (SFME), and luma motion compensation (LMC) to decrease the hardware cost. Based on these techniques, we adopt a four-stage macroblock pipeline with an efficient memory management strategy, which greatly reduces on-chip memory and bandwidth requirements. The proposed encoder uses about 1126k gates with an average Bjontegaard-delta peak signal-to-noise ratio (BD-PSNR) decrease of 0.5 dB compared with JM15.0. It fully satisfies real-time encoding of 1080p at 30 frames/s for the H.264/AVC high profile.
Cited: WebOfScience(2)