Most Downloaded Articles


Most Downloaded in Recent Year
Joint bandwidth allocation and power control with interference constraints in multi-hop cognitive radio networks
Guang-xi ZHU, Xue-bing PEI, Dai-ming QU, Jian LIU, Qing-ping WANG, Gang SU
Front. Inform. Technol. Electron. Eng.    2010, 11 (2): 139-150.   DOI: 10.1631/jzus.C0910070
Abstract | PDF (470KB) | 2173 downloads
We investigate bandwidth allocation and power control schemes in orthogonal frequency division multiplexing (OFDM) based multi-hop cognitive radio networks, where the color-sensitive graph coloring (CSGC) model is an efficient solution to the spectrum assignment problem. We extend the model by taking the power control strategy into account, to avoid interference among secondary users and to adapt to dynamic topologies. We formulate a joint optimization problem encompassing channel allocation and power control, with interference constrained below a tolerable limit. The optimization objective, under two different optimization strategies, focuses on routes rather than on links as in traditional approaches. We present a heuristic solution to this nondeterministic polynomial-time (NP)-hard problem: channels are allocated iteratively according to the lowest transmission power that guarantees link connection, reusing channels as much as possible; the transmission power of each link is then raised gradually from that lowest level to improve channel capacity, until the co-channel links can no longer satisfy the interference constraints. Numerical results show that our proposed strategies outperform existing spectrum assignment algorithms in both total network bandwidth and the minimum route bandwidth over all routes, while saving transmission power.
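The two-phase heuristic summarized above lends itself to a compact sketch. The following Python toy is our own illustration rather than the authors' code: the pairwise interferes test, the gain matrix, and the integer power levels are assumptions standing in for the paper's interference model.

```python
def interferes(link_a, link_b, powers, gain, limit):
    """Toy pairwise test: cross-link received power must stay below `limit`."""
    return (powers[link_a] * gain[link_a][link_b] > limit or
            powers[link_b] * gain[link_b][link_a] > limit)

def allocate(links, channels, p_min, p_max, gain, limit):
    assign, power = {}, {}
    for ln in sorted(links, key=lambda l: p_min[l]):   # easiest links first
        for ch in channels:                            # reuse channels when possible
            co = [o for o in assign if assign[o] == ch]
            trial = {**power, ln: p_min[ln]}
            if all(not interferes(ln, o, trial, gain, limit) for o in co):
                assign[ln], power[ln] = ch, p_min[ln]
                break
    changed = True
    while changed:                                     # phase 2: grow power levels
        changed = False
        for ln in assign:
            trial = dict(power)
            trial[ln] = power[ln] + 1
            co = [o for o in assign if assign[o] == assign[ln] and o != ln]
            if trial[ln] <= p_max[ln] and all(
                    not interferes(ln, o, trial, gain, limit) for o in co):
                power, changed = trial, True
    return assign, power

# Tiny demo: two links, one channel, symmetric cross-gain.
gain = {1: {2: 0.1}, 2: {1: 0.1}}
print(allocate([1, 2], ["ch0"], {1: 2, 2: 3}, {1: 6, 2: 6}, gain, limit=0.8))
```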
Modeling of hydraulic turbine systems based on a Bayesian-Gaussian neural network driven by sliding window data
Yi-jian LIU, Yan-jun FANG, Xue-mei ZHU
Front. Inform. Technol. Electron. Eng.    2010, 11 (1): 56-62.   DOI: 10.1631/jzus.C0910176
Abstract | PDF (337KB) | 2073 downloads
In this paper, a novel Bayesian-Gaussian neural network (BGNN) is proposed and applied to on-line modeling of a hydraulic turbine system (HTS). The new BGNN takes into account the complex nonlinear characteristics of an HTS. The BGNN involves two redefined training procedures: off-line training of the threshold matrix parameters, optimized by swarm optimization algorithms, and on-line predictive application driven by the sliding window data method. The characteristic models of an HTS are identified using the new BGNN method, and simulation results show the effectiveness of the BGNN in addressing HTS modeling problems.
Cited: Web of Science (2)
Robust lossless data hiding scheme
Xian-ting ZENG, Xue-zeng PAN, Ling-di PING, Zhuo LI
Front. Inform. Technol. Electron. Eng.    2010, 11 (2): 101-110.   DOI: 10.1631/jzus.C0910177
Abstract | PDF (549KB) | 2768 downloads
This paper presents a robust lossless data hiding scheme. The original cover image can be recovered without any distortion after data extraction if the stego-image remains intact; conversely, the hidden data can still be extracted correctly even if the stego-image undergoes JPEG compression to some extent. A cover image is divided into a number of non-overlapping blocks, and the arithmetic difference of each block is calculated. By shifting the arithmetic difference value, we can embed bits into the blocks. The shift quantity and shifting rule are fixed for all blocks, so reversibility is achieved. Furthermore, because the bit-0 and bit-1 zones are separated, and because of the particular distribution of the arithmetic differences, minor changes applied to the stego-image by non-malicious attacks such as JPEG compression will not cause the two zones to overlap, so robustness is achieved. The new embedding mechanism enhances the embedding capacity, and the addition of a threshold makes the algorithm more robust. Experimental results showed that the performance of the proposed scheme is significantly improved compared with previous schemes.
Cited: Web of Science (1)
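To make the shifting idea concrete, here is a deliberately simplified sketch. It is our illustration, not the paper's algorithm: the block size, the shift S, the definition of the arithmetic difference, and the assumption that cover-block differences cluster near zero are all ours, and the zone layout, threshold handling, and exact recovery step of the real scheme are omitted.

```python
import numpy as np

S = 8  # fixed shift applied to every bit-1 block (assumed value)

def block_diff(block):
    """Assumed 'arithmetic difference': mean(left half) - mean(right half)."""
    h = block.shape[1] // 2
    return float(block[:, :h].mean() - block[:, h:].mean())

def embed(image, bits, bs=8):
    img = image.astype(np.int16).copy()        # widen to avoid overflow
    k = 0
    for r in range(0, img.shape[0] - bs + 1, bs):
        for c in range(0, img.shape[1] - bs + 1, bs):
            if k >= len(bits):
                return img
            if bits[k]:                        # shift half the block: diff moves by +S
                img[r:r+bs, c:c+bs//2] += S
            k += 1
    return img

def extract(stego, n_bits, bs=8, thr=S / 2):
    bits = []
    for r in range(0, stego.shape[0] - bs + 1, bs):
        for c in range(0, stego.shape[1] - bs + 1, bs):
            if len(bits) >= n_bits:
                return bits
            bits.append(1 if block_diff(stego[r:r+bs, c:c+bs]) > thr else 0)
    return bits

cover = np.full((32, 32), 128, np.uint8)       # flat toy cover (differences ~0)
stego = embed(cover, [1, 0, 1, 1])
print(extract(stego, 4))                       # [1, 0, 1, 1]
```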
Automatic actor-based program partitioning
Omid BUSHEHRIAN
Front. Inform. Technol. Electron. Eng.    2010, 11 (1): 45-55.   DOI: 10.1631/jzus.C0910096
Abstract | PDF (303KB) | 2067 downloads
Software reverse engineering techniques are applied most often to reconstruct the architecture of a program with respect to quality constraints, or non-functional requirements such as maintainability or reusability. In this paper, AOPR, a novel actor-oriented program reverse engineering approach, is proposed to reconstruct an object-oriented program architecture based on a high-performance model such as the actor model. Reconstructing the program architecture based on this model results in concurrent execution of the program invocations and consequently increases the overall performance of the program, provided enough processors are available. The proposed reverse engineering approach applies a hill-climbing clustering algorithm to find actors.
Cited: Web of Science (1)
Congestion avoidance, detection and alleviation in wireless sensor networks
Wei-wei FANG, Ji-ming CHEN, Lei SHU, Tian-shu CHU, De-pei QIAN
Front. Inform. Technol. Electron. Eng.    2010, 11 (1): 63-73.   DOI: 10.1631/jzus.C0910204
Abstract | PDF (638KB) | 4338 downloads
Congestion in wireless sensor networks (WSNs) not only causes severe information loss but also leads to excessive energy consumption. To address this problem, a novel scheme for congestion avoidance, detection and alleviation (CADA) in WSNs is proposed in this paper. By exploiting data characteristics, a small number of representative nodes are chosen from those in the event area as data sources, so that the source traffic can be suppressed proactively to avoid potential congestion. Once congestion inevitably occurs due to traffic merging, it is detected in a timely way by the hotspot node, based on a combination of buffer occupancy and channel utilization. Congestion is then alleviated reactively by either dynamic traffic multiplexing or source rate regulation, in accordance with the specific hotspot scenario. Extensive simulation results under typical congestion scenarios are presented to demonstrate the performance of the proposed scheme.
Cited: Web of Science (13)
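The detection step can be sketched in a few lines, assuming a simple weighted combination of the two signals; the weight and threshold below are illustrative placeholders, not the paper's constants.

```python
def congestion_index(buf_len, buf_cap, busy_time, total_time, w=0.5):
    occupancy = buf_len / buf_cap              # fraction of queue in use
    utilization = busy_time / total_time       # fraction of time channel is busy
    return w * occupancy + (1 - w) * utilization

def is_congested(buf_len, buf_cap, busy_time, total_time, threshold=0.8):
    return congestion_index(buf_len, buf_cap, busy_time, total_time) >= threshold

# A node with a 3/4-full buffer on a channel busy 90% of the time scores
# 0.5*0.75 + 0.5*0.9 = 0.825 and is flagged as a hotspot.
print(is_congested(48, 64, 0.9, 1.0))          # True
```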
Blind carrier frequency offset estimation for constant modulus signaling based OFDM systems: algorithm, identifiability, and performance analysis
Wei-yang XU, Bo LU, Xing-bo HU, Zhi-liang HONG
Front. Inform. Technol. Electron. Eng.    2010, 11 (1): 14-26.   DOI: 10.1631/jzus.C0910150
Abstract | PDF (445KB) | 2388 downloads
Carrier frequency offset (CFO) estimation is critical for orthogonal frequency-division multiplexing (OFDM) based transmissions. In this paper, we present a low-complexity, blind CFO estimator for OFDM systems with constant modulus (CM) signaling. Both single-input single-output (SISO) and multiple-input multiple-output (MIMO) systems are considered. Under the assumption that the channel remains constant during estimation, we prove that the CFO can be estimated uniquely and exactly by minimizing the power difference of the received data on the same subcarriers between two consecutive OFDM symbols; thus, identifiability is assured. Inspired by the sinusoid-like cost function, curve fitting is used to simplify the algorithm. Performance analysis reveals that the proposed estimator is asymptotically unbiased and that its mean square error (MSE) exhibits no error floor. We show that this blind scheme can also be applied to MIMO systems. Numerical simulations show that the proposed estimator performs excellently compared with existing blind methods.
Cited: Web of Science (2)
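The cost-function idea can be demonstrated with a small self-contained Python experiment (our illustration, with an ideal channel, no noise, and no cyclic prefix; a plain grid search stands in for the paper's curve-fitting step).

```python
import numpy as np

N = 64
rng = np.random.default_rng(0)
qpsk = np.exp(1j * np.pi / 2 * rng.integers(0, 4, size=(2, N)))  # CM symbols
tx = np.fft.ifft(qpsk, axis=1)                 # two consecutive OFDM symbols

eps_true = 0.12                                # CFO in units of subcarrier spacing
n = np.arange(2 * N)
rx = np.concatenate(tx) * np.exp(2j * np.pi * eps_true * n / N)

def cost(eps):                                 # per-subcarrier power difference
    comp = rx * np.exp(-2j * np.pi * eps * n / N)
    s1, s2 = np.fft.fft(comp[:N]), np.fft.fft(comp[N:])
    return np.sum((np.abs(s1) ** 2 - np.abs(s2) ** 2) ** 2)

grid = np.linspace(-0.5, 0.5, 2001)
print(grid[np.argmin([cost(e) for e in grid])])  # ~0.12
```

At the true offset the compensated symbols recover their constant-modulus spectra, so the power difference drops to zero; restricting the search to half a subcarrier spacing sidesteps the integer ambiguity that the paper's identifiability analysis addresses.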
Automatic pectoral muscle boundary detection in mammograms based on Markov chain and active contour model
Lei WANG, Miao-liang ZHU, Li-ping DENG, Xin YUAN
Front. Inform. Technol. Electron. Eng.    2010, 11 (2): 111-118.   DOI: 10.1631/jzus.C0910025
Abstract | PDF (609KB) | 2658 downloads
Automatic pectoral muscle removal on the medio-lateral oblique (MLO) view of a mammogram is an essential step for many mammographic processing algorithms. It remains a difficult task, however, since the size, shape and intensity contrast of the pectoral muscle vary greatly from one MLO view to another. In this paper, we propose a novel method based on a discrete time Markov chain (DTMC) and an active contour model to automatically detect the pectoral muscle boundary. The DTMC models two important characteristics of the pectoral muscle edge, continuity and uncertainty. After a rough boundary is obtained, an active contour model is applied to refine the detection result. Experimental results on images from the Digital Database for Screening Mammography (DDSM) showed that our method overcomes many limitations of existing algorithms. The false positive (FP) and false negative (FN) pixel percentages are less than 5% in 77.5% of the mammograms, and the detection precision of 91% meets the clinical requirement.
Cited: Web of Science (9)
Image compression based on spatial redundancy removal and image inpainting
Vahid BASTANI, Mohammad Sadegh HELFROUSH, Keyvan KASIRI
Front. Inform. Technol. Electron. Eng.    2010, 11 (2): 92-100.   DOI: 10.1631/jzus.C0910182
Abstract | PDF (606KB) | 4288 downloads
We present an algorithm for image compression based on an image inpainting method. First, the image regions that can be accurately recovered are located. Then, to reduce the data, the information of such regions is removed. The remaining data, together with the details essential for recovering the removed regions, are encoded to produce the output data. At the decoder, an inpainting method retrieves the removed regions using the information extracted at the encoder. The inpainting technique uses partial differential equations (PDEs) to recover the information and is designed to achieve high performance in terms of image compression criteria. The algorithm was examined on various images: a high compression ratio of 1:40 was achieved at acceptable quality, and experimental results showed visible quality improvements over JPEG at high compression ratios.
Cited: Web of Science (3)
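As a flavor of the decoder side, here is a bare-bones PDE inpainting sketch: plain heat-equation diffusion with wrap-around borders and an assumed iteration count. The paper's method additionally exploits the hints extracted at the encoder, which this sketch does not model.

```python
import numpy as np

def inpaint(img, mask, iters=500):
    """Fill mask==True pixels by iterative 4-neighbour averaging (heat diffusion)."""
    out = img.astype(float).copy()
    out[mask] = out[~mask].mean()              # rough initial fill
    for _ in range(iters):
        avg = (np.roll(out, 1, 0) + np.roll(out, -1, 0) +
               np.roll(out, 1, 1) + np.roll(out, -1, 1)) / 4.0
        out[mask] = avg[mask]                  # update only the missing pixels
    return out

# Example: cut a hole in a gradient image and fill it back.
img = np.tile(np.arange(32.0), (32, 1))
mask = np.zeros_like(img, dtype=bool)
mask[10:20, 12:22] = True
print(float(np.abs(inpaint(img, mask) - img)[mask].max()))  # near 0
```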
Review of the current and future technologies for video compression
Lu YU, Jian-peng WANG
Front. Inform. Technol. Electron. Eng.    2010, 11 (1): 1-13.   DOI: 10.1631/jzus.C0910684
Abstract | PDF (339KB) | 2845 downloads
Many important developments in video compression technologies have occurred during the past two decades. The block-based discrete cosine transform with motion compensation hybrid coding scheme has been widely employed by most available video coding standards, notably the ITU-T H.26x and ISO/IEC MPEG-x families and the video part of the China audio video coding standard (AVS). The objective of this paper is to review the developments of the four basic building blocks of the hybrid coding scheme, namely predictive coding, transform coding, quantization and entropy coding, and to give theoretical analyses and summaries of the technological advancements. We further analyze the development trends and perspectives of video compression, highlighting open problems and research directions.
Cited: Web of Science (3)
Image driven shape deformation using styles
Guang-hua TAN, Wei CHEN, Li-gang LIU
Front. Inform. Technol. Electron. Eng.    2010, 11 (1): 27-35.   DOI: 10.1631/jzus.C0910089
Abstract | PDF (452KB) | 1749 downloads
In this paper, we propose an image driven shape deformation approach for stylizing a 3D mesh using styles learned from existing 2D illustrations. Our approach models a 2D illustration as a planar mesh and represents the shape styles with four components: the object contour, the context curves, user-specified features and local shape details. After the correspondence between the input model and the 2D illustration is established, shape stylization is formulated as a style-constrained differential mesh editing problem. A distinguishing feature of our approach is that it allows users to directly transfer styles from hand-drawn 2D illustrations with individual perception and cognition, which are difficult to identify and create with 3D modeling and editing approaches. We present a sequence of challenging examples including unrealistic and exaggerated paintings to illustrate the effectiveness of our approach.
Cited: Web of Science (2)
A new protocol of wide use for e-mail with perfect forward secrecy
Tzung-her CHEN, Yan-ting WU
Front. Inform. Technol. Electron. Eng.    2010, 11 (1): 74-78.   DOI: 10.1631/jzus.A0910126
Abstract | PDF (219KB) | 1910 downloads
Recently, Sun et al. (2005) highlighted the essential property of perfect forward secrecy (PFS) for e-mail protocols when a higher security level is desirable. Their protocols, however, take only a single e-mail server into account, whereas it is much more common for the sender and the recipient to register at different e-mail servers. Compared to existing protocols, the protocol proposed in this paper takes this multi-server scenario into account. It is designed to achieve PFS and end-to-end security, and to satisfy the requirements of confidentiality, origin, integrity and easy key management. A comparison in terms of functionality and computational efficiency demonstrates the superiority of the present scheme.
Cited: Web of Science (1)
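For readers unfamiliar with PFS, the following toy is not the paper's protocol; it only illustrates the mechanism that typically provides the property, a fresh ephemeral Diffie-Hellman exchange per message. The tiny group (p = 23) is for readability and is cryptographically useless.

```python
import secrets
from hashlib import sha256

p, g = 23, 5                                   # toy group; real use needs >=2048-bit MODP

def ephemeral_keypair():
    x = secrets.randbelow(p - 2) + 1           # fresh secret for every message
    return x, pow(g, x, p)

a, A = ephemeral_keypair()                     # sender side
b, B = ephemeral_keypair()                     # recipient side
k_sender = sha256(str(pow(B, a, p)).encode()).digest()
k_recipient = sha256(str(pow(A, b, p)).encode()).digest()
assert k_sender == k_recipient                 # shared per-message key
# a and b are erased after use, so a later compromise of long-term keys
# cannot reconstruct this session key -- the essence of PFS.
```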
Computer vision based eyewear selector
Oscar DÉNIZ, Modesto CASTRILLÓN, Javier LORENZO, Luis ANTÓN, Mario HERNANDEZ, Gloria BUENO
Front. Inform. Technol. Electron. Eng.    2010, 11 (2): 79-91.   DOI: 10.1631/jzus.C0910377
Abstract | PDF (1850KB) | 3456 downloads
The widespread availability of portable computing power and inexpensive digital cameras is opening up new possibilities for retailers in some markets. One example is optical shops, where a number of systems exist to facilitate eyeglasses selection. Such systems are all the more necessary as the market is saturated with an increasingly complex array of lenses, frames, coatings, tints, and photochromic and polarizing treatments. The research challenges encompass computer vision, multimedia and human-computer interaction, and cost is also important for widespread product acceptance. This paper describes a low-cost system that allows the user to visualize different glasses models in live video. The user can also move the glasses to adjust their position on the face. The system, which runs at 9.5 frames/s on general-purpose hardware, has a homeostatic module that keeps image parameters under control, achieved by using a camera with motorized zoom, iris, white balance, etc. This feature can be especially useful in environments with changing illumination and shadows, such as an optical shop. The system also includes a face and eye detection module and a glasses management module.
Cited: Web of Science (3)
Antenna-in-package system integrated with meander line antenna based on LTCC technology
Gang DONG, Wei XIONG, Zhao-yao WU, Yin-tang YANG
Front. Inform. Technol. Electron. Eng.    2016, 17 (1): 67-73.   DOI: 10.1631/FITEE.1500167
Abstract | HTML | PDF (2719KB) | 772 downloads

We present an antenna-in-package system integrated with a meander line antenna based on low temperature co-fired ceramic (LTCC) technology. The proposed system employs a meander line patch antenna, a packaging layer, and a laminated multi-chip module (MCM) for the integration of integrated circuit (IC) bare chips. A microstrip feed line is used to reduce the interaction between the patch and the package, and a via hole structure is designed and analyzed to decrease electromagnetic coupling. The meander line antenna achieved a bandwidth of 220 MHz centered at 2.4 GHz, a maximum gain of 2.2 dB, and a radiation efficiency of about 90% over its operational frequency range. The whole system, with a small size of 20.2 mm × 6.1 mm × 2.6 mm, can be easily realized by a standard LTCC process. The system was fabricated, and the experimental results agreed well with the simulations.
Multi-instance learning for software quality estimation in object-oriented systems: a case study
Peng HUANG, Jie ZHU
Front. Inform. Technol. Electron. Eng.    2010, 11 (2): 130-138.   DOI: 10.1631/jzus.C0910084
Abstract | PDF (160KB) | 2273 downloads
We investigate object-oriented (OO) software quality estimation from a multi-instance (MI) perspective. Each set of classes related by inheritance, called a 'class hierarchy', is regarded as a bag, and each class in the set as an instance. The learning task is to estimate the label of unseen bags, i.e., the fault-proneness of untested class hierarchies. A fault-prone class hierarchy contains at least one fault-prone (negative) class, while a non-fault-prone (positive) one has no negative class. Based on the modification records (MRs) of previous project releases and OO software metrics, the fault-proneness of an untested class hierarchy can be predicted. Several MI learning algorithms were evaluated on five datasets collected from an industrial software project. Among them, the kernel method using a dedicated MI-kernel was the best at accurately predicting the fault-proneness of the class hierarchies. In addition, compared with a supervised support vector machine (SVM) algorithm, the MI-kernel method was still competitive, at much lower cost.
Cited: Web of Science (2)
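A generic MI set kernel of the kind evaluated in such studies can be sketched as follows; this is our illustration, not the paper's exact MI-kernel, and the RBF base kernel and size normalization are assumptions.

```python
import numpy as np

def rbf(x, y, gamma=0.5):
    return np.exp(-gamma * np.sum((x - y) ** 2))

def mi_kernel(bag_a, bag_b, gamma=0.5):
    """Kernel between two bags = size-normalized sum of instance-level kernels."""
    k = sum(rbf(x, y, gamma) for x in bag_a for y in bag_b)
    return k / (len(bag_a) * len(bag_b))

# Each bag is a class hierarchy; each instance is one class's OO-metric vector.
bag1 = [np.array([0.2, 1.0]), np.array([0.4, 0.8])]
bag2 = [np.array([1.5, 0.1])]
print(round(mi_kernel(bag1, bag2), 4))
# With scikit-learn, fit SVC(kernel='precomputed') on the Gram matrix
# G[i, j] = mi_kernel(bags[i], bags[j]) and the bag-level labels.
```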
Improved switching based filter for protecting thin lines of color images
Chang-cheng WU, Chun-yu ZHAO, Da-yue CHEN
Front. Inform. Technol. Electron. Eng.    2010, 11 (1): 36-44.   DOI: 10.1631/jzus.C0910145
Abstract | PDF (581KB) | 1998 downloads
The classical vector median filter (VMF) has been widely used to remove impulse noise from color images. However, since the VMF cannot identify thin lines during the denoising process, many thin lines may be removed as noise. This serious problem can be solved by the newly proposed filter, which uses a noise detector to find these thin lines and keep them unchanged. In this approach, noise detection at the currently processed pixel counts the pixels close to it among its eight neighbors and in an expanded window, to decide whether the current pixel is corrupted by impulse noise. By exploiting previous outputs, the algorithm improves performance in detecting and canceling impulse noise. Extensive experiments indicate that this approach can remove impulse noise from a color image without distorting useful information.
Cited: Web of Science (1)
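For reference, the classical VMF that the scheme builds on outputs, within each window, the vector whose summed distance to all other vectors is smallest. A minimal sketch follows; the paper's thin-line detector, which decides when to bypass this filter, is not reproduced.

```python
import numpy as np

def vmf(window):
    """window: (k, 3) array of RGB vectors; return the vector median."""
    d = np.linalg.norm(window[:, None, :] - window[None, :, :], axis=2)
    return window[np.argmin(d.sum(axis=1))]

def filter_image(img, r=1):
    out = img.copy()
    for i in range(r, img.shape[0] - r):
        for j in range(r, img.shape[1] - r):
            win = img[i-r:i+r+1, j-r:j+r+1].reshape(-1, 3).astype(float)
            out[i, j] = vmf(win)               # replace pixel by its window's median
    return out

img = np.random.default_rng(0).integers(0, 256, (16, 16, 3)).astype(np.uint8)
print(filter_image(img).shape)                 # (16, 16, 3)
```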
Extracting hand articulations from monocular depth images using curvature scale space descriptors
Shao-fan WANG, Chun LI, De-hui KONG, Bao-cai YIN
Front. Inform. Technol. Electron. Eng.    2016, 17 (1): 41-54.   DOI: 10.1631/FITEE.1500126
Abstract | HTML | PDF (2840KB) | 529 downloads

We propose a framework of hand articulation detection from a monocular depth image using curvature scale space (CSS) descriptors. We extract the hand contour from an input depth image, and obtain the fingertips and finger-valleys of the contour using the local extrema of a modified CSS map of the contour. Then we recover the undetected fingertips according to the local change of depths of points in the interior of the contour. Compared with traditional appearance-based approaches using either angle detectors or convex hull detectors, the modified CSS descriptor extracts the fingertips and finger-valleys more precisely since it is more robust to noisy or corrupted data; moreover, the local extrema of depths recover the fingertips of bending fingers well while traditional appearance-based approaches hardly work without matching models of hands. Experimental results show that our method captures the hand articulations more precisely compared with three state-of-the-art appearance-based approaches.

Dr. Hadoop: an infinite scalable metadata management for Hadoop—How the baby elephant becomes immortal
Dipayan DEV, Ripon PATGIRI
Front. Inform. Technol. Electron. Eng.    2016, 17 (1): 15-31.   DOI: 10.1631/FITEE.1500015
Abstract | HTML | PDF (1278KB) | 1202 downloads

In this exabyte-scale era, data increase at an exponential rate, generating in turn a massive amount of metadata in the file system. Hadoop is the most widely used framework for dealing with big data, but due to this growth in metadata, its efficiency has been questioned numerous times by researchers. It is therefore essential to create an efficient and scalable metadata management system for Hadoop. Hash-based mapping and subtree partitioning are the usual distributed metadata management schemes. Subtree partitioning does not uniformly distribute the workload among the metadata servers, and metadata needs to be migrated to keep the load roughly balanced. Hash-based mapping suffers from a constraint on the locality of metadata, though it uniformly distributes the load among NameNodes, the metadata servers of Hadoop. In this paper, we present a circular metadata management mechanism named dynamic circular metadata splitting (DCMS). DCMS preserves metadata locality using consistent hashing and locality-preserving hashing, keeps replicated metadata for excellent reliability, and dynamically distributes metadata among the NameNodes to keep the load balanced. The NameNode is the centralized heart of Hadoop, keeping the directory tree of all files; its failure constitutes a single point of failure (SPOF). DCMS removes Hadoop's SPOF and provides efficient and scalable metadata management. The new framework is named 'Dr. Hadoop' after the names of the authors.
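A minimal consistent-hashing ring, the primitive DCMS builds on, can be sketched as below (our illustration; DCMS itself adds locality-preserving hashing, replication, and dynamic load balancing on top). The key property is that adding or removing a NameNode remaps only the keys on its arc of the ring.

```python
import bisect
from hashlib import md5

def h(key):
    return int(md5(key.encode()).hexdigest(), 16)

class Ring:
    def __init__(self, nodes, vnodes=64):
        # each NameNode gets `vnodes` virtual points for smoother balancing
        self.points = sorted((h(f"{n}#{v}"), n) for n in nodes
                             for v in range(vnodes))
        self.keys = [p for p, _ in self.points]

    def lookup(self, path):
        """Which NameNode owns this file path's metadata?"""
        i = bisect.bisect(self.keys, h(path)) % len(self.points)
        return self.points[i][1]

ring = Ring(["nn1", "nn2", "nn3"])
print(ring.lookup("/user/alice/part-00042"))
```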
Improving the efficiency of magnetic coupling energy transfer by etching fractal patterns in the shielding metals
Qing-feng LI, Shao-bo CHEN, Wei-ming WANG, Hong-wei HAO, Lu-ming LI
Front. Inform. Technol. Electron. Eng.    2016, 17 (1): 74-82.   DOI: 10.1631/FITEE.1500114
Abstract | HTML | PDF (905KB) | 777 downloads

Thin metal sheets are often located in the coupling paths of magnetic coupling energy transfer (MCET) systems. Eddy currents in the metals reduce the energy transfer efficiency and can even present safety risks. This paper describes the use of fractal patterns etched in the metals to suppress the eddy currents and improve the efficiency. Simulation and experimental results show that this approach is very effective. The fractal patterns should satisfy three conditions: they break the metal edge, are etched in the high-intensity magnetic field region, and are etched through the metal in the thickness direction. Different fractal patterns lead to different results. By altering the eddy current distribution, the fractal pattern slots reduce the eddy current losses when the metals show resistance effects, and suppress the induced magnetic field in the metals when the metals show inductance effects. Fractal pattern slots in multilayer high-conductivity metals (e.g., Cu) reduce the induced magnetic field intensity significantly. Furthermore, transfer power, transfer efficiency, receiving efficiency, and eddy current losses all increase with the number of etched layers. These results can help MCET achieve efficient energy transfer and safe use in metal-shielded equipment.
Adaptive fuzzy integral sliding mode velocity control for the cutting system of a trench cutter
Qi-yan TIAN, Jian-hua WEI, Jin-hui FANG, Kai GUO
Front. Inform. Technol. Electron. Eng.    2016, 17 (1): 55-66.   DOI: 10.1631/FITEE.15a0160
Abstract | HTML | PDF (1392KB) | 621 downloads

This paper presents a velocity controller for the cutting system of a trench cutter (TC). The cutting velocity of a cutting system is affected by the unknown load characteristics of rock and soil, and geological conditions vary with time. Due to the complex load characteristics of rock and soil, the cutting load torque of a cutter is related to the geological conditions and the feeding velocity of the cutter. Moreover, a cutter's dynamic model is subject to uncertainties with unknown effects on its behavior. To deal with these characteristics of the cutting system, a novel adaptive fuzzy integral sliding mode control (AFISMC) is designed for controlling cutting velocity. It combines the robustness of an integral sliding mode controller with the adaptive adjustment of an adaptive fuzzy controller, and the cutting velocity controller is synthesized using the backstepping technique. The stability of the whole system, including the fuzzy inference system, the integral sliding mode controller, and the cutting system, is proven using Lyapunov theory. Experiments have been conducted on a TC test bench with the AFISMC under different operating conditions. The results demonstrate that the proposed AFISMC gives superior and robust velocity tracking performance.
Efficient dynamic pruning on largest scores first (LSF) retrieval
Kun JIANG, Yue-xiang YANG
Front. Inform. Technol. Electron. Eng.    2016, 17 (1): 1-14.   DOI: 10.1631/FITEE.1500190
Abstract | HTML | PDF (451KB) | 639 downloads

Inverted index traversal techniques have been studied to address the query processing performance challenges of web search engines, but still leave much room for improvement. In this paper, we focus on inverted index traversal on document-sorted indexes and the optimization technique called dynamic pruning, which can efficiently reduce the hardware computational resources required. We propose a novel exhaustive index traversal scheme called largest scores first (LSF) retrieval, in which candidates are first selected from the posting lists of the important query terms with the largest upper-bound scores and then fully scored with the contributions of the remaining query terms. The scheme effectively reduces the memory consumption of existing term-at-a-time (TAAT) retrieval and the candidate selection cost of existing document-at-a-time (DAAT) retrieval, at the expense of revisiting the posting lists of the remaining query terms. Preliminary analysis and implementation show performance comparable to the two well-known baselines. To further reduce the number of postings that need to be revisited, we present efficient rank-safe dynamic pruning techniques based on LSF, including two important optimizations called list omitting (LSF_LO) and partial scoring (LSF_PS) that make full use of query term importance. Finally, experimental results on the TREC GOV2 collection show that our new index traversal approaches reduce query latency by almost 27% over the WAND baseline and produce slightly better results than the MaxScore baseline, while returning the same results as exhaustive evaluation.
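The exhaustive LSF idea reduces to a few lines. This is our paraphrase, without the LSF_LO and LSF_PS pruning optimizations; postings are assumed to be dicts mapping document IDs to score contributions.

```python
def lsf_retrieve(postings, upper_bound, k=10):
    terms = sorted(postings, key=lambda t: upper_bound[t], reverse=True)
    lead, rest = terms[0], terms[1:]
    scored = []
    for doc, s in postings[lead].items():      # candidate selection in one list
        s += sum(postings[t].get(doc, 0.0) for t in rest)  # revisit the others
        scored.append((s, doc))
    return sorted(scored, reverse=True)[:k]

idx = {"web": {1: 1.2, 2: 0.4, 5: 0.9},
       "search": {1: 0.7, 5: 1.1, 9: 0.3}}
print(lsf_retrieve(idx, {"web": 1.2, "search": 1.1}, k=2))  # [(2.0, 5), (1.9, 1)]
```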
Proactive worm propagation modeling and analysis in unstructured peer-to-peer networks
Xiao-song ZHANG, Ting CHEN, Jiong ZHENG, Hua LI
Front. Inform. Technol. Electron. Eng.    2010, 11 (2): 119-129.   DOI: 10.1631/jzus.C0910488
Abstract | PDF (423KB) | 2506 downloads
It is universally acknowledged by network security experts that proactive peer-to-peer (P2P) worms may soon engender serious threats to the Internet infrastructure. These latent threats stimulate modeling and analysis of proactive P2P worm propagation. Based on the classical two-factor model, we propose a novel proactive worm propagation model for unstructured P2P networks (called the four-factor model) that considers four factors: (1) network topology, (2) countermeasures taken by Internet service providers (ISPs) and users, (3) configuration diversity of nodes in the P2P network, and (4) attack and defense strategies. Simulations and experiments show that proactive P2P worms can be slowed down in two ways: improving the configuration diversity of the P2P network, and using powerful rules to protect the most connected nodes from being compromised. The four-factor model provides a better description and prediction of proactive P2P worm propagation.
Cited: Web of Science (8)
Image meshing via hierarchical optimization
Hao XIE, Ruo-feng TONG
Front. Inform. Technol. Electron. Eng.    2016, 17 (1): 32-40.   DOI: 10.1631/FITEE.1500171
Abstract | HTML | PDF (1027KB) | 581 downloads

Vector graphics, as a geometric representation of raster images, have many advantages, e.g., resolution independence and ease of editing. A popular way to convert raster images into vector graphics is image meshing, which aims to find a mesh that represents an image as faithfully as possible. For traditional meshing algorithms, the crux of the problem resides mainly in the high non-linearity and non-smoothness of the objective, which makes it difficult to find a desirable optimal solution. To ameliorate this situation, we present a hierarchical optimization algorithm that solves the problem from coarser levels to finer ones, initializing each level with the solution of the coarser level above it. To further simplify the problem, the original non-convex problem is converted to a linear least squares one, and thus becomes convex, which makes it much easier to solve. A dictionary learning framework is used to combine geometry and topology elegantly, and an alternating scheme is employed to solve both parts. Experiments show that our algorithm runs fast and achieves better results than existing ones for most images.
Optimal array factor radiation pattern synthesis for linear antenna array using cat swarm optimization: validation by an electromagnetic simulator
Gopi Ram, Durbadal Mandal, Sakti Prasad Ghoshal, Rajib Kar
Front. Inform. Technol. Electron. Eng.    2017, 18 (4): 570-577.   DOI: 10.1631/FITEE.1500371
Abstract | PDF (0KB) | 1012 downloads
In this paper, an optimal design of linear antenna arrays with microstrip patch antenna elements is carried out. Cat swarm optimization (CSO) is applied to optimize the control parameters of the radiation pattern of an antenna array. Optimal radiation patterns of isotropic antenna elements are obtained by optimizing the current excitation weight of each element and the inter-element spacing. Antenna arrays of 12, 16, and 20 elements are taken as examples. The arrays are designed using MATLAB computation and validated through Computer Simulation Technology-Microwave Studio (CST-MWS). The simulation results show that CSO can yield optimal designs of linear arrays of patch antenna elements.
Online detection of bursty events and their evolution in news streams
Wei Chen, Chun Chen, Li-jun Zhang, Can Wang, Jia-jun Bu
Front. Inform. Technol. Electron. Eng.    2010, 11 (5): 340-355.   DOI: 10.1631/jzus.C0910245
Abstract | PDF (250KB) | 1341 downloads
Online monitoring of temporally sequenced news streams for interesting patterns and trends has gained popularity in the last decade. In this paper, we study a particular news stream monitoring task: timely detection of bursty events that have happened recently, and discovery of their evolutionary patterns along the timeline. Here, a news stream is represented as feature streams of tens of thousands of features (i.e., keywords; each news story consists of a set of keywords). A bursty event is therefore composed of a group of bursty features, which show bursty rises in frequency as the related event emerges. We give a formal definition of the above problem and present a solution with the following steps: (1) applying an online multi-resolution burst detection method to identify bursty features with different burst durations within a recent time period; (2) clustering bursty features to form bursty events and associating each event with a power value that reflects its burst level; (3) applying an information retrieval method based on cosine similarity to discover an event's evolution (i.e., highly related bursty events in history) along the timeline. We extensively evaluate the proposed methods on the Reuters Corpus Volume 1. Experimental results show that our methods can detect bursty events in a timely way and effectively discover their evolution. The power values used in our model not only measure an event's burst level, or relative importance, at a certain time point, but also show the relative strengths of events along the same evolution.
Cited: Web of Science (5)
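Step (1) can be illustrated with a toy single-feature burst detector; the paper uses a multi-resolution method, whereas this sketch simply flags time points where a keyword's frequency exceeds its trailing mean by a few standard deviations (the history length and sigma are assumptions).

```python
from collections import deque

def burst_points(counts, history=8, sigma=3.0):
    window, bursts = deque(maxlen=history), []
    for t, c in enumerate(counts):
        if len(window) == history:
            mean = sum(window) / history
            var = sum((x - mean) ** 2 for x in window) / history
            if c > mean + sigma * max(var ** 0.5, 1.0):  # floor the std for flat streams
                bursts.append(t)
        window.append(c)
    return bursts

daily = [3, 4, 2, 5, 3, 4, 3, 4, 21, 30, 6, 3]  # keyword frequency per day
print(burst_points(daily))                       # [8, 9]
```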
Optimized simulated annealing algorithm for thinning and weighting large planar arrays
Peng Chen, Bin-jian Shen, Li-sheng Zhou, Yao-wu Chen
Front. Inform. Technol. Electron. Eng.    2010, 11 (4): 261-269.   DOI: 10.1631/jzus.C0910037
Abstract | PDF (439KB) | 1542 downloads
This paper proposes an optimized simulated annealing (SA) algorithm for thinning and weighting large planar arrays in 3D underwater sonar imaging systems. The optimized algorithm is developed for designing a 2D planar array (a rectangular grid with a circular boundary) with a fixed side-lobe peak and a fixed current taper ratio under narrow-band excitation. Four extensions of the SA algorithm and the procedure of the optimized algorithm are described. Two examples of planar arrays are used to assess the efficiency of the optimized method, which achieves similar beam pattern performance with fewer active transducers and faster convergence than previous SA algorithms.
Cited: Web of Science (13)
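A generic SA skeleton for array thinning looks as follows; this is not the paper's optimized variant with its four extensions, and cost is a stand-in objective (here: prefer roughly 50% thinning) where the real design would evaluate the beam pattern's side-lobe peak.

```python
import math, random

def cost(active):
    """Stand-in objective; a real design would score the array's beam pattern."""
    frac = sum(active) / len(active)
    return abs(frac - 0.5)

def anneal(n=64, T=1.0, alpha=0.97, steps=3000, seed=1):
    random.seed(seed)
    state = [1] * n                            # start with all transducers on
    e = cost(state)
    for _ in range(steps):
        cand = state[:]
        cand[random.randrange(n)] ^= 1         # toggle one transducer on/off
        ce = cost(cand)
        if ce < e or random.random() < math.exp((e - ce) / T):
            state, e = cand, ce                # accept downhill always, uphill w.p.
        T *= alpha                             # geometric cooling schedule
    return state, e

best, e = anneal()
print(sum(best), "active elements, cost", round(e, 3))
```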
A sparse matrix model-based optical proximity correction algorithm with model-based mapping between segments and control sites
Bin Lin, Xiao-lang Yan, Zheng Shi, Yi-wei Yang
Front. Inform. Technol. Electron. Eng.    2011, 12 (5): 436-442.   DOI: 10.1631/jzus.C1000219
Abstract | PDF (469KB) | 2113 downloads
Optical proximity correction (OPC) is a key step in modern integrated circuit (IC) manufacturing. The quality of model-based OPC (MB-OPC) is directly determined by the segment offsets obtained after OPC processing. In conventional MB-OPC, however, the intensity of a control site is adjusted only by the movement of its corresponding segment; this scheme is no longer accurate enough as the lithography process advances. On the other hand, matrix MB-OPC is too time-consuming to be practical. In this paper, we propose a new sparse matrix MB-OPC algorithm with model-based mapping between segments and control sites. We put forward the concept of a 'sensitive area': when the Jacobian matrix used in matrix MB-OPC is evaluated, only the elements that correspond to segments in the sensitive area of each control site need to be calculated, while the others can be set to 0. The new algorithm effectively improves the sparsity of the Jacobian matrix, and hence reduces the computation. Both theoretical analysis and experiments show that sparse matrix MB-OPC with model-based mapping is more accurate than conventional MB-OPC, and much faster than matrix MB-OPC while maintaining high accuracy.
Cited: Web of Science (4)
An efficient hardware design for HDTV H.264/AVC encoder
Liang Wei, Dan-dan Ding, Juan Du, Bin-bin Yu, Lu Yu
Front. Inform. Technol. Electron. Eng.    2011, 12 (6): 499-506.   DOI: 10.1631/jzus.C1000201
Abstract | PDF (187KB) | 2532 downloads
This paper presents a hardware-efficient high definition television (HDTV) encoder for H.264/AVC. We use a two-level mode decision (MD) mechanism to reduce complexity while maintaining performance, and design a sharable architecture for normal-mode fractional motion estimation (NFME), special-mode fractional motion estimation (SFME), and luma motion compensation (LMC) to decrease hardware cost. Based on these techniques, we adopt a four-stage macro-block pipeline scheme with an efficient memory management strategy, which greatly reduces on-chip memory and bandwidth requirements. The proposed encoder uses about 1126k gates, with an average Bjontegaard-delta peak signal-to-noise ratio (BD-PSNR) decrease of 0.5 dB compared with JM15.0. It fully satisfies real-time encoding of 1080p video at 30 frames/s for the H.264/AVC high profile.
Cited: Web of Science (2)
Cross-media analysis and reasoning: advances and directions
Yu-xin Peng, Wen-wu Zhu, Yao Zhao, Chang-sheng Xu, Qing-ming Huang, Han-qing Lu, Qing-hua Zheng, Tie-jun Huang, Wen Gao
Front. Inform. Technol. Electron. Eng.    2017, 18 (1): 44-57.   DOI: 10.1631/FITEE.1601787
Abstract | PDF (0KB) | 533 downloads
Cross-media analysis and reasoning is an active research area in computer science, and a promising direction for artificial intelligence. However, to the best of our knowledge, no existing work has summarized the state-of-the-art methods for cross-media analysis and reasoning or presented advances, challenges, and future directions for the field. To address these issues, we provide an overview as follows: (1) theory and model for cross-media uniform representation; (2) cross-media correlation understanding and deep mining; (3) cross-media knowledge graph construction and learning methodologies; (4) cross-media knowledge evolution and reasoning; (5) cross-media description and generation; (6) cross-media intelligent engines; and (7) cross-media intelligent applications. By presenting approaches, advances, and future directions in cross-media analysis and reasoning, our goal is not only to draw more attention to the state-of-the-art advances in the field, but also to provide technical insights by discussing the challenges and research directions in these areas.
EDA: an enhanced dual-active algorithm for location privacy preservation in mobile P2P networks
Yan-zhe Che, Kevin Chiew, Xiao-yan Hong, Qiang Yang, Qin-ming He
Front. Inform. Technol. Electron. Eng.    2013, 14 (5): 356-373.   DOI: 10.1631/jzus.C1200267
Abstract | PDF (0KB) | 1027 downloads
Various solutions have been proposed to enable mobile users to access location-based services while preserving their location privacy. Some of these solutions are based on a centralized architecture with the participation of a trustworthy third party, whereas some other approaches are based on a mobile peer-to-peer (P2P) architecture. The former approaches suffer from the scalability problem when networks grow large, while the latter have to endure either low anonymization success rates or high communication overheads. To address these issues, this paper deals with an enhanced dual-active spatial cloaking algorithm (EDA) for preserving location privacy in mobile P2P networks. The proposed EDA allows mobile users to collect and actively disseminate their location information to other users. Moreover, to deal with the challenging characteristics of mobile P2P networks, e.g., constrained network resources and user mobility, EDA enables users (1) to perform a negotiation process to minimize the number of duplicate locations to be shared so as to significantly reduce the communication overhead among users, (2) to predict user locations based on the latest available information so as to eliminate the inaccuracy problem introduced by using some out-of-date locations, and (3) to use a latest-record-highest-priority (LRHP) strategy to reduce the probability of broadcasting fewer useful locations. Extensive simulations are conducted for a range of P2P network scenarios to evaluate the performance of EDA in comparison with the existing solutions. Experimental results demonstrate that the proposed EDA can improve the performance in terms of anonymity and service time with minimized communication overhead.
Modified extremal optimization for the hard maximum satisfiability problem
Guo-qiang Zeng, Yong-zai Lu, Wei-Jie Mao
Front. Inform. Technol. Electron. Eng.    2011, 12 (7): 589-596.   DOI: 10.1631/jzus.C1000313
Abstract | PDF (196KB) | 1884 downloads
Based on our recent study on probability distributions for evolution in extremal optimization (EO), we propose a modified framework called EOSAT to approximate ground states of the hard maximum satisfiability (MAXSAT) problem, a generalized version of the satisfiability (SAT) problem. The basic idea behind EOSAT is to generalize the evolutionary probability distribution in the Bose-Einstein-EO (BE-EO) algorithm, competing with popular algorithms such as simulated annealing and WALKSAT. Experimental results on hard MAXSAT instances from SATLIB show that the modified algorithms are superior to the original BE-EO algorithm.
Cited: Web of Science (8)
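A compact τ-EO sketch for MAXSAT conveys the flavor of such methods (generic extremal optimization, not the EOSAT variant proposed above): variable fitness counts the unsatisfied clauses a variable appears in, variables are ranked worst-first, and one is flipped with power-law probability.

```python
import random

def unsat_count(clauses, assign):
    return sum(not any(assign[abs(l)] == (l > 0) for l in cl) for cl in clauses)

def eo_maxsat(clauses, n_vars, tau=1.4, steps=20000, seed=7):
    random.seed(seed)
    assign = {v: random.random() < 0.5 for v in range(1, n_vars + 1)}
    best, best_unsat = dict(assign), unsat_count(clauses, assign)
    for _ in range(steps):
        fit = {v: 0 for v in assign}           # higher = in more unsatisfied clauses
        for cl in clauses:
            if not any(assign[abs(l)] == (l > 0) for l in cl):
                for l in cl:
                    fit[abs(l)] += 1
        ranked = sorted(assign, key=lambda v: -fit[v])       # worst variable first
        weights = [(k + 1) ** -tau for k in range(len(ranked))]
        v = random.choices(ranked, weights)[0] # power-law pick favouring the worst
        assign[v] = not assign[v]
        u = unsat_count(clauses, assign)
        if u < best_unsat:
            best, best_unsat = dict(assign), u
    return best, best_unsat

cnf = [(1, -2, 3), (-1, 2), (2, -3), (-2, 3), (1, 2, -3)]   # literals as signed ints
print(eo_maxsat(cnf, 3, steps=500)[1], "unsatisfied clauses")  # typically 0
```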
