Associate Professor
Concordia University, Concordia Institute of Information Systems Engineering (CIISE)
I received the B.Sc. degree from the ECE Department at the University of Tehran in 2005, the M.Sc. degree from the BME Department at Amirkabir University of Technology (Tehran Polytechnic) in 2007, and the Ph.D. degree from the EECS Department at York University in 2013. From 2013 to 2015, I was a Post-Doctoral Fellow at the Multimedia Lab in the ECE Department at the University of Toronto, working with Professor K.N. Plataniotis.
I am currently an Associate Professor with the Concordia Institute of Information Systems Engineering (CIISE) at Concordia University, Montreal, Canada. I joined CIISE at the rank of Assistant Professor on December 1, 2015, and was promoted to the rank of Associate Professor with tenure effective June 1, 2020 (after 4 years and 6 months).
I am serving the IEEE Signal Processing Society in the capacity of Director of Membership Services. I am also the Vice-Chair of the IEEE Signal Processing Montreal Chapter and the General Chair of the Symposium on Advanced Bio-Signal Processing and Machine Learning for Medical Cyber-Physical Systems under the 2018 IEEE Global Signal Processing Conference (GlobalSIP).
I was the Lead Guest Editor for a Special Issue of the IEEE Transactions on Signal and Information Processing over Networks entitled "Distributed Signal Processing for Security and Privacy in Networked Cyber-Physical Systems"; the General Chair of the Symposium on Advanced Bio-Signal Processing and Rehabilitation and Assistive Systems under the 2017 IEEE Global Signal Processing Conference (GlobalSIP); a member of the Organizing Committee of the Special Session on Bio Signal Processing for Movement Assessment, Neuro-Rehabilitation and Assistive Technologies under the 2017 IEEE International Conference on Systems, Man and Cybernetics; and the Organizing Committee Chair of the "IEEE Signal Processing Society Winter School on Distributed Signal Processing for Secure Cyber-Physical Systems".
I am a Senior Member of the IEEE and a member of the IEEE Signal Processing Society Membership Board, Publication Board, and Conference Board.
Concordia University, Concordia Institute of Information Systems Engineering (CIISE)
University of Toronto, Department of Electrical and Computer Engineering
Ph.D. in Electrical Engineering
York University Department of Electrical Engineering and Computer Science
Master of Science in Biomedical Engineering
Amirkabir University of Technology (Tehran Polytechnic)
Bachelor of Electrical Engineering
University of Tehran
Symposium on "Symposium on Advanced Bio-Signal Processing and Machine Learning for Medical Cyber-Physical Systems " under 2018 IEEE Global Signal Processing Conference (GlobalSIP).
Symposium on Information Processing, Learning and Optimization for Smart Energy Infrastructures, under 2018 IEEE Global Signal Processing Conference (GlobalSIP).
Special Issue entitled Distributed Signal Processing for Security and Privacy in Networked Cyber-Physical Systems published in IEEE Transactions on Signal and Information Processing Over Networks
Symposium on "Symposium on Advanced Bio-Signal Processing and Rehabilitation and Assistive Systems" under 2017 IEEE Global Signal Processing Conference (GlobalSIP).
Special Session entitled Bio Signal Processing for Movement Assessment, Neuro-Rehabilitation and Assistive Technologies, under 2017 IEEE International Conference on Systems, Man and Cybernetics (SMC).
Symposium on Signal and Information Processing for Finance and Business, under 2017 IEEE Global Signal Processing Conference (GlobalSIP).
IEEE Global Communications Conference (GLOBECOM 2016), Workshop on Cyber-Physical Smart Grid Security and Resilience (SGSR).
International Conference on Current Research in Signal Processing & Communications (SPC 2016).
2015 IEEE Student Conference on Research and Development (SCOReD 2015).
IEEE Transactions on Vehicular Technology.
IEEE Transactions on Aerospace and Electronic Systems.
IEEE Signal Processing Magazine.
IEEE Computer Magazine.
IEEE Transactions on Signal Processing.
IEEE Signal Processing Letters.
IEEE Transactions on Medical Imaging.
IEEE Transactions on Signal and Information Processing over Networks.
Signal Processing.
Journal of the Franklin Institute.
International Journal of Electrical Power and Energy Systems.
Asian Journal of Control.
We live in an era of data deluge. The volume, variety, and velocity of data are exploding, and the ability to process such large amounts of information promises to limit the spread of epidemics, learn the dynamics of emergent social-computational systems, and protect critical infrastructures. Of particular interest to this talk is the big data collected from Cyber-Physical Systems (CPSs), which exhibit a wide range of diversities. CPSs are engineering systems with embedded control, communication, and sensing capabilities that can interact with humans through cyber space. Recently, there has been a surge of interest in practical and opportunistic applications of CPSs, including: (i) state prediction for analyzing contingencies and taking preventive actions against possible failures in smart power grids; (ii) optimizing the reliability of CPSs using decentralized sensor resource management techniques; and (iii) surveillance applications for following a reference target in decentralized camera networks. State forecasting is the core component of all these problems and is the focus of this talk.
I am seeking energetic, innovative, and hard-working graduate students interested in working with me on statistical signal/image processing, secure/event-based processing in Cyber-Physical Systems (CPS), information fusion, and communications.
If you are interested in joining my research group, please read my research publications and send me an email including your CV and a list of your research publications (if any). Your email should also indicate: (i) the research area you are interested in and why; (ii) your research plans (if any), e.g., a prepared research proposal; and (iii) your past research experience.
Please note that I will respond to an email only if I am interested in your background. Otherwise, please assume that I am not interested.
A successful candidate should have a strong background in systems and mathematics, strong analytical skills, and familiarity with MATLAB.
Methodologies for quality engineering: Six Sigma, ACE (Achieving Competitive Excellence), Lean engineering, and the ISO 9000 series; comparative study, quality clinic process charts, relentless root cause analysis, mistake proofing, market feedback analysis, process improvement and waste elimination, visual control, standard work and process management, process certification, setup reduction, total productive maintenance; DMAIC and DMADV processes: define phase, project charter, project scoping and planning; measure phase, critical-to-quality requirements, quality functional deployment; analyze phase, functional and process requirements, design requirements, design concepts, high-level design capability elaboration and evaluation; design phase, detailed design capability elaboration and evaluation, failure mode and effects analysis, control and verification plans; verify phase, pilot-scale processes, pilot testing and evaluation, implementation planning, full-scale processes, start-up and testing, performance evaluation, turnover to operations and maintenance, transition to process management, project closure.
This course covers the mathematical background required in many advanced engineering courses and many real-world engineering applications. By the end of this course, students should learn a particular set of mathematical facts and how to apply them; more importantly, they should be able to think logically and mathematically. Five important themes are interwoven in the textbook and lectures: mathematical reasoning, combinatorial analysis, discrete structures, algorithmic thinking, and applications and modeling.
This course covers the mathematical background required in many advanced engineering courses and many real-world engineering applications. By the end of this course, students should learn a particular set of mathematical facts and how to apply them; more importantly, they should be able to think logically and mathematically. Five important themes are interwoven in the textbook and lectures: mathematical reasoning, combinatorial analysis, discrete structures, algorithmic thinking, and applications and modeling.
This course covers the mathematical background required in many advanced engineering courses and many real-world engineering applications. By the end of this course, students should learn a particular set of mathematical facts and how to apply them; more importantly, they should be able to think logically and mathematically. Five important themes are interwoven in the textbook and lectures: mathematical reasoning, combinatorial analysis, discrete structures, algorithmic thinking, and applications and modeling.
Data Mining is a collection of techniques for discovering hidden knowledge in the rapidly growing data in governments, businesses, sciences, internet, and other information sources. Many applications of data mining, however, pose security and privacy threats to the general public. This course studies the security issues caused by the advancement of data mining technologies.
This course teaches basic concepts, models, methods, and tools in maintenance management. The related reliability concepts, deterministic replacement, preventive maintenance, and condition-based maintenance will be discussed. Case studies will be performed.
This course covers the mathematical background required in many advanced engineering courses and many real-world engineering applications. By the end of this course, students should learn a particular set of mathematical facts and how to apply them; more importantly, they should be able to think logically and mathematically. Five important themes are interwoven in the textbook and lectures: mathematical reasoning, combinatorial analysis, discrete structures, algorithmic thinking, and applications and modeling.
This course is about quality assurance in Supply Chain Management. It introduces supply chain principles and quality assurance issues in supply chain systems. The following topics will be covered: definitions, models, and evolution of supply chain management; quality attributes; uncertainty; information technology and decision support systems; e-business transactions; inventory management; verification and security issues; strategic alliances; and sourcing decisions.
This course covers the mathematical background required in many advanced engineering courses and many real-world engineering applications. By the end of this course, students should learn a particular set of mathematical facts and how to apply them; more importantly, they should be able to think logically and mathematically. Five important themes are interwoven in the textbook and lectures: mathematical reasoning, combinatorial analysis, discrete structures, algorithmic thinking, and applications and modeling.
This course examines the fundamental concepts, techniques and tools for risk analysis and decision making strategies in the context of information and systems engineering. This includes qualitative and quantitative risk assessment, risks in systems engineering, environmental risks, security risks; methods of risk analysis, fault trees and event trees; quantification of probabilities, use of data, models, and expert judgements; risks and decisions, interlinking risk analysis with risk management; decision analysis; system analysis and quantification; uncertainty modeling and risk measurement; and project risk management.
This course covers the mathematical background required in many advanced engineering courses and many real-world engineering applications. By the end of this course, students should learn a particular set of mathematical facts and how to apply them; more importantly, they should be able to think logically and mathematically. Five important themes are interwoven in the textbook and lectures: mathematical reasoning, combinatorial analysis, discrete structures, algorithmic thinking, and applications and modeling.
This course teaches basic concepts, models, methods, and tools in maintenance management. The related reliability concepts, deterministic replacement, preventive maintenance, and condition-based maintenance will be discussed. Case studies will be performed.
This course teaches basic concepts, models, methods, and tools in maintenance management. The related reliability concepts, deterministic replacement, preventive maintenance, and condition-based maintenance will be discussed. Case studies will be performed.
This course covers the mathematical background required in many advanced engineering courses and many real-world engineering applications. By the end of this course, students should learn a particular set of mathematical facts and how to apply them; more importantly, they should be able to think logically and mathematically. Five important themes are interwoven in the textbook and lectures: mathematical reasoning, combinatorial analysis, discrete structures, algorithmic thinking, and applications and modeling.
This course teaches basic concepts, models, methods, and tools in maintenance management. The related reliability concepts, deterministic replacement, preventive maintenance, and condition-based maintenance will be discussed. Case studies will be performed.
York University, Toronto, Canada; course offered by the Department of Computer Science and Engineering.
York University, Toronto, Canada; course offered by the Department of Computer Science and Engineering.
York University, Toronto, Canada; course offered by the Department of Computer Science and Engineering.
York University, Toronto, Canada; course offered by the Department of Computer Science and Engineering.
York University, Toronto, Canada; course offered by the Department of Computer Science and Engineering.
York University, Toronto, Canada; course offered by the Department of Computer Science and Engineering.
The global aging phenomenon has increased the number of individuals with age-related neurological movement disorders, including Parkinson's Disease (PD) and Essential Tremor (ET). Pathological Hand Tremor (PHT), which is considered among the most common motor symptoms of such disorders, can severely affect patients' independence and quality of life. To develop advanced rehabilitation and assistive technologies, accurate estimation/prediction of nonstationary PHT is critical; however, the required level of accuracy has not yet been achieved. The lack of sizable datasets and generalizable modeling techniques that can fully represent the spectrotemporal characteristics of the PHT has been a critical bottleneck in attaining this goal. The paper addresses this unmet need by establishing a deep recurrent model to predict and eliminate the PHT component of hand motion. More specifically, we propose a machine learning-based, assumption-free, and real-time PHT elimination framework, the PHTNet, by incorporating deep bidirectional recurrent neural networks. The PHTNet is developed over a hand motion dataset of 81 ET and PD patients collected systematically in a movement disorders clinic over 3 years. The PHTNet is the first intelligent systems model developed on this scale for PHT elimination that maximizes the resolution of estimation and allows for prediction of future and upcoming sub-movements.
Smart manufacturing and the industrial Internet of Things (IoT) have transformed the maintenance management concept from the conventional perspective of being reactive to being predictive. Recent advancements in this regard have resulted in the development of effective Prognostic Health Management (PHM) frameworks, which, coupled with deep learning architectures, have produced sophisticated techniques for Remaining Useful Life (RUL) estimation. Accurately predicting the RUL significantly empowers the decision-making process and allows deployment of advanced maintenance strategies to improve the overall outcome in a timely fashion. In light of this, the paper proposes a novel noisy deep learning architecture consisting of multiple models designed in parallel, referred to as the noisy and hybrid deep architecture for remaining useful life estimation (NBLSTM). The proposed NBLSTM is designed by integration of two parallel noisy deep architectures, i.e., a noisy Convolutional Neural Network (CNN) to extract spatial features, and a noisy Bidirectional Long Short-Term Memory (BLSTM) to extract temporal information by learning the dependencies of input data in both forward and backward directions. The two paths are connected through a fusion center consisting of fully connected multilayers, which combines their outputs and forms the target predicted RUL. To improve the robustness of the model, the NBLSTM is trained on noisy input signals, leading to significantly robust and enhanced generalization behavior. The proposed NBLSTM model is evaluated and tested on the Commercial Modular Aero-Propulsion System Simulation (CMAPSS) dataset provided by NASA, illustrating state-of-the-art results in comparison to its counterparts.
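As a rough illustration of this parallel-paths idea (a CNN branch for spatial features, a bidirectional LSTM branch for temporal features, and a fully connected fusion head, trained on noise-perturbed inputs), here is a minimal, hedged PyTorch sketch. The layer sizes, noise level, and the 14-sensor/30-step window shape are illustrative assumptions, not the parameters used in the paper.

```python
# Minimal sketch (not the authors' released code) of a parallel noisy
# CNN + bidirectional LSTM architecture with a fully connected fusion
# head for RUL regression, assuming inputs shaped (batch, time, sensors).
import torch
import torch.nn as nn

class NoisyHybridRUL(nn.Module):
    def __init__(self, n_sensors=14, hidden=64, noise_std=0.01):
        super().__init__()
        self.noise_std = noise_std  # std of Gaussian noise injected during training
        # Spatial path: 1-D convolutions over the time axis, channels = sensors
        self.cnn = nn.Sequential(
            nn.Conv1d(n_sensors, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(32, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),  # -> (batch, 32, 1)
        )
        # Temporal path: bidirectional LSTM over the sensor sequence
        self.blstm = nn.LSTM(input_size=n_sensors, hidden_size=hidden,
                             num_layers=1, batch_first=True, bidirectional=True)
        # Fusion centre: fully connected layers combining both paths
        self.fusion = nn.Sequential(
            nn.Linear(32 + 2 * hidden, 64), nn.ReLU(),
            nn.Linear(64, 1),
        )

    def forward(self, x):  # x: (batch, time, sensors)
        if self.training and self.noise_std > 0:
            x = x + self.noise_std * torch.randn_like(x)   # noisy training inputs
        spatial = self.cnn(x.transpose(1, 2)).squeeze(-1)  # (batch, 32)
        temporal, _ = self.blstm(x)                        # (batch, time, 2*hidden)
        temporal = temporal[:, -1, :]                      # last time step
        return self.fusion(torch.cat([spatial, temporal], dim=1)).squeeze(-1)

model = NoisyHybridRUL()
rul = model(torch.randn(8, 30, 14))  # predicted RUL for a batch of 8 windows
```

The same two-branch-plus-fusion layout also conveys the gist of the HDNN framework described further below, with the details differing between the two papers.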
With the rapid emergence of the Internet of Things (IoT), we are more and more surrounded by smart connected devices with integrated sensing, processing, and communication capabilities. Bluetooth Low Energy (BLE), also referred to as Bluetooth Smart, is considered the mainstream technology for identification and localization/tracking in IoT applications. While single-model BLE-based tracking has been investigated from different aspects, applications of multi-model (hybrid) solutions are still in their infancy. In this regard, the letter proposes a novel BLE-based tracking framework, referred to as STUPEFY, which incorporates set-valued information within the box particle filtering context. More specifically, the proposed multiple-model STUPEFY framework consists of three integrated modules, i.e., a Smoothing Module based on Kalman filtering to reduce the RSSI fluctuations and facilitate comparison of Gaussian models of the RSSI values in distribution with the learned ones; a learning-based model (Cooperation Module) utilized in an intuitive fashion to provide/construct a coarse estimate of the target's location together with the smallest axes-aligned box containing the ellipsoid associated with each zone's learned RSSI distribution; and a novel set-valued box particle filtering (SBPF) approach (Micro-Localization Module). The proposed STUPEFY framework is evaluated on real BLE datasets, and the results illustrate significant potential for improving the overall BLE-based achievable tracking accuracy.
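The role of the Smoothing Module can be pictured with a simple scalar Kalman filter over the RSSI stream; this is only a sketch under a random-walk signal model with illustrative noise parameters, not the exact filter used in STUPEFY.

```python
# Hedged sketch of scalar Kalman smoothing of raw RSSI readings; the
# process/measurement noise values below are illustrative placeholders.
import numpy as np

def smooth_rssi(rssi, q=0.05, r=4.0):
    """Random-walk Kalman filter over a 1-D RSSI stream (dBm)."""
    x, p = rssi[0], 1.0            # initial state estimate and variance
    out = [x]
    for z in rssi[1:]:
        p = p + q                  # predict (random-walk state model)
        k = p / (p + r)            # Kalman gain
        x = x + k * (z - x)        # update with the new RSSI reading
        p = (1.0 - k) * p
        out.append(x)
    return np.array(out)

raw = -70 + 6 * np.random.randn(100)   # noisy RSSI trace around -70 dBm
smoothed = smooth_rssi(raw)
```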
Storing the most popular contents in local caching nodes, including Femto Access Points (FAPs) and user devices supported by Device-to-Device (D2D) communications, is a promising solution to deal with the backhaul bottleneck and improve content download latency. Toward this goal, we present an enterprise wireless network consisting of both FAPs and Mobile Users (MUs), in which MUs communicate directly with each other via D2D communication. Despite all the benefits that come with D2D communication, this type of connection drains the battery of users' devices. On the other hand, given that user mobility is one of the inherent features of wireless networks, particularly in a large enterprise where femtocells operate in an open-access mode, the small transmission range of FAPs triggers frequent handovers. Taking the above challenges into account, we propose a novel Mobility-Aware Femtocaching scheme based on Handover (MAFH) in order to reduce the number of unnecessary handovers and increase the battery life of users' devices. In this regard, the best caching node is selected by considering the velocity of users as the decision criterion for downloading the required content. Moreover, a random walk model is assumed to describe the mobility patterns of users in order to implement a practical model. The effectiveness of the proposed MAFH algorithm is evaluated in terms of the cache hit ratio, transferred byte volume, connecting time, content delivery time, the number of handovers, and the energy consumption of clients.
An unavoidable revolution is upon us, changing the mainstream of electricity supply from a thermal-dominant profile to a renewable-supplied grid. This revolution comes with vital benefits for the sustainability of electricity generation. However, several challenges, especially at high levels of renewable energy source (RES) penetration, need to be addressed. In particular, due to economic reasons and stability issues, RESs are underdogs in competition with fossil-fuel generation unless proper incentives, added to consumers' electricity bills, are provided. The other challenge is their intermittent output, which compromises grid efficiency and increases consumers' electricity bills. The aforementioned issues can be resolved using demand response, which is limited by load flexibility. Voluntary load-shedding could help if consumers are willing to shed their unnecessary loads. This paper investigates the impact of the planned outage rate on the incentive required to reach different levels of RES penetration. To illustrate the effectiveness of consumers' contribution to the integration of RESs and utility grids, their collaboration impact is explored on a numerical system constructed based on real historical data. The results demonstrate the positive impact of consumers' contribution through a substantial reduction in the incentive. The resulting saving margin can then be used to evaluate the value of consumers' voluntary load-shedding.
This article proposes a quantitative measure of the load profile that can be used to compare demand response techniques for load shaping. This quantitative measure is a projection of the overall cost for electric power generation planning, and can be used to more effectively guide the determination of dynamic electric energy retail pricing tariffs that can improve the performance of demand response techniques from the overall generation expansion planning and utilization cost perspective. For several years, the peak to average ratio has been the popular choice to quantitatively measure the load profile, although it does not incorporate critical measures such as minimum power demand and load variation. In this paper, several load profiles are synthesized and the cost for their electricity supply is calculated. Then, the correlation coefficient between different statistical measures of the load profiles and the cost of optimal generation expansion planning and utilization is explored. Several statistical factors of the load profile that show a linear relationship with the overall cost are selected and their contribution level to the final measure is determined to form a quantitative load profile measure that accurately reflects the overall supply cost.
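The construction can be pictured with a small numerical sketch: synthesize profiles, compute candidate statistics (the peak-to-average ratio among them), keep the ones that correlate nearly linearly with supply cost, and combine them into a single measure. The specific statistics, threshold, weights, and toy cost model below are illustrative assumptions, not the paper's final measure.

```python
# Illustrative sketch: correlate simple statistics of synthesized load
# profiles with their supply cost and combine the strongly correlated
# ones into one profile measure (placeholder statistics and weighting).
import numpy as np

def profile_stats(load):  # load: 1-D array of hourly demand
    return np.array([
        load.max() / load.mean(),   # peak-to-average ratio
        load.min() / load.mean(),   # normalized minimum demand
        load.std() / load.mean(),   # coefficient of variation
    ])

def build_measure(profiles, costs):
    """profiles: (n_profiles, n_hours); costs: supply cost of each profile."""
    stats = np.array([profile_stats(p) for p in profiles])
    corr = np.array([np.corrcoef(stats[:, j], costs)[0, 1]
                     for j in range(stats.shape[1])])
    keep = np.abs(corr) > 0.8                    # keep near-linear indicators
    weights = corr[keep] / np.abs(corr[keep]).sum()
    return lambda load: float(profile_stats(load)[keep] @ weights)

rng = np.random.default_rng(0)
profiles = rng.uniform(0.5, 1.5, size=(50, 24))
costs = profiles.max(axis=1) + 0.3 * profiles.std(axis=1)   # toy cost proxy
measure = build_measure(profiles, costs)
print(measure(profiles[0]))
```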
Referred to as RQ-CEASE, the paper proposes a resilient framework for quantized, event-triggered, sampled-data, average consensus in multi-agent systems subject to denial-of-service (DoS) attacks. DoS attacks typically attempt to block the measurement and communication channels in the network. Two different event-triggering (ET) approaches are considered in RQ-CEASE based on whether the ET threshold is dependent on or independent of the state dynamics. For each approach, we analytically derive operating conditions (bounds) for the sampling period and ET design parameter guaranteeing the input-to-state stability (ISS) of the network under DoS attacks. In addition, upper bounds on the duration and frequency of DoS attacks are derived within which the network remains operational. For each approach, the maximum possible error from the average consensus value is derived. The resilience of the two RQ-CEASE approaches to DoS attacks, as well as their steady-state consensus error and transmission savings, are compared both analytically and using simulations.
In this paper, we propose a novel approach for event-triggered performance-guaranteed containment control (EPiCC) in linear multi-agent systems. To extend the longevity of the system and reduce the amount of data exchanged, a distributed event-triggered transmission condition is incorporated within the EPiCC implementation. A second objective of EPiCC is to co-design the containment parameters, namely the control gain and transmission threshold, which collectively guarantee an exponential rate of containment. The control gain has a degree of resilience such that containment can be achieved even with some perturbation in its nominally designed value. Using the Lyapunov stability theorem, sufficient conditions for event-triggered exponential containment with a resilient control gain are expressed in terms of a linear matrix inequality optimization. An objective function which incorporates the number of events and the control effort is minimized to compute the design parameters. The containment design stage can be performed in a distributed fashion. The practicability of the event-triggered scheme is studied by proving the exclusion of Zeno behaviour. Numerical simulations quantify the effectiveness of the proposed EPiCC algorithm for multi-agent systems.
Recent advancements in signal processing (SP) and machine learning, coupled with electronic medical record keeping in hospitals and the availability of extensive sets of medical images through internal/external communication systems, have resulted in a recent surge of interest in radiomics. Radiomics, an emerging and relatively new research field, refers to extracting semiquantitative and/or quantitative features from medical images with the goal of developing predictive and/or prognostic models. In the near future, it is expected to be a critical component for integrating image-derived information used for personalized treatment. The conventional radiomics workflow is typically based on extracting predesigned features (also referred to as handcrafted or engineered features) from a segmented region of interest (ROI). Nevertheless, recent advancements in deep learning have inspired trends toward deep-learning-based radiomics (DLRs) (also referred to as discovery radiomics). In addition to the advantages of these two approaches, there are also hybrid solutions that exploit the potential of multiple data sources. Considering the variety of approaches to radiomics, further improvements require a comprehensive and integrated sketch, which is the goal of this article. This article provides a unique interdisciplinary perspective on radiomics by discussing state-of-the-art SP solutions in the context of radiomics.
Pathological hand tremor (PHT) is among the most common movement symptoms of several neurological disorders, including Parkinson's disease and essential tremor. Extracting PHT is of paramount importance in several engineering and clinical applications such as assistive and robotic rehabilitation technologies. In such systems, PHT is modeled as the input noise to the system, and thus there is a surge of interest in estimation and compensation of this noise. Although various works in the literature have attempted to estimate and extract the PHT, in this letter we first argue that the ground truth signal used in existing works to optimize the performance of tremor extraction techniques is not accurate enough, and thus the performance measures reported for prior techniques are not perfectly reliable. In addition, most of the existing tremor extraction techniques impose unrealistic assumptions, which are typically violated in practical settings. This letter proposes a novel technique that, for the first time, incorporates deep bidirectional recurrent neural networks as a processing tool for PHT extraction. Moreover, we devise an intuitively pleasing training strategy that enables the network to perform not only online estimation but also online prediction of the voluntary hand motion in a myopic fashion, which is currently a significantly important unmet need for rehabilitative and assistive robotic technologies designed for patients with pathological tremor.
Ageing critical infrastructure and valuable machinery, together with recent catastrophic incidents such as the collapse of the Morandi bridge, call for an urgent quest to design advanced and innovative data-driven solutions and efficiently incorporate multi-sensor streaming data sources for condition-based maintenance. Remaining Useful Life (RUL) is a crucial measure used in this regard within manufacturing and industrial systems, and its accurate estimation enables improved decision-making for operations and maintenance. Capitalizing on the recent success of multiple-model (also referred to as hybrid or mixture-of-experts) deep learning techniques, the paper proposes a hybrid deep neural network framework for RUL estimation, referred to as the Hybrid Deep Neural Network Model (HDNN). The proposed HDNN framework is the first hybrid deep neural network model designed for RUL estimation that integrates two deep learning models simultaneously and in a parallel fashion. More specifically, in contrast to the majority of existing data-driven prognostic approaches for RUL estimation, which are developed based on a single deep model and can hardly maintain good generalization performance across various prognostic scenarios, the proposed HDNN framework consists of two parallel paths (one LSTM and one CNN) followed by a fully connected multilayer fusion neural network, which acts as the fusion centre combining the output of the two paths to form the target RUL. The HDNN uses the LSTM path to extract temporal features while the CNN is simultaneously utilized to extract spatial features. The proposed HDNN framework is tested on the NASA Commercial Modular Aero-Propulsion System Simulation (C-MAPSS) dataset. Our comprehensive experiments and comparisons with several recently proposed RUL estimation methodologies developed based on the same datasets show that the proposed HDNN framework significantly outperforms all its counterparts in the complicated prognostic scenarios with an increased number of operating conditions and fault modes.
The volume, variety, and velocity of medical imaging data are exploding, making it impractical for clinicians to properly utilize such available information resources in an efficient fashion. At the same time, the interpretation of such a large amount of medical imaging data by humans is significantly error prone, reducing the possibility of extracting informative data. The ability to process such large amounts of data promises to decipher encrypted information within medical images, develop predictive and prognosis models to design personalized diagnosis, allow comprehensive study of tumor phenotype, and allow the assessment of tissue heterogeneity for diagnosis of different types of cancers.
Pathological Hand Tremor (PHT) is among the common symptoms of several neurological movement disorders, which can significantly degrade the quality of life of affected individuals. Besides pharmaceutical and surgical therapies, mechatronic technologies have been utilized to control PHTs. Most of these technologies function based on estimation, extraction, and characterization of tremor movement signals. Real-time extraction of the tremor signal is of paramount importance because of its application in assistive and rehabilitative devices. In this paper, we propose a novel on-line adaptive method which can adjust the hyper-parameters of the filter to the variable characteristics of the tremor. The proposed technique (i.e., WAKE) is composed of a new adaptive Kalman filter and a wavelet transform core to provide indirect prediction of the tremor, one sample ahead of time, to be used for its suppression. In this paper, the design, implementation, and evaluation of WAKE are given. The performance is evaluated on two different datasets. One dataset is recorded from patients with PHTs and the other is a synthetic dataset, developed in this work, that simulates hand tremor under ten different conditions. The results demonstrate a significant improvement in the estimation accuracy in comparison with two well-regarded techniques in the literature.
The paper proposes a novel performance guaranteed sampled-data event-triggered consensus (PSEC) algorithm for linear multi-agent systems configured as directed networks. To reduce information exchanges and preserve communication resources, a sampled-data event detector is incorporated at each agent. Communication between agents is based on the fulfillment of distributed state-dependent event-triggering conditions. PSEC ensures a guaranteed exponential convergence rate and is resilient to norm-bounded uncertainties in control gains resulting from implementation distortions. The Lyapunov–Krasovskii theorem is used to incorporate the performance objectives. The design parameters, namely, the heterogeneous control gains and a transmission threshold, are simultaneously computed using a constrained convex optimization framework with linear matrix inequalities. Numerical simulations based on an experimental spacecraft formation flying multi-agent system quantify the effectiveness of the proposed PSEC approach.
With the evolution of phasor measurement units (PMUs) and the proposition to incorporate a large number of PMUs in future smart grids, it is critical to identify and prevent potential (zero-day) cyber attacks on phasor signals. PMUs are at the forefront of sensor technologies used in the smart grid and produce phasor voltage and current readings, which are complex-valued in nature. In this regard, the paper investigates potential attacks on complex-valued PMU signals and proposes a new paradigm of data-injection attacks, referred to as non-circular attacks. Existing state estimation algorithms and attack monitoring solutions assume that the PMU observations have statistical characteristics similar to those of real-valued signals. This assumption makes PMUs extremely defenseless against the proposed non-circular attacks. In this paper, we: (i) introduce the non-circular attack model; (ii) evaluate (both analytically and via experiments) the potential destructive nature of such attacks; (iii) propose a Bhattacharyya distance (BD) detector for monitoring the system against cyber attacks by transforming the detection problem into an equivalent problem of comparing innovation sequences in distribution via statistical distance measures; and (iv) propose a circularization approach, which enables conventional detection algorithms to identify non-circular attacks.
This letter proposes a novel ternary-event-based particle filtering (TEB-PF) framework by introducing a ternary-event-triggering mechanism coupled with a non-Gaussian fusion strategy that jointly incorporates point-valued, quantized, and set-valued measurements. In contrast to the existing binary-event-triggering solutions, the TEB-PF is a distributed state estimation architecture where the remote sensor communicates its measurements to the estimator, residing at the fusion centre, in a ternary-event-based fashion, i.e., it holds on to its observation during idle epochs, transfers quantized observations during transitional epochs, and only communicates raw observations during event epochs. Due to the joint utilization of quantized and set-valued measurements in addition to the point-valued ones, the proposed TEB-PF simultaneously reduces the communication overhead, in comparison to its binary triggering counterparts, while also improving the estimation accuracy, especially at low communication rates.
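To make the three communication modes concrete, here is a hedged sketch of a sensor-side ternary triggering rule; the two thresholds, the reference prediction, and the uniform quantizer are illustrative placeholders rather than the exact TEB-PF design.

```python
# Hedged sketch of a ternary trigger at the sensor: send nothing when the
# innovation is small, a quantized value when it is moderate, and the raw
# measurement when it is large. Thresholds and quantizer are illustrative.
import numpy as np

def ternary_trigger(y, y_pred, delta_low=0.5, delta_high=2.0, q_step=0.25):
    """Decide what the sensor communicates for measurement y.

    Returns one of:
      ('idle', None)       -- small innovation: nothing sent (set-valued info)
      ('quantized', q)     -- moderate innovation: quantized measurement sent
      ('event', y)         -- large innovation: raw measurement sent
    """
    innovation = abs(y - y_pred)
    if innovation <= delta_low:
        return ('idle', None)
    if innovation <= delta_high:
        return ('quantized', q_step * np.round(y / q_step))
    return ('event', y)

# The estimator interprets 'idle' as y lying in [y_pred - delta_low, y_pred + delta_low].
print(ternary_trigger(1.1, 1.0))   # ('idle', None)
print(ternary_trigger(2.3, 1.0))   # ('quantized', 2.25)
print(ternary_trigger(4.0, 1.0))   # ('event', 4.0)
```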
The paper proposes a distributed framework for Collaborative, Event-triggered, Average consensus, Sampled-data (CEASE) algorithms for multi-agent systems with two classes of performance guarantees. Referred to as the E-CEASE algorithm, the first approach ensures an exponential rate of convergence and derives the associated conditions and optimal design parameters using the Lyapunov-Krasovskii stability theorem. The second approach provides a structured trade-off between the number of transmissions and the rate of consensus convergence based on a guaranteed cost and is referred to as G-CEASE. The two implementations of CEASE are event-driven in the sense that agents transmit within their respective neighbourhoods only on the triggering of an event. To reduce communication and processing, the triggering condition in CEASE is monitored at discrete time steps. Both E-CEASE and G-CEASE support switching topologies in multi-agent systems. Monte-Carlo simulations on randomized networks quantify the effectiveness of the proposed approaches.
Motivated by the promising emergence of Brain Computer Interfaces (BCIs) within assistive/rehabilitative systems for therapeutic applications, the paper proposes a novel Bayesian framework that simultaneously optimizes a number of subject-specific filter banks and spatial filters. Referred to as the ECCSP framework, optimized double-band spectro-spatial filters are derived based on Common Spatial Patterns (CSP) coupled with error correcting output coding (ECOC) classifiers. The proposed ECCSP framework constructs optimized subject-specific spectral filters in an intuitive fashion, resulting in the creation of significantly discriminant features, which is a crucial requirement for any EEG-based BCI system. Through incorporation of the ECOC approach, the classification problem is then modeled as communication over a noisy channel, where the misclassification error is corrected by error correction techniques borrowed from information theory. The paper also proposes a modified version of the ECOC adapted to EEG classification problems by deploying ternary class codewords to account for ambiguous EEG epochs (which is a common phenomenon due to the cocktail-party nature of brain signals). The proposed ECCSP framework and its variants are evaluated over two different datasets from the BCI Competition (i.e., BCIC-IV2a and BCIC-IV2b). The results indicate that the proposed approach significantly outperforms its state-of-the-art counterparts and introduces a robust framework for motor imagery studies.
The papers in this special section address distributed signal processing applications that support security and privacy in networked cyber-physical systems. Networked cyber-physical systems (CPSs) are engineering systems with integrated computational and communication capabilities that interact with humans through cyber space. The CPSs have recently emerged in several practical applications of significant engineering importance including aerospace, industrial/manufacturing process control, multimedia networks, transportation systems, power grids, and medical systems. The CPSs typically consist of both wireless and wired sensor/agent networks with different capacity/reliability levels where the emphasis is on real-time operations, and performing distributed, secure, and optimal sensing/processing is the key concern. To satisfy these requirements of the CPSs, it is of paramount importance to design innovative “Signal Processing” tools to provide unprecedented performance and resource utilization efficiency.
This paper is motivated by recent advancements of cyber-physical systems and the significance of managing limited communication resources in their applications. We propose an open-loop estimation strategy with an information-based triggering mechanism coupled with an adaptive event-based fusion framework. In the open-loop topology considered in this paper, a sensor transfers its measurements to a remote estimator only upon the occurrence of specific events (asynchronously). Each event is identified using a local stochastic triggering mechanism without incorporation of feedback from the remote estimator and/or implementation of a local filter at the sensor level. We propose a particular stochastic triggering criterion based on the projection of the local observation into the state-space, which in turn is a measure of the achievable gain in the local information state vector. Then, we investigate an unsupervised fusion model at the estimation side where the estimator blindly listens to its communication channel without having a priori information about the triggering mechanism of the sensor. An update mechanism with a Bayesian collapsing strategy is proposed to adaptively form state estimates at the estimator side in an unsupervised fashion. The estimator is adaptive in the sense that it is able to distinguish between having received an actual measurement or noise. The simulation results show that the proposed information-based triggering mechanism significantly outperforms its counterparts, particularly at low communication rates, and confirm the effectiveness of the proposed unsupervised fusion methodology.
Cyber-physical systems have recently emerged in several practical engineering applications where security and privacy are of paramount importance. This has motivated the paper and a recent surge of interest in the development of innovative and novel anomaly and intrusion detection technologies. This paper proposes a novel distributed blind intrusion detection framework by modeling sensor measurements as the target graph-signal and utilizing the statistical properties of the graph-signal for intrusion detection. To fully take into account the underlying network structure, the graph similarity matrix is constructed using both the data measured by the sensors and the sensors' proximity, resulting in a data-adaptive and structure-aware monitoring solution. In the proposed detection framework, the magnitude of the captured data is modeled by a Gaussian Markov random field and the corresponding precision matrix is estimated by adaptively learning a graph Laplacian matrix from sensor measurements. The proposed intrusion detection methodology is designed based on a modified Bayesian likelihood ratio test and closed-form expressions are derived for the test statistic. Finally, temporal analysis of the network behavior is established by computing the Bhattacharyya distance between the measurement distributions at consecutive time instants. Experiments are conducted to evaluate the performance of the proposed method and to compare it with that of the state-of-the-art methods. The results show that the proposed intrusion detection framework provides a detection performance superior to those provided by the other existing schemes.
In this brief, motivated by the recent advances in graph signal processing, we address the problem of image abstraction and stylization. A novel unified graph-based multi-layer framework is proposed to perform iterative filtering without requiring any weight updates. The proposed graph-based filtering approach is shown to be superior to other existing methods due to iteratively using the filtered Laplacian in order to enhance the smoothened image signal at each layer. In order to render real images into painterly style ones and create a simple stylized format from color images, the low-contrast regions of an image are first smoothened using the proposed iterative graph filters in either vertex or spectral domains. The abstracted image is then quantized and sharpened using the proposed iterative highpass graph filter. The effectiveness of the graph-based image stylization method is verified through several experiments. It is shown that the proposed method can yield significantly improved visual quality for stylized images as compared to other existing methods.
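As a rough, simplified illustration of the smooth-then-quantize idea described above (not the brief's exact multi-layer filter, which reuses the filtered Laplacian at each layer), the following sketch low-pass filters an image signal on a 4-neighbour grid graph and then posterizes it; the step size, iteration count, and number of tone levels are illustrative.

```python
# Hedged sketch: Laplacian-based low-pass graph filtering of an image on
# a 4-neighbour grid graph, followed by quantization into a few tones.
import numpy as np

def grid_laplacian(h, w):
    """Combinatorial Laplacian L = D - A of an h-by-w 4-neighbour grid."""
    n = h * w
    A = np.zeros((n, n))
    for r in range(h):
        for c in range(w):
            i = r * w + c
            if c + 1 < w: A[i, i + 1] = A[i + 1, i] = 1.0
            if r + 1 < h: A[i, i + w] = A[i + w, i] = 1.0
    return np.diag(A.sum(1)) - A

def smooth_and_quantize(img, iters=20, alpha=0.2, levels=8):
    h, w = img.shape
    L = grid_laplacian(h, w)
    x = img.reshape(-1).astype(float)
    for _ in range(iters):
        x = x - alpha * (L @ x)                       # low-pass graph filtering step
    q = np.round(x * (levels - 1)) / (levels - 1)     # posterize into few tones
    return q.reshape(h, w)

img = np.random.rand(16, 16)
stylized = smooth_and_quantize(img)
```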
Motivated by rapid growth of cyberphysical systems (CPSs) and the necessity to provide secure state estimates against potential data injection attacks in their application domains, the paper proposes a secure and innovative attack detection and isolation fusion framework. The proposed multisensor fusion framework provides secure state estimates by using ideas from interactive multiple models (IMM) combined with a novel fuzzy-based attack detection/isolation mechanism. The IMM filter is used to adjust the system’s uncertainty adaptively via model probabilities by using a hybrid state model consisting of two behaviour modes, one corresponding to the ideal scenario and one associated with the attack behaviour mode. The state chi-square test is then incorporated through the proposed fuzzy-based fusion framework to detect and isolate potential data injection attacks. In other words, the validation probability of each sensor is calculated based on the value of the chi-square test. Finally, by incorporation of the validation probability of each sensor, the weights of its associated subsystem are computed. To be concrete, an integrated navigation system is simulated with three types of attacks ranging from a constant bias attack to a non-Gaussian stochastic attack to evaluate the proposed attack detection and isolation fusion framework.
Motivated by the application of complex-valued signal processing techniques in statistical pattern recognition, classification, and Gaussian mixture (GM) modeling, this paper derives analytical expressions for computing the Bhattacharyya coefficient/distance (BC/BD) between two improper complex-valued Gaussian distributions. The BC/BD is one of the most widely used statistical measures for evaluating class separability in classification problems, feature extraction in pattern recognition, and GM reduction (GMR) purposes. The BC provides an upper bound on the Bayes error, which is commonly known as the best criterion to evaluate feature sets. Although the computation of the BC/BD between real-valued signals is a well-known result, it has not yet been extended to the case of improper complex-valued Gaussian densities. This paper addresses this gap. We analyze the role of the pseudo-covariance matrix, which characterizes the noncircularity of the signal, and show that it carries critical second-order statistical information for computing the BC/BD. We derive upper and lower bounds on the BD in terms of the eigenvalues of the covariance and pseudo-covariance matrices of the underlying densities. The theoretical bounds are then used to introduce the concept of β-dominance in the context of statistical distance measures. The BC is a pseudometric, since it fails to satisfy the triangle inequality. Using the Matusita distance (a full-metric variant of the BC), we propose an intuitively pleasing indirect distance measure for comparing two general GMs. Finally, we investigate the application of the proposed BC/BD measures for GMR purposes and develop two BC-based GMR algorithms.
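For reference, the well-known real-valued result that the paper extends is the Bhattacharyya distance between two real Gaussian densities \(\mathcal{N}(\mu_1,\Sigma_1)\) and \(\mathcal{N}(\mu_2,\Sigma_2)\):

\[
D_B = \frac{1}{8}(\mu_1-\mu_2)^{\mathsf T}\bar{\Sigma}^{-1}(\mu_1-\mu_2)
+ \frac{1}{2}\ln\frac{\det\bar{\Sigma}}{\sqrt{\det\Sigma_1\,\det\Sigma_2}},
\qquad \bar{\Sigma}=\frac{\Sigma_1+\Sigma_2}{2},
\]

with the Bhattacharyya coefficient \(BC = e^{-D_B}\). The paper's contribution is the generalization of this expression to improper complex Gaussians, where the pseudo-covariance also enters the computation.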
Motivated by the key importance of multi-sensor information fusion algorithms in state-of-the-art integrated navigation systems due to recent advancements in sensor technologies, telecommunication, and navigation systems, the paper proposes an improved and innovative fault-tolerant fusion framework. An integrated navigation system is considered consisting of four sensory sub-systems, i.e., the Strap-down Inertial Navigation System (SINS), the Global Positioning System (GPS), the Bei-Dou2 (BD2), and the Celestial Navigation System (CNS) navigation sensors. In such multi-sensor applications, on the one hand, the design of an efficient fusion methodology is extremely constrained, especially when no information regarding the system's error characteristics is available. On the other hand, the development of an accurate fault detection and integrity monitoring solution is both challenging and critical. The paper addresses the sensitivity issues of conventional fault detection solutions and the unavailability of a precisely known system model by jointly designing the fault detection and information fusion algorithms. In particular, by using ideas from Interacting Multiple Model (IMM) filters, the uncertainty of the system is adjusted adaptively via model probabilities using the proposed fuzzy-based fusion framework. The paper also addresses the problem of using corrupted measurements for fault detection purposes by designing a two-state propagator chi-square test jointly with the fusion algorithm. Two IMM predictors, running in parallel, are used and alternatively reactivated based on the information received from the fusion filter to increase the reliability and accuracy of the proposed detection solution. With the combination of the IMM and the proposed fusion method, we increase the failure sensitivity of the detection system and, thereby, significantly increase the overall reliability and accuracy of the integrated navigation system. Simulation results indicate that the proposed fault-tolerant fusion framework provides superior performance over its traditional counterparts.
The paper considers the problem of estimating the state of a complex-valued stochastic hybrid system observed distributively using an agent/sensor network (AN/SN) with complex-valued (possibly noncircular) observations. In several distributed estimation problems, a suitable model to describe the underlying system is unknown a priori, i.e., distributed state estimation with structural uncertainty. Motivated by application of widely linear processing techniques in such problems, the paper proposes a class of distributed multiple-model adaptive estimation algorithms, referred to as the CD/MMAE. By incorporating the particular structure of the complex-valued observations on the second moment, first we develop two hierarchical CD/MMAE implementations and then use them as the building blocks and develop a diffusion-based hybrid estimator for decentralized estimation without incorporation of a fusion centre. The paper derives a new form of the adaptation law and a new form of information fusion, which takes advantage of the full second-order statistical properties of the underlying observations. Convergence properties of the proposed diffusion-based CD/MMAE are then investigated. We show that the adaptive weight of all local nodes converges to the true mode with probability one. Simulation results indicate that the proposed hybrid estimators provide improved performance and convergence properties over their traditional counterparts.
The letter considers a multi-sensor state estimation problem configured in a decentralized architecture where local complex statistics are communicated to the central processing unit for fusion instead of the raw observations. Naive adaptation of the augmented complex statistics to develop a decentralized state estimation algorithm results in increased local computations and introduces extensive communication overhead, making it practically unattractive. The letter proposes a structure-induced complex Kalman filter framework with reduced communication overhead. In order to further reduce the local computations, the letter proposes a non-circularity criterion which allows each node to examine the non-circularity of its local observations. A local sensor node disregards its extra second-order statistical information when the non-circularity coefficient is small. In cases where the local observations are highly non-circular, an intuitively pleasing circularization approach is proposed to avoid computation and communication of the pseudo-covariance matrices. Simulation results indicate that the proposed structure-induced complex Kalman filter (SCKF) provides significant performance improvements over its traditional counterparts.
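One standard scalar notion that such a non-circularity criterion can be related to (the letter's exact matrix-valued criterion may differ) is the circularity quotient of a zero-mean complex observation \(z\):

\[
\rho \;=\; \frac{\mathrm{E}[z^2]}{\mathrm{E}[|z|^2]}, \qquad 0 \le |\rho| \le 1,
\]

where \(\mathrm{E}[z^2]\) is the pseudo-variance. \(|\rho|=0\) corresponds to a proper (second-order circular) observation whose pseudo-covariance can safely be discarded, while \(|\rho|\) close to 1 indicates highly non-circular data for which the circularization approach becomes relevant.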
Motivated by the problem of estimating the discrete and continuous states of an improper complex-valued stochastic hybrid system, the paper proposes a class of widely linear (augmented) multiple model adaptive estimation algorithms, referred to as the C/MMAE. We show that, for an improper complex-valued signal, the pseudo-covariance of the innovation sequence is not zero and, therefore, carries useful statistical information regarding the unknown behaviour mode of the hybrid system. A new Bayesian law is, therefore, derived as a function of the pseudo-covariance of the innovation sequence and used to compute the probability that a hypothesized model is in effect at a certain time. We show that the C/MMAE, which uses the new Bayesian law and utilizes the complete second-order statistical characterization of the complex-valued innovation sequence, converges faster than its counterpart, which only uses the conventional covariance of the innovation sequence. In order to reduce the computational complexity, we develop two circularized versions of the C/MMAE using a preprocessing step, referred to as the circularizing filter (CF). The CF is incorporated to convert the improper observations/innovations into proper ones in order to reduce the computational complexity of the hypothesis testing step. Finally, an interacting version of the C/MMAE, referred to as the C/IMM, is developed for improper complex-valued systems with Markovian switching coefficients. Simulation results indicate that the proposed hybrid estimators provide improved performance and convergence properties over their traditional counterparts.
Motivated by recent evolution of cutting-edge sensor technologies with complex-valued measurements, this chapter analyzes attack models and diagnostic solutions for monitoring industrial control systems against complex-valued cyber attacks. By capitalizing on the knowledge that the existing detection and closed loop estimation algorithms ignore the full second-order statistical properties of the received measurements, we show that an adversary can attack the system by maximizing the correlations between the real and imaginary parts of the reported measurements. Consequently, the adversary can pass the conventional attack detection methodologies and change the underlying system beyond repair.
In this chapter, the first section surveys recent developments in secure closed-loop state estimation methodologies and then reviews the fundamentals of complex-valued signals and their applications. The second section highlights the drawbacks of the state-of-the-art estimation methodologies and illustrates their vulnerability to cyber attacks. In the third section, we first review the existing attack models and then introduce the non-circular attack model. The fourth section first surveys the state-of-the-art attack detection diagnostics and then shows how to transform the cyber-attack detection problem into the problem of comparing statistical distance measures between probability distributions. The fifth section provides illustrative examples followed by future research directions and conclusions.
The chapter proposes three consensus-based, distributed implementations of the particle filter for non-linear state estimation problems with non-Gaussian excitation. Our approaches range from a simple but intuitive approach, referred to as the global likelihood constrained implementation of the particle filter (GLC/DPF), included to illustrate the underlying concepts, to near-optimal distributed approaches. The unscented, consensus-based, distributed implementation of the particle filter (UCD/DPF) is the second approach, which couples the unscented Kalman filter (UKF) with the particle filter such that the UKF estimates the Gaussian approximation of the proposal distribution, which is then used as the proposal distribution in the particle filter. The UCD/DPF requires each node to wait until consensus is reached before running the next iteration of the particle filter and is suitable for networks where communication is relatively inexpensive compared to sensing. The third approach is the global channel filter based distributed particle filter (GCF/DPF), which does not require the consensus algorithms to converge between two consecutive observations. These approaches are successfully tested by running simulations of bearings-only tracking (BOT) applications for moving targets arising in radar surveillance, underwater submarine tracking, and robotics.
I would be happy to talk to you if you need my assistance in your research or would like to collaborate on potential research projects. Please feel free to contact me using the contact information on the right. I would also be happy to meet you in person at my office; please drop me an e-mail to arrange a meeting time. My office information is as follows:
1515 Rue Ste-Catherine, Montreal, QC, Canada, H3G-2W1.
My office is located in EV009.187