Institute of Information Science and Technologies
Institute
The Institute of Information Science and Technologies (ISTI) is active in privacy and security research. Currently, 6 of its 13 thematic laboratories contribute to these themes with high-profile scientific activities and play an active role in the development of cybersecurity solutions.
In the cybersecurity context, the activity of the Institute focuses on security and privacy aspects and aims at increasing knowledge, developing and testing new ideas, and widening the application areas of research.
Additionally, ISTI is actively involved in the technology transfer of the achieved results to different application domains, including but not limited to: mobile apps, smart cities, CPSoS, the cloud/edge continuum, indoor localization systems, social distancing, vehicular networks, automotive, railways, health care, robotics, the GDPR, mailing systems and smart grids.
The units involved are the following:
- Wireless Networks (WN)
https://www.isti.cnr.it/research/laboratories/27/Wireless_Networks_WN
- Formal Methods and Tools (FMT)
https://www.isti.cnr.it/research/laboratories/4/Formal_Methods_and_Tools_FMT
- Software Engineering & Dependable Computing (SEDC)
https://www.isti.cnr.it/research/laboratories/20/Software_Engineering_&_Dependable_Computing_SEDC
- Artificial Intelligence for Media and Humanities (AIMH)
- Knowledge Discovery and Data Mining (KDD)
https://www.isti.cnr.it/research/laboratories/9/Knowledge_Discovery_and_Data_Mining_KDD
- High Performance Computing (HPC)
https://www.isti.cnr.it/research/laboratories/8/High_Performance_Computing_HPC
Personnel
The main researchers involved are the following:
Paolo Barsocchi
Antonello Calabrò
Silvano Chiaradonna
Antonino Crivello
Said Daoudagh
Felicita Di Giandomenico
Fabrizio Falchi
Francesco Furfari
Claudio Gennaro
Michele Girolami
Francesca Lonetti
Eda Marchetti
Giulio Masetti
Maurice H. ter Beek
Claudio Vairo
Expertise
- Data protection and compliance with legal frameworks
- Access and usage control specification and assessment
- Monitoring security and privacy in smart environments and dynamic systems
- Trust and security assurance
- Critical infrastructure resilience
- Analysis of interdependencies and energy consumption
- Security and privacy analysis and testing
- Quantitative security modeling and analysis of attack scenarios
- Formal modelling and verification of risk quantification
- Face recognition
- Adversarial examples detection
- Smart cameras
- Deep Fake Video Detection
- Visual Anomalies Detection
- Edge Computing for Distributed Privacy Aware Learning
- Physical Assets and Resource Monitoring in CPS and CPSoS
- Anomaly detection
- Education and training
- Intrusion tolerance
- Trustworthy and dependable AI
- Intelligent IoT and Cyberphysical systems
Projects
Cyber Security Network of Competence Centres for Europe
From February 2019 to 31 July 2022
CyberSec4Europe is a research-based consortium with 44 participants covering 21 EU Member States and Associated Countries. It has received more than 40 support letters and promises of cooperation from public administrations, international organisations, and key associations worldwide including Europe (such as ECSO), Asia, and North America. As a pilot for a Cybersecurity Competence Network, it will test and demonstrate potential governance structures for the network of competence centres using best-practice examples from the expertise and experience of the participants, including concepts like CERN. CyberSec4Europe will support addressing key EU Directives and Regulations, such as the GDPR, PSD2, eIDAS, and ePrivacy, and help to implement the EU Cybersecurity Act including, but not limited to, supporting the development of the European skills base, the certification framework and ENISA’s role.
Building Trust in Ecosystems and Ecosystem Components
From September 2020 to 31 August 2023
Nowadays, most of the ICT solutions developed by companies require integration or collaboration with other ICT components, typically developed by third parties. Even though this kind of procedure is key to maintaining productivity and competitiveness, the fragmentation of the supply chain can pose a high security risk, as in most cases there is no way to verify whether these other solutions have vulnerabilities or whether they have been built following security best practices.
In order to deal with these issues, it is important that companies change their mindset, assuming an “untrusted by default” position. According to a recent study, only 29% of IT businesses know that their ecosystem partners are compliant and resilient with regard to security. However, cybersecurity attacks have a high economic impact and it is not enough to rely on trust alone. ICT components need to be able to provide verifiable guarantees regarding their security and privacy properties. It is also imperative to detect vulnerabilities in ICT components more accurately and understand how they can propagate over the supply chain and impact ICT ecosystems. However, it is well known that most vulnerabilities can remain undetected for years, so it is necessary to provide advanced tools for guaranteeing resilience and also better mitigation strategies, as cybersecurity incidents will happen. Finally, it is necessary to expand the horizons of current risk assessment and auditing processes, taking into account a much wider threat landscape. BIECO is a holistic framework that will provide these mechanisms in order to help companies understand and manage the cybersecurity risks and threats they are subject to when they become part of the ICT supply chain. The framework, composed of a set of tools and methodologies, will address the challenges related to vulnerability management, resilience, and auditing of complex systems.
Being safe around collaborative and versatile robots in shared spaces
From January 2018 to 31 December 2021
Increasing demand from the growing and aging population can be assuaged by ever closer safe human-robot interaction (HRI): to improve productivity, reduce health limitations and provide services. HRI and safety are both major topics in the Work Programme. Safety regulations will be a barrier to cobot deployment unless they are easy to access, understand and apply. COVR collates existing safety regulations relating to cobots in e.g. manufacturing and fills in regulatory gaps for newer cobot fields, e.g., rehabilitation, to present detailed safety assessment instructions to coboteers. Making the safety assessment process clearer allows cobots to be used with more confidence in more situations, increasing the variety of cobots on the market and the variety of services cobots can offer to the general population. TRYG provides a one-stop shop which uses a common approach to safety assessment and is valid across all fields and applications. TRYG will provide clear and simple online access to best-practice safety testing protocols via a user-friendly decision tree, guided by questions about the cobot and its intended behaviours. The resulting application-specific testing protocols specify how to assess safety and document compliance with regulations. We support coboteers by providing safety-relevant services based at well-equipped facilities at each partner site. TRYG services cover all stages of cobot development from design through final system sign-off to safety in use and maintenance, provided through consultancy, risk analysis, actual testing, workshops, courses, demonstrations, etc. – all designed to inspire people to increase cobot safety. All TRYG elements will be beta-tested by external cobot developers etc. financed by FSTP. By using project elements “live”, these FSTP beneficiaries not only develop their cobots further towards the market, but also contribute their knowledge to the TRYG system and provide valuable feedback to both partners and standards developers.
Adaptive edge/cloud compute and network to support nextgen applications
From January 2020 to December 2022
Edge computing is going to play a dominant role in the forthcoming technology developments, disrupting economies at a large scale. ACCORDION aims at supporting the distributed and localized nature of edge computing as a counterbalance to big IT trusts. Exploiting the synergy with upcoming technologies such as 5G can provide an opportunity for the EU to capitalize on its local resources and infrastructure. To this end, ACCORDION will pursue an opportunistic approach in bringing together edge resources/infrastructures (public clouds, on-premise infrastructures, telco resources, even end-devices) in pools defined in terms of latency, that can support NextGen application requirements. ACCORDION will intelligently orchestrate the compute & network continuum formed between edge and public clouds, using the latter as a capacitor. Deployment decisions will also be taken based on privacy, security, cost, time and resource-type criteria. The adoption rate of novel technological concepts by SMEs will be tackled through an application framework, leveraging DevOps and SecOps in order to facilitate the transition of existing applications to the ACCORDION platform. With a strong emphasis on European edge computing efforts (MEC, OSM) and 3 highly anticipated NextGen applications on collaborative VR, multiplayer mobile- and cloud-gaming, brought by the involved end users, ACCORDION is expected to radically impact the application development and deployment landscape, also directing part of the related revenue from non-EU vendors to EU-local infrastructure and application providers.
A computing toolkit for building efficient autonomous applications leveraging Humanistic Intelligence
From January 2020 to December 2022
The TEACHING project designs a computing platform and the associated software toolkit supporting the development and deployment of autonomous, adaptive and dependable Cyber-Physical Systems-of-Systems (CPSoS) applications, allowing them to exploit sustainable human feedback to drive, optimize and personalize the provisioning of their services. The project revolves around four pillars.
(1) The creation of a distributed edge-oriented and federated computational environment that seamlessly integrates heterogeneous resources comprising specialized edge devices, general purpose nodes and cloud resources. A core concept is the exploitation of edge devices with specialized hardware support to run AI, cybersecurity and dependability components of the autonomous application.
(2) The development of methods and tools that support runtime dependability assurance of CPSoS. To that aim, systematic engineering processes will also be developed and used for the design of conventional and AI-based runtime adaptive systems, to be applied both in the cloud and at the edge in order to ensure continuous CPSoS assurance at runtime and throughout the software life cycle (including AI approaches tailored towards a cognitive security framework).
(3) The realization of software-level abstraction of the computing system to allow an easy and coordinated deployment of the different application components on the most adequate CPSoS resources. This concept also involves the orchestration of application components to optimize resource efficiency and energy consumption and meet the dependability requirements of the application.
(4) The leveraging of a synergistic human-CPSoS cooperation in the spirit of Humanistic Intelligence, exploiting AI methodologies and continuous monitoring of the human physiological, emotional and cognitive (PEC) state to enable applications with unprecedented levels of autonomy and flexibility, while retaining the dependability requested by any safety-critical system operating with a human in the loop.
Initiatives/assets
- Ongoing collaboration between Maurice ter Beek (ISTI-CNR, Pisa, Italy), Axel Legay (UCLouvain, Belgium), Alberto Lluch Lafuente (Technical University of Denmark), and Andrea Vandin (main developer, Sant’Anna, Pisa, Italy).
- C3T – Centro di Competenza in Cybersecurity Toscano
The Tuscan Cybersecurity Competence Center (C3T) carries out research and technology transfer activities in the field of information security with the aim of informing, raising awareness and responding to the needs of small and medium-sized enterprises, public bodies and professionals on how to know, understand and react to cyber security threats.
- Pervasive AI Lab ( http://pai.di.unipi.it/ )
PAILab is a joint initiative by University of Pisa and Consiglio Nazionale delle Ricerche (CNR) aimed at pursuing research at the crossroads of Artificial Intelligence, Cloud/Edge/IoT, HPC, Machine Learning and Pervasive computing. PAILab is active in the design, development and coordination of European and national projects (including ACCORDION and TEACHING) as well as industry-funded grants. Key lab research themes are: AI-on-Cloud, AI-for-the-Cloud, AIaaS; Cybersecurity in ML; Intelligent IoT and cyberphysical systems; Methods, algorithms and systems for human-aware, secure and safe AI in pervasive computing scenarios; Distributed Learning, Federated Learning, Learning at-the-edge, multi-agent systems and pervasive computing; Trustworthy AI.
Assets
- RisQFLan
RisQFLan is a software tool for the modeling and analysis of threat and risk scenarios. The tool supports a generalization of Attack-Defense Trees enriched with attacker behavior and quantitative constraints.
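To illustrate the kind of quantitative analysis attack trees enable, the following is a minimal sketch, not RisQFLan's actual API or modeling language: it computes the success probability of an attack tree with AND/OR gates over leaf probabilities, assuming independent basic attack steps (the tree and its numbers are invented for the example).

```python
def attack_probability(node):
    """Recursively compute the success probability of an attack-tree node."""
    kind = node["type"]
    if kind == "leaf":
        return node["prob"]
    child_probs = [attack_probability(c) for c in node["children"]]
    if kind == "AND":          # all sub-attacks must succeed
        p = 1.0
        for cp in child_probs:
            p *= cp
        return p
    if kind == "OR":           # at least one sub-attack succeeds
        p_fail = 1.0
        for cp in child_probs:
            p_fail *= (1.0 - cp)
        return 1.0 - p_fail
    raise ValueError(f"unknown node type: {kind}")

# Hypothetical scenario: steal credentials (by phishing OR brute force)
# AND bypass the firewall.
tree = {
    "type": "AND",
    "children": [
        {"type": "OR", "children": [
            {"type": "leaf", "prob": 0.3},   # phishing
            {"type": "leaf", "prob": 0.1},   # brute force
        ]},
        {"type": "leaf", "prob": 0.5},       # bypass firewall
    ],
}
print(round(attack_probability(tree), 3))    # 0.185
```

RisQFLan's generalization additionally models attacker behavior and defenses; this sketch covers only the classic probabilistic tree evaluation.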
- XACMET: XACML Modeling & Testing
XACMET is a generator of XACML requests as well as an automated model-based oracle.
The main features of XACMET are:
- definition of a typed graph, called the XAC-Graph, that models the XACML policy evaluation;
- derivation of a set of test requests via full-path coverage of this graph;
- automatic derivation of the expected verdict of a specific request execution by executing the corresponding path in the graph;
- coverage assessment of a given test suite.
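The full-path coverage idea above can be sketched as follows. This is a hypothetical toy, not XACMET's real XAC-Graph data model: each root-to-terminal path through a small policy-evaluation graph corresponds to one test request, and the terminal node reached gives the expected verdict.

```python
def all_paths(graph, node, path=None):
    """Enumerate every path from `node` to a terminal node of a DAG."""
    path = (path or []) + [node]
    successors = graph.get(node, [])
    if not successors:                 # terminal node: verdict reached
        return [path]
    paths = []
    for nxt in successors:
        paths.extend(all_paths(graph, nxt, path))
    return paths

# Invented toy graph: a policy target check branching into a rule
# evaluation (Permit/Deny) or NotApplicable.
xac_graph = {
    "policy-target": ["rule-1", "not-applicable"],
    "rule-1": ["permit", "deny"],
}
for p in all_paths(xac_graph, "policy-target"):
    print(" -> ".join(p))
```

Full-path coverage then means deriving one XACML request per enumerated path, so every evaluation outcome of the policy is exercised.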
- X-CREATE: XaCml REquests derivAtion for TEsting
X-CREATE (XaCml REquests derivAtion for TEsting) is a tool for the automated derivation of a test suite starting from an XACML policy. X-CREATE implements different strategies for deriving XACML requests. The aim of the derived XACML requests is twofold: testing of policy evaluation engines and testing of access control policies.
- XACMUT: XACML 2.0 Mutants Generator
XACMUT (XACml MUTation) is a tool for the generation of XACML 2.0 mutants. It generates the set of mutants, provides facilities to run a given test suite on the mutant set and computes the test suite effectiveness in terms of mutation score. The tool includes and enhances the mutation operators of existing security policy mutation approaches. The framework also provides support to locate the elements involved in the policy under test that are the causes of detected inconsistencies.
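The mutation score reported by such tools is the fraction of mutants "killed" by the test suite. The following is an illustrative sketch with an invented mutant/test representation, not XACMUT's actual interface:

```python
def mutation_score(mutants, test_suite):
    """Fraction of mutants killed by at least one test in the suite."""
    killed = 0
    for mutant in mutants:
        if any(test(mutant) for test in test_suite):
            killed += 1
    return killed / len(mutants)

# Toy model: each mutant is represented by the verdict it returns for a
# fixed request; a test kills a mutant when the verdict differs from the
# verdict expected of the original policy.
expected = "Permit"
mutants = ["Deny", "Permit", "NotApplicable", "Deny"]
tests = [lambda verdict: verdict != expected]
print(mutation_score(mutants, tests))   # 3 of 4 mutants killed -> 0.75
```

A mutant returning the same verdict as the original policy (the second one above) survives, lowering the score and flagging a gap in the test suite.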
- SIMTAC: SIMilarity Testing for Access-Control
SIMTAC adapts similarity-based prioritization to order XACML test cases. To do this, we need to capture and specify a suitable notion of distance between XACML requests. To the best of our knowledge, the approach implemented in SIMTAC is the first attempt to introduce a prioritization strategy in XACML access control systems.
- TXPAINT: Testing XACML Policy Against INTentions
TXPAINT is a generic framework for testing the compliance of an XACML policy with intended access rights or discovering possible inconsistencies. It adopts two well-known testing techniques, i.e., combinatorial and mutation testing, and provides support for generating appropriate test inputs (i.e., access requests) able to test the constraints, permissions and prohibitions defined in the policy.
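The similarity-based prioritization behind SIMTAC can be sketched as follows. The Jaccard distance over attribute sets is an assumed stand-in for the tool's actual notion of distance between XACML requests, and the greedy farthest-first ordering is one common way to run dissimilar tests early:

```python
def jaccard_distance(req_a, req_b):
    """Distance between two requests seen as sets of attribute=value pairs."""
    union = req_a | req_b
    if not union:
        return 0.0
    return 1.0 - len(req_a & req_b) / len(union)

def prioritize(requests):
    """Greedy farthest-first ordering: repeatedly pick the request most
    distant from those already selected, so diverse requests run first."""
    remaining = list(requests)
    ordered = [remaining.pop(0)]            # seed with the first request
    while remaining:
        best = max(
            remaining,
            key=lambda r: min(jaccard_distance(r, s) for s in ordered),
        )
        remaining.remove(best)
        ordered.append(best)
    return ordered

# Invented example requests, flattened to attribute=value sets.
reqs = [
    frozenset({"subject=alice", "action=read"}),
    frozenset({"subject=alice", "action=write"}),
    frozenset({"subject=bob", "action=delete"}),
]
for r in prioritize(reqs):
    print(sorted(r))
```

Here the second request selected is bob's, since it shares no attributes with alice's read request; alice's write request, being similar to one already chosen, runs last.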
- GENERAL_D: Gdpr-based ENforcEment of peRsonAL Data
GENERAL_D provides a systematic process for automatically deriving, testing and enforcing Access Control Policies and Access Control (AC) systems in line with the GDPR. Its data protection by-design solution promotes the adoption of AC systems ruled by policies systematically designed for expressing GDPR’s provisions. Specifically, the main contributions of GENERAL_D are:
- The definition of an Access Control Development Life Cycle for analysing, designing, testing and implementing AC mechanisms (systems and policies) able to guarantee compliance with the GDPR.
- The realization of a reference architecture allowing the automatic application of the proposed Life Cycle.
- GROOT is a general combinatorial strategy for testing systems managing GDPR concepts (e.g., Data Subject, Personal Data or Controller).
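A minimal sketch of a combinatorial test-generation step in this spirit follows. The GDPR concepts and their values below are invented examples, and exhaustive combination stands in for whatever covering strategy the tool actually applies:

```python
import itertools

# Hypothetical domains for three GDPR concepts under test.
gdpr_concepts = {
    "data_subject":  ["employee", "customer"],
    "personal_data": ["email", "health_record"],
    "purpose":       ["marketing", "medical_care"],
}

# Each combination of concept values becomes one abstract test case for
# the GDPR-aware access control system under test.
names = list(gdpr_concepts)
test_cases = [
    dict(zip(names, values))
    for values in itertools.product(*gdpr_concepts.values())
]
print(len(test_cases))   # 2 * 2 * 2 = 8 abstract test cases
```

In practice, combinatorial strategies usually restrict this full product (e.g. to pairwise coverage) to keep the suite small while still exercising all value interactions of interest.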
- GRADUATION is a tool based on a generic methodology for assessing the fault detection effectiveness of GDPR-based testing strategies by means of mutation testing. In particular, GRADUATION provides a set of mutation operators specifically based on a GDPR-based fault model.
- XMF is a comprehensive framework allowing test case generation, execution and assessment, and mutant generation in the context of XACML-based access control systems.
- DOXAT is a framework for continuous planning and testing of the Policy Decision Point (PDP) within the DevOps process.