MARVEL is a comprehensive framework of 35 innovative technologies designed to help cities become smarter, more sustainable, and more efficient. Its assets comprise software/hardware developments (tangible outcomes such as sensors and software solutions), methodologies (which formalise user requirements into technological requirements), and services (which improve the effective use of MARVEL solutions).
Our technologies are designed to be validated in industrially relevant smart city environments, such as Malta, Trento, and Novi Sad. With an average Technology Readiness Level (TRL) of 5-6, you can be sure that MARVEL is the perfect solution to meet your smart city needs.
Contact us today to learn more about our innovative solutions and start your smart city journey with MARVEL.
MARVEL Assets: Advancing Smart Cities with extreme-Scale Analytics
Welcome to MARVEL, a comprehensive framework for extreme-scale multi-modal AI-based analytics in smart cities environments. Our innovative solutions provide real-time decision-making and improve overall efficiency for cities and communities. Our solutions go beyond traditional Big Data, cloud-only or edge-only architectures, adopting the edge-fog-cloud Computing Continuum paradigm.
Beyond Big Data: Multimodal Intelligence for Real-Time Decision-Making
Our vision is to create smarter, more sustainable and livable cities by utilizing the power of big data analytics. We provide privacy-aware multimodal AI tools and methods, multimodal audio-visual data capture and processing, and advanced visualization techniques. We utilize federated learning, edge and fog ML/DL models, and extreme-scale multimodal analytics to provide comprehensive data-driven application workflows.
Empowering Citizens for a Smarter Future
Our goal is to provide (almost) real-time response to incidents happening at different locations within the city, improving public safety, well-being, care, and transportation. Our solutions have been developed within the scope of MARVEL, a research and innovation project. We prioritize the effective use of our solutions to provide in-depth customized assistance. We look forward to helping your city achieve its smart city goals. If you find one or more of our MARVEL Assets interesting, please contact us.
1. Software/Hardware Developments
This category refers to tangible outcomes, namely sensors and software solutions.
2. Methodologies
They help to formalise user requirements into technological requirements.
3. Services
The word “Service” refers to the traditional meaning of IT services. These are future services offered around MARVEL, which aim at improving the effective use of MARVEL solutions and at providing in-depth customised assistance.
Advanced MEMS microphones
Collect high-quality acoustic data for speech recognition, audio recording, and noise cancellation.
Advanced MEMS microphones
The Advanced MEMS microphones cater to customer needs by gathering high-quality, low-noise acoustic data, which can be used for speech recognition, audio recording, and acquisition of surrounding noise for applications like noise cancellation.
SubSystem : Sensing and perception
Type of Component : Hardware / Software
Owner : IFAG
Expected TRL : 8
Licensing : Open source License TBD
Video
Audio Tagging
AT component in MARVEL implements a state-of-the-art method for audio tagging, which is used to analyze continuous audio streams and recognize active sound classes in the stream.
Audio Tagging
The AT component implements a state-of-the-art method for audio tagging. Audio tagging is used in MARVEL to analyze continuous audio streams and to recognize active sound classes in the stream. Recognized active sound classes give a rough view of the scene.
SubSystem : Audio, visual and multimodal AI
Type of Component : Service
Owner : TAU
Expected TRL : 5
Licensing : Open source License TBD
Video
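As an illustrative sketch (not the actual TAU implementation), the audio-tagging post-processing step can be reduced to thresholding per-class scores produced by a model; the class names and threshold below are assumptions for illustration only.

```python
# Illustrative sketch: a model yields one score per sound class for an
# audio segment, and classes whose score exceeds a threshold are
# reported as active in the stream.

def active_classes(scores, threshold=0.5):
    """Return sound classes whose score exceeds the threshold,
    sorted by descending score."""
    active = [(c, s) for c, s in scores.items() if s > threshold]
    return [c for c, _ in sorted(active, key=lambda x: -x[1])]

# Example scores for one audio segment (hypothetical classes)
scores = {"traffic": 0.91, "siren": 0.12, "speech": 0.67, "birds": 0.08}
print(active_classes(scores))  # ['traffic', 'speech']
```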
AVDrone
System that classifies crowd behavior while ensuring user privacy.
AVDrone
The proposed system will enable target users to classify crowd behaviour without violating user privacy.
SubSystem : Sensing and perception
Type of Component : Hardware / Software
Owner : UNS
Expected TRL : 5
Licensing : TBD
Video
AVRegistry
Store and access metadata of MARVEL Audio-Visual sources using a RESTful API compliant with Smart Data Models standard.
AVRegistry
The AV Registry stores metadata of MARVEL Audio-Visual sources in compliance with the Smart Data Models standard and exposes this information through a RESTful API to all MARVEL components that need to consume AV data.
SubSystem : Sensing and perception
Type of Component : Hardware / Software
Owner : ITML
Expected TRL : 6
Licensing : TBD
Video
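To illustrate how components might consume AVRegistry metadata, the sketch below filters Smart Data Models-style entries by source type. The entry structure, field names, and identifiers are hypothetical assumptions, not the actual AVRegistry API.

```python
# Hypothetical sketch of consuming AV-source metadata in a
# Smart Data Models-style format; in MARVEL this information would be
# fetched over the AVRegistry's RESTful API rather than hard-coded.

def filter_av_sources(entries, source_type):
    """Return registry entries whose 'type' matches the requested
    AV source type (e.g. 'Camera' or 'Microphone')."""
    return [e for e in entries if e.get("type") == source_type]

# Example registry payload (illustrative structure and values)
entries = [
    {"id": "urn:ngsi-ld:Camera:cam-01", "type": "Camera",
     "location": {"coordinates": [14.51, 35.90]}},
    {"id": "urn:ngsi-ld:Microphone:mic-07", "type": "Microphone",
     "location": {"coordinates": [11.12, 46.07]}},
]

cameras = filter_av_sources(entries, "Camera")
print(cameras[0]["id"])  # urn:ngsi-ld:Camera:cam-01
```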
sensMiner
SensMiner Android app records and annotates environmental acoustics.
sensMiner
SensMiner is an Android app developed to record environmental acoustics as well as user annotations. While the audio is being recorded, the user can in parallel annotate it and store the corresponding segment in the phone memory.
SubSystem : Sensing and perception
Type of Component : Software
Owner : AUD
Expected TRL : 8
Licensing : Proprietary
EdgeSec-VPN
The EdgeSec VPN is a security and privacy solution that is part of the E2F2C framework developed in the MARVEL project. It uses peer-to-peer VPN technology to encrypt data transferred between MARVEL components, ensuring strict communication security.
EdgeSec-VPN
EdgeSec VPN brings security and privacy features to the complete E2F2C framework, developed within the MARVEL project. Based on the technology of peer-to-peer VPNs, it is used to encrypt any data that is transferred between the MARVEL components to meet the requirements of a strict communication security. All participating computing devices form a full mesh network where every device has a direct secure connection with every other device.
SubSystem : Sensing and perception
Type of Component : Software
Owner : FORTH
Expected TRL : 5
Licensing : Open source License TBD
Video
EdgeSec-TEE
EdgeSec TEE ensures confidential computing for Python apps that process sensitive user data using Trusted Execution Environments.
EdgeSec-TEE
EdgeSec TEE offers confidential computing for Python applications that process sensitive user data and is based on the technology of Trusted Execution Environments. It guarantees that the code itself and the data that need to be processed are located inside protected and isolated execution environments that ensure confidentiality and integrity, even if processed in untrusted environments.
SubSystem : Security, privacy and data protection
Type of Component : Software
Owner : FORTH
Expected TRL : 5
Licensing : Proprietary
Video
VideoAnony
VideoAnony component detects faces and number plates, and performs anonymization to simplify surveillance camera data analysis in public spaces.
VideoAnony
The VideoAnony component detects people’s faces and car number plates and performs anonymisation: by blurring in the initial phase and potentially by swapping faces at a later stage. The component should simplify the use of data recorded by surveillance cameras for monitoring and analysing public spaces.
SubSystem : Security, privacy and data protection
Type of Component : Software
Owner : FBK
Expected TRL : 6
Licensing : Open source License TBD
Video
AudioAnony
AudioAnony component anonymizes audio streams by replacing speaker voices with preserved speech features and environmental background.
AudioAnony
The goal of this component is to anonymize audio streams by replacing the speaker's voice with another one while preserving the other speech features and the environmental background. Operating close to the microphones on an edge device, or at the beginning of the processing pipeline, AudioAnony limits possible privacy issues in the collection and processing of audio data. The component can be deployed on a low-end edge device connected to the microphones to provide an anonymized stream in a transparent way.
SubSystem : Security, privacy and data protection
Type of Component : Software
Owner : FBK
Expected TRL : 6
Licensing : Open source License TBD
Video
Data Fusion Bus
The Data Fusion Bus (DFB) allows efficient and trustworthy transfer of heterogeneous data between connected components and permanent storage.
Data Fusion Bus
The Data Fusion Bus (DFB) is a fully customisable system that supports a trustworthy and efficient transfer of streamed heterogeneous data between multiple connected components and a permanent storage. Within MARVEL, the DFB aggregates AI inference results from all E2F2C layers to post-process, store and re-distribute them to SmartViz and Data Corpus both in real time and as archived data.
SubSystem : Security, privacy and data protection
Type of Component : Software / Service
Owner : ITML
Expected TRL : 6
Licensing : TBD
Video
StreamHandler
StreamHandler is a powerful distributed streaming platform based on Apache Kafka, designed for big data applications. It provides high performance, interoperability, resilience, scalability, and security.
StreamHandler
StreamHandler is a high-performance distributed streaming platform for handling real-time data, based on Apache Kafka. The solution is geared toward big data applications and offers interoperability, resilience, scalability and security. In MARVEL, StreamHandler offers its audiovisual handling capabilities, providing persistence for the data streamed by the project's microphones and cameras. This way, the user can consult aural and visual evidence for events and anomalies detected by MARVEL’s AI components, and act accordingly.
SubSystem : Data management and distribution
Type of Component : Software / Service
Owner : INTRA
Expected TRL : 8
Licensing : Proprietary
Video
DatAna
DatAna is a versatile platform that allows for scalable acquisition, transformation, and communication of streaming data across different layers of computing.
DatAna
DatAna is a scalable data acquisition, transformation and communication platform based on the Apache NiFi ecosystem. DatAna allows processing of streaming data at different layers (edge, fog and cloud) and their routing towards other layers of the computing continuum. In MARVEL, data from the AI inference models is validated, transformed to be compatible with the Smart Data Models initiative and subsequently transmitted to the upper layers for further processing and storage.
SubSystem : Data management and distribution
Type of Component : Software
Owner : ATOS
Expected TRL : 6
Licensing : Open source License TBD
Video
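The kind of validate-and-transform step DatAna performs can be sketched as mapping a raw inference message into a Smart Data Models-style entity before routing it onward. The field names, entity type, and URN scheme below are illustrative assumptions, not the project's actual schema.

```python
# Hypothetical sketch of validating a raw inference message and
# mapping it to an NGSI-LD-like entity dict before transmission to
# upper layers of the computing continuum.
from datetime import datetime, timezone

def to_smart_data_model(raw):
    """Validate required fields and build a Smart Data Models-style
    entity from a raw AI inference message."""
    required = ("detector", "label", "confidence")
    missing = [k for k in required if k not in raw]
    if missing:
        raise ValueError(f"missing fields: {missing}")
    return {
        "id": f"urn:ngsi-ld:Anomaly:{raw['detector']}-{raw.get('seq', 0)}",
        "type": "Anomaly",
        "dateObserved": {"type": "Property",
                         "value": datetime.now(timezone.utc).isoformat()},
        "anomalousEvent": {"type": "Property", "value": raw["label"]},
        "confidence": {"type": "Property", "value": raw["confidence"]},
    }

entity = to_smart_data_model(
    {"detector": "SED", "label": "glass_break", "confidence": 0.87, "seq": 42})
print(entity["id"])  # urn:ngsi-ld:Anomaly:SED-42
```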
Hierarchical Data Distribution (HDD)
HDD (Hierarchical Data Distribution) is a software component that optimizes the topic-partitioning process in Apache Kafka, a popular distributed streaming platform.
Hierarchical Data Distribution (HDD)
HDD considers the problem of Apache Kafka data topic partitioning optimisation. Even though Apache Kafka provides some out-of-the-box optimisations, it does not strictly define how each topic shall be efficiently distributed into partitions. HDD models the Apache Kafka topic partitioning process for a given topic, and, given the set of brokers, constraints and application requirements, solves the problem by using innovative heuristics.
SubSystem : Data management and distribution
Type of Component : Methodology
Owner : CNR
Expected TRL : 6
Licensing : MIT
Video
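To make the optimisation problem concrete, here is a toy greedy heuristic in the spirit of HDD's problem statement: distribute a topic's partitions across brokers so that per-broker load stays balanced. HDD's actual heuristics and constraint model are more sophisticated; this only illustrates the problem being solved, with invented loads and broker names.

```python
# Toy sketch: assign each partition (heaviest first) to the currently
# least-loaded broker, keeping per-broker load roughly balanced.
import heapq

def assign_partitions(partition_loads, brokers):
    """Greedy partition-to-broker assignment.
    Returns a dict mapping broker -> list of partition ids."""
    heap = [(0.0, b) for b in brokers]   # (current load, broker)
    heapq.heapify(heap)
    assignment = {b: [] for b in brokers}
    for pid, load in sorted(partition_loads.items(), key=lambda x: -x[1]):
        total, b = heapq.heappop(heap)
        assignment[b].append(pid)
        heapq.heappush(heap, (total + load, b))
    return assignment

# Hypothetical per-partition message rates
loads = {0: 5.0, 1: 3.0, 2: 2.0, 3: 2.0}
assignment = assign_partitions(loads, ["broker-a", "broker-b"])
print(assignment)
```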
CATFlow
CATFLOW is a powerful tool for anonymizing road camera data and presenting it in a user-friendly dashboard.
CATFlow
CATFLOW transforms road camera information to anonymous data. Through a few clicks, our dashboards then present this data and provide critical insights and detailed reports identifying a range of mobility types, trajectories, and junction turning ratios.
SubSystem : Audio, visual and multimodal AI
Type of Component : Software
Owner : GRN
Expected TRL : 7
Licensing : Proprietary
Video
devAIce
devAIce SDK is a powerful software development kit designed to help customers implement advanced audio analytics on their local premises.
devAIce
devAIce SDK is a full-blown audio analysis SDK meant to enable customers to perform intelligent audio analytics on their local premises.
SubSystem : Audio, visual and multimodal AI
Type of Component : Software
Owner : AUD
Expected TRL : 9
Licensing : Proprietary
Visual Anomaly Detection
Visual Anomaly Detection is an advanced tool that utilizes cutting-edge deep learning models to detect abnormal events in video streams with high accuracy.
Visual Anomaly Detection
Visual Anomaly Detection implements efficient, state-of-the-art deep learning models to detect abnormal events in video streams. The unsupervised or weakly-supervised nature of this component decreases the effort required to gather and annotate the data, while maintaining high generalisation ability.
SubSystem : Audio, visual and multimodal AI
Type of Component : Software / Methodology
Owner : AU
Expected TRL : 5
Licensing : Open source License TBD
Video
SED@Edge
SED@Edge is an innovative solution that enables the detection of urban acoustic events using deep learning models on low-cost, low-power microcontrollers with limited computational capabilities.
SED@Edge
SED@Edge implements state-of-the-art deep learning models for the detection of urban acoustic events on low-cost, low-power microcontrollers with very limited computational capabilities. Operating on edge IoT devices located very close to the microphones, this solution brings a considerable energy and bandwidth reduction with respect to cloud-based solutions.
SubSystem : Audio, visual and multimodal AI
Type of Component : Software
Owner : FBK
Expected TRL : 6
Licensing : Open source License TBD
Video
Audio-Visual Anomaly Detection
Audio-Visual Anomaly Detection is a powerful component for detecting abnormal events in multimodal streams.
Audio-Visual Anomaly Detection
Audio-Visual Anomaly detection implements efficient, state-of-the-art deep learning models to detect abnormal events in multimodal streams. The multimodality of the AVAD places it on the cutting edge of research into the usage of multimodal data in the deep learning anomaly detection methods. The unsupervised or weakly-supervised nature of this component decreases the effort required to gather and annotate the data, while maintaining high generalisation ability.
SubSystem : Audio, visual and multimodal AI
Type of Component : Software / Methodology
Owner : AU
Expected TRL : 5
Licensing : Open source License TBD
Video
Visual Crowd Counting
The VCC uses computer vision and deep learning techniques to accurately count people in crowds from video data.
Visual Crowd Counting
The VCC implements proven, state-of-the-art methods for crowd counting trained using transfer learning to allow high performance even on small datasets. The VCC component is accompanied by a host of auxiliary functionalities designed with ease of use and integration within the system framework in mind.
SubSystem : Audio, visual and multimodal AI
Type of Component : Software / Methodology
Owner : AU
Expected TRL : 5
Licensing : Open source License TBD
Video
Audio-Visual Crowd Counting
The AVCC (Audio-Visual Crowd Counting) component is a solution for counting the number of people in a crowd from audio-visual data.
Audio-Visual Crowd Counting
The AVCC implements novel, state-of-the-art methods for crowd counting trained using transfer learning to allow high performance even on small datasets. The AVCC component is accompanied by a host of auxiliary functionalities designed with ease of use and integration within the system framework in mind.
SubSystem : Audio, visual and multimodal AI
Type of Component : Software / Methodology
Owner : AU
Expected TRL : 5
Licensing : Open source License TBD
Video
Automated Audio Captioning
The AAC component analyzes continuous audio streams and periodically generates textual descriptions summarizing the audio content.
Automated Audio Captioning
AAC component implements a state-of-the-art method for automated audio captioning. Automated audio captioning is used in MARVEL to analyze continuous audio streams and to describe periodically the audio content with a textual description. These textual descriptions give a brief summary of actions in the scene.
SubSystem : Audio, visual and multimodal AI
Type of Component : Service
Owner : TAU
Expected TRL : 3
Licensing : Open source License TBD
Video
Sound Event Detection
SED component implements a state-of-the-art method for sound event detection.
Sound Event Detection
The SED component implements a state-of-the-art method for sound event detection. Sound event detection is used in MARVEL to analyze continuous audio streams and to detect active sound events in the stream. Detected sound events give a detailed view of actions in the scene.
SubSystem : Audio, visual and multimodal AI
Type of Component : Software / Methodology
Owner : TAU
Expected TRL : 5
Licensing : Open source License TBD
Video
Sound Event Localisation and Detection
SELD component is a state-of-the-art method for sound event localization and detection.
Sound Event Localisation and Detection
The SELD component implements a state-of-the-art method for sound event localization and detection. Sound event localization and detection is used in MARVEL to analyze continuous audio streams, detect active sound events, and identify their location with respect to the audio capturing device. Detected and localized sound events give a detailed view of actions in the scene.
SubSystem : Audio, visual and multimodal AI
Type of Component : Software / Methodology
Owner : TAU
Expected TRL : 5
Licensing : Open source License TBD
Video
GPURegex
GPURegex is a real-time, high-speed pattern-matching engine that leverages GPUs to accelerate string and regular-expression matching.
GPURegex
FORTH offers a real-time high-speed pattern matching engine that leverages the parallelism properties of GPGPUs to accelerate the process of string and/or regular expression matching. It is offered as a C API and allows developers to build applications that require pattern matching capabilities while simplifying the offloading and acceleration of the workload by exploiting the available GPU(s).
SubSystem : Optimised E2F2C Processing and Deployment
Type of Component : Software
Owner : FORTH
Expected TRL : 5
Licensing : Proprietary
Video
DynHP
DynHP is a software solution that optimizes the use of computing resources in edge devices that have limited capabilities.
DynHP
DynHP makes it possible to execute the inference and training steps of audio-video real-time analytics on devices with limited compute capabilities at the edge of the network, which otherwise could not be used due to insufficient memory, energy, or processing resources.
SubSystem : Optimised E2F2C Processing and Deployment
Type of Component : Methodology
Owner : CNR
Expected TRL : 4
Licensing : MIT
Video
Federated learning
Federated learning is a distributed machine learning approach that allows multiple clients to collaboratively train a shared model while keeping their data private.
Federated learning
Federated learning offers an effective technique for model training that provides privacy protection and network bottleneck mitigation at the same time. As a result, such a distributed approach has been widely accepted as an effective technique for addressing the problem of large, complex, and time-consuming training procedures, and is also particularly suited for edge computing.
SubSystem : Optimised E2F2C Processing and Deployment
Type of Component : Service
Owner : UNS
Expected TRL : 5
Licensing : Apache 2.0
Video
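The aggregation step at the heart of federated learning can be sketched with the canonical federated averaging (FedAvg) rule: clients train locally and only share model weights, which the server averages weighted by local dataset size. This is a minimal illustration of the aggregation step, not MARVEL's actual federated-learning stack.

```python
# Minimal FedAvg sketch: raw data never leaves the clients; the
# server aggregates their weight vectors, weighting each client by
# its number of local samples.

def fed_avg(client_weights, client_sizes):
    """Aggregate per-client weight vectors into a global model."""
    total = sum(client_sizes)
    dim = len(client_weights[0])
    global_w = [0.0] * dim
    for w, n in zip(client_weights, client_sizes):
        for i in range(dim):
            global_w[i] += w[i] * (n / total)
    return global_w

# Two clients with 10 and 30 local samples respectively
w = fed_avg([[1.0, 2.0], [3.0, 4.0]], client_sizes=[10, 30])
print(w)  # [2.5, 3.5]
```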
MARVdash
MARVdash is a platform that provides a web-based environment for data science in Kubernetes-based environments.
MARVdash
SubSystem : Optimised E2F2C Processing and Deployment
Type of Component : Service
Owner : FORTH
Expected TRL : 5
Licensing : Apache 2.0
Video
HPC Infrastructure
High-performance computing resources can greatly enhance the speed and efficiency of AI-based algorithms, especially when dealing with large amounts of data.
HPC Infrastructure
The HPC and cloud infrastructure in the MARVEL project is provided by PSNC, which offers access to world-class solutions in terms of high-performance computing. It includes various computing, storage and network resources that allow taking full advantage of the AI-based algorithms for AV analytics within the MARVEL framework.
SubSystem : E2F2C Infrastructure
Type of Component : Service
Owner : PSNC
Expected TRL : 9
Licensing : Proprietary
Video
Management and orchestration of HPC resources
PSNC's role as a technology partner in the MARVEL project includes providing technical support and tools for managing and orchestrating computing resources, including high-performance computing resources, as well as various storage services.
Management and orchestration of HPC resources
PSNC, as a technology partner, provides technical support and tools for managing and orchestrating computing resources and various classes of storage services connected with the HPC system.
SubSystem : E2F2C Infrastructure
Type of Component : Service
Owner : PSNC
Expected TRL : 9
Licensing : Proprietary
SmartViz
SmartViz is a data visualization solution that aims to help domain experts understand complex data sets.
SmartViz
SmartViz is a cutting-edge data visualization solution that empowers domain experts to make sense of complex data sets. With its intuitive, user-friendly interface, SmartViz makes it easy for data practitioners to collect, analyze, and visualize data in real time, whether they are working with big data or real-time data sources. Its advanced temporal representations, AI-assisted insights, and customizable dashboards make SmartViz the perfect solution for businesses looking to streamline their data analysis processes and make data-driven decisions quickly and effectively, from real-time monitoring to exploratory data analysis.
SubSystem : System outputs
Type of Component : Software
Owner : ZELUS
Expected TRL : 6
Licensing : Apache 2.0
MARVEL Data Corpus-as-a-Service
MARVEL Data Corpus has the potential to create significant impact on both SMEs and the international scientific and research community. By providing access to these data assets, SMEs and startups can test and build innovative applications, potentially creating new business opportunities.
MARVEL Data Corpus-as-a-Service
The MARVEL Data Corpus will give SMEs and start-ups the possibility to test and build their innovative applications on top of these data assets, thus creating new business by exploring extreme-scale multimodal analytics. In addition, by adopting an SLA-enabled Big Data analytics framework, it is expected to maximise the impact that the MARVEL corpus will have on the international scientific and research community.
SubSystem : System outputs
Type of Component : Service
Owner : STS
Expected TRL : 5
Licensing : Public
Video
RBAD
RBAD, a Python-based, logic-driven anomaly detector, uses predefined rules and data from CATFlow to alert on specific anomalies like jaywalking, off-schedule buses, rush-hour heavy vehicles, and pavement bikers.
RBAD
The RBAD is a lightweight, logic-based anomaly detector implemented in Python. It employs a predefined ruleset to detect very specific anomalies, based on the input messages coming from the CATFlow component. The input messages contain information about the objects detected in the video frame, as well as their location and time of detection. Based on this information, combined with the predefined rules, RBAD is able to create a specific anomaly alert. RBAD has been set up to detect the following situations: pedestrians jaywalking, buses arriving not on schedule, heavy-weight vehicles presence during rush hours, and bikers using the pavement.
SubSystem : Audio, Visual and Multimodal AI
Type of Component : Software / Methodology
Owner : GRN
Expected TRL : 5
Licensing : MIT License
Video
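The predefined-ruleset idea behind RBAD can be sketched as a set of named predicates evaluated over each detection message. The message fields, rule names, and thresholds below are illustrative assumptions, not RBAD's actual ruleset.

```python
# Illustrative sketch of a lightweight, logic-based anomaly detector:
# each rule is (name, predicate over a detection message); matching
# messages raise the corresponding anomaly alert.

RULES = [
    ("heavy_vehicle_rush_hour",
     lambda m: m["object"] == "truck" and 7 <= m["hour"] <= 9),
    ("biker_on_pavement",
     lambda m: m["object"] == "bicycle" and m["zone"] == "pavement"),
]

def check_rules(message, rules=RULES):
    """Return the names of all rules triggered by a detection message."""
    return [name for name, pred in rules if pred(message)]

# Hypothetical detection message (object class, location zone, time)
msg = {"object": "truck", "hour": 8, "zone": "road"}
print(check_rules(msg))  # ['heavy_vehicle_rush_hour']
```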
Data Management Platform (DMP)
The Data Management Platform is a holistic solution for real-time analytics, with DatAna, DFB, StreamHandler, and HDD components for data processing and optimization.
Data Management Platform (DMP)
The Data Management Platform (DMP) is a holistic solution for real-time analytics. It integrates the DatAna, DFB, StreamHandler, and HDD components to acquire, transform, distribute, store, and optimize data across the MARVEL framework.
SubSystem : Data management and distribution
Type of Component : Software / Service
Owner : ATOS / ITML / IFAG / CNR
Expected TRL : 6
Licensing : Open source License TBD
Video
MARVEL Assets Demo Series - YOLO SED Demo
YOLO-SED component combines YOLO visual and SED audio analyses for anomaly detection in audiovisual data, enhancing accuracy and sending alerts via an MQTT broker.
MARVEL Assets Demo Series - YOLO SED Demo
The YOLO-SED component is aimed at analyzing audiovisual data on the edge and detecting anomalies using the YOLO object detector and SED audio analysis module. The outputs of these modules are fused together, and the anomaly prediction is sent to an MQTT broker. The SED subsystem detects sound event activity from the given audio segment to enhance visual detections of ambiguous classes.
SubSystem : Audio, Visual and Multimodal AI
Type of Component : Software / Methodology
Owner : GRN
Expected TRL : 5
Licensing : MIT License
Video
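The late-fusion idea behind YOLO-SED, boosting an ambiguous visual detection when a corroborating sound event is active in the same window, can be sketched as follows. The fusion rule, boost value, and threshold are illustrative assumptions, not the component's actual logic.

```python
# Hedged sketch of audio-visual late fusion: audio evidence raises
# the confidence of an ambiguous visual detection before the alert
# decision is made (in MARVEL, the alert would then go to MQTT).

def fuse(visual_conf, audio_active, boost=0.2, threshold=0.5):
    """Combine a visual detection confidence with audio evidence and
    decide whether to raise an anomaly alert."""
    fused = visual_conf + boost if audio_active else visual_conf
    fused = round(min(1.0, fused), 3)
    return fused, fused >= threshold

# Ambiguous visual detection (0.4) confirmed by an active sound event
conf, alert = fuse(0.4, audio_active=True)
print(conf, alert)  # 0.6 True
```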
SubSystem: Audio, Visual and Multimodal AI SubSystem
RBAD, a Python-based, logic-driven anomaly detector, uses predefined rules and data from CATFlow to alert on specific anomalies like jaywalking, off-schedule buses, rush-hour heavy vehicles, and pavement bikers.
SubSystem: Audio, Visual and Multimodal AI SubSystem
The RBAD is a lightweight, logic-based anomaly detector implemented in Python. It employs a predefined ruleset to detect very specific anomalies, based on the input messages coming from the CATFlow component. The input messages contain information about the objects detected in the video frame, as well as their location and time of detection. Based on this information, combined with the predefined rules, RBAD is able to create a specific anomaly alert. RBAD has been set up to detect the following situations: pedestrians jaywalking, buses arriving not on schedule, heavy-weight vehicles presence during rush hours, and bikers using the pavement.
SubSystem :Audio, Visual and Multimodal AI SubSystem
Type of Component : Software / Methodology
Owner : GRN
Expected TRL : 5
Licensing : MIT License
Video
Data Management Platform (DMP)
The Data Management Platform is a holistic solution for real-time analytics, with DatAna, DFB, StreamHandler, and HDD components for data processing and optimization.
Data Management Platform (DMP)
AT component implements a state-of-the-art method for audio tagging. Audio tagging is used in the MARVEL to analyze continuous audio streams and to recognize active sound classes in the stream. Recognized active sound classes give a rough view of the scene.
SubSystem : Data management and distribution
Type of Component : Software / Service
Owner :ATOS / ITML / IFAG / CNR
Expected TRL : 6
Licensing : Open source License TBD
Video
Advanced MEMS microphones
Collect high-quality acoustic data for speech recognition, audio recording, and noise cancellation.
Advanced MEMS microphones
The Advanced MEMS microphones cater customer needs by gathering high quality, low noise acoustic data, which can be used for speech recognition, audio recording, and acquisition of surrounding noise for applications like noise cancellation.
SubSystem : Sensing and perception
Type of Component : Hardware / Software
Owner : IFAG
Expected TRL : 8
Licensing : Open source License TBD
Video
AVDrone
System that classifies crowd behavior while ensuring user privacy.
AVDrone
The proposed system will enable target users to classify the crowd behaviour without breaking any privacy issues.
SubSystem : Sensing and perception
Type of Component : Hardware / Software
Owner : UNS
Expected TRL : 5
Licensing : TBD
Video
AVRegistry
Store and access metadata of MARVEL Audio-Visual sources using a RESTful API compliant with Smart Data Models standard.
AVRegistry
The AV Registry stores metadata of MARVEL Audio-Visual sources in compliance with the Smart Data Models standard and exposes this information through a RESTful API to all MARVEL components that need to consume AV data.
SubSystem : Sensing and perception
Type of Component : Hardware / Software
Owner : ITML
Expected TRL : 6
Licensing : TBD
Video
sensMiner
SensMiner Android app records and annotates environmental acoustics.
sensMiner
SensMiner is an Android app developed to record environmental acoustics as well as user annotations. While the audio is being recorded, the user can in parallel annotate it and store the corresponding segment in the phone memory.
SubSystem : Sensing and perception
Type of Component : Software
Owner : AUD
Expected TRL : 8
Licensing : Proprietary
EdgeSec-VPN
The EdgeSec VPN is a security and privacy solution that is part of the E2F2C framework developed in the MARVEL project. It uses peer-to-peer VPN technology to encrypt data transferred between MARVEL components, ensuring strict communication security.
EdgeSec-VPN
EdgeSec VPN brings security and privacy features to the complete E2F2C framework, developed within the MARVEL project. Based on the technology of peer-to-peer VPNs, it is used to encrypt any data that is transferred between the MARVEL components to meet the requirements of a strict communication security. All participating computing devices form a full mesh network where every device has a direct secure connection with every other device.
SubSystem : Sensing and perception
Type of Component : Software
Owner : FORTH
Expected TRL : 5
Licensing : Open source License TBD
Video
EdgeSec-TEE
EdgeSec TEE ensures confidential computing for Python apps that process sensitive user data using Trusted Execution Environments.
EdgeSec-TEE
EdgeSec TEE offers confidential computing for python applications that process sensitive user data and is based on the technology of Trusted Execution Environments. It guarantees that the code itself and the data that needs to be processed are located inside protected and isolated execution environments that enable confidentiality and integrity, even if processed in untrusted environments.
SubSystem : Security, privacy and data protection
Type of Component : Software
Owner : FORTH
Expected TRL : 5
Licensing : Proprietary
Video
VideoAnony
VideoAnony component detects faces and number plates, and performs anonymization to simplify surveillance camera data analysis in public spaces.
VideoAnony
The VideoAnony component detects people’s faces and car number plates and performs anonymisation: by blurring in the initial phase and potentially by swapping faces at a later stage. The component should simplify the use of data recorded by surveillance cameras for monitoring and analysing public spaces.
SubSystem : Security, privacy and data protection
Type of Component : Software
Owner : FBK
Expected TRL : 6
Licensing : Open source License TBD
Video
AudioAnony
AudioAnony component anonymizes audio streams by replacing speaker voices with preserved speech features and environmental background.
AudioAnony
The goal of this component is to anonymize audio stream by replacing the speaker voice with another one while preserving the other speech fearure and the environmental background. Operating close to the microphones on edge device, or at the beginning of the procesing pipeline, audioanony limits possible privacy issues in the collection and processing of audio data. The component can deployed on a low-end edge device connected to the microphones to provide an anonymized stream in a transparent way
SubSystem : Security, privacy and data protection
Type of Component : Software
Owner : FBK
Expected TRL : 6
Licensing : Open source License TBD
Video
Data Fusion Bus
The Data Fusion Bus (DFB) allows efficient and trustworthy transfer of heterogeneous data between connected components and permanent storage.
Data Fusion Bus
The Data Fusion Bus (DFB) is a fully customisable system that supports the trustworthy and efficient transfer of streamed heterogeneous data between multiple connected components and permanent storage. Within MARVEL, the DFB aggregates AI inference results from all E2F2C layers to post-process, store and re-distribute them to SmartViz and the Data Corpus, both in real time and as archived data.
SubSystem : Security, privacy and data protection
Type of Component : Software / Service
Owner : ITML
Expected TRL : 6
Licensing : TBD
Video
StreamHandler
StreamHandler is a powerful distributed streaming platform based on Apache Kafka, designed for big data applications. It provides high performance, interoperability, resilience, scalability, and security.
StreamHandler
StreamHandler is a high-performance distributed streaming platform for handling real-time data, based on Apache Kafka. The solution is geared toward big data applications and offers interoperability, resilience, scalability and security. In MARVEL, StreamHandler contributes its audiovisual handling capabilities, providing persistence for data streamed by the project’s microphones and cameras. This way, the user can consult aural and visual evidence for events and anomalies detected by MARVEL’s AI components, and act accordingly.
SubSystem : Data management and distribution
Type of Component : Software / Service
Owner : INTRA
Expected TRL : 8
Licensing : Proprietary
Video
DatAna
DatAna is a versatile platform that allows for scalable acquisition, transformation, and communication of streaming data across different layers of computing.
DatAna
DatAna is a scalable data acquisition, transformation and communication platform based on the Apache NiFi ecosystem. DatAna allows processing of streaming data at different layers (edge, fog and cloud) and their routing towards other layers of the computing continuum. In MARVEL, data from the AI inference models is validated, transformed to be compatible with the Smart Data Models initiative, and subsequently transmitted to the upper layers for further processing and storage.
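For intuition, the kind of mapping DatAna performs can be sketched as below. The attribute names are illustrative and do not reproduce an official Smart Data Models schema.

```python
def to_smart_data_model(inference, entity_type="Anomaly", source_id="cam-01"):
    """Map a raw AI inference result onto an NGSI-style entity (schematic:
    field names here are illustrative, not an official Smart Data Model)."""
    return {
        "id": f"urn:ngsi-ld:{entity_type}:{source_id}:{inference['ts']}",
        "type": entity_type,
        "dateObserved": {"type": "DateTime", "value": inference["ts"]},
        "label": {"type": "Text", "value": inference["label"]},
        "confidence": {"type": "Number", "value": round(inference["score"], 3)},
    }
```

Validated, uniformly structured entities like this are what the upper layers of the continuum can store and process without knowing which model produced them.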
SubSystem : Data management and distribution
Type of Component : Software
Owner : ATOS
Expected TRL : 6
Licensing : Open source License TBD
Video
CATFlow
CATFlow is a powerful tool for anonymizing road camera data and presenting it in a user-friendly dashboard.
CATFlow
CATFlow transforms road camera information into anonymous data. Through a few clicks, our dashboards then present this data and provide critical insights and detailed reports identifying a range of mobility types, trajectories, and junction turning ratios.
SubSystem : Audio, visual and multimodal AI
Type of Component : Software
Owner : GRN
Expected TRL : 7
Licensing : Proprietary
Video
devAIce
devAIce SDK is a powerful software development kit designed to help customers implement advanced audio analytics on their local premises.
devAIce
devAIce SDK is a full-blown audio analysis SDK meant to enable customers to perform intelligent audio analytics on their local premises.
SubSystem : Audio, visual and multimodal AI
Type of Component : Software
Owner : AUD
Expected TRL : 9
Licensing : Proprietary
Visual Anomaly Detection
Visual Anomaly Detection is an advanced tool that utilizes cutting-edge deep learning models to detect abnormal events in video streams with high accuracy.
Visual Anomaly Detection
Visual Anomaly Detection implements efficient, state-of-the-art deep learning models to detect abnormal events in video streams. The unsupervised or weakly-supervised nature of this component decreases the effort required to gather and annotate the data, while maintaining high generalisation ability.
SubSystem : Audio, visual and multimodal AI
Type of Component : Software / Methodology
Owner : AU
Expected TRL : 5
Licensing : Open source License TBD
Video
SED@Edge
SED@Edge is an innovative solution that enables the detection of urban acoustic events using deep learning models on low-cost, low-power microcontrollers with limited computational capabilities.
SED@Edge
SED@Edge implements state-of-the-art deep learning models for the detection of urban acoustic events on low-cost, low-power microcontrollers with very limited computational capabilities. Operating on edge IoT devices located very close to the microphones, this solution brings a considerable energy and bandwidth reduction with respect to cloud-based solutions.
SubSystem : Audio, visual and multimodal AI
Type of Component : Software
Owner : FBK
Expected TRL : 6
Licensing : Open source License TBD
Video
Audio-Visual Anomaly Detection
Audio-Visual Anomaly Detection is a powerful component for detecting abnormal events in multimodal streams.
Audio-Visual Anomaly Detection
Audio-Visual Anomaly Detection implements efficient, state-of-the-art deep learning models to detect abnormal events in multimodal streams. The multimodality of the AVAD places it on the cutting edge of research into the usage of multimodal data in deep learning anomaly detection methods. The unsupervised or weakly-supervised nature of this component decreases the effort required to gather and annotate the data, while maintaining high generalisation ability.
SubSystem : Audio, visual and multimodal AI
Type of Component : Software / Methodology
Owner : AU
Expected TRL : 5
Licensing : Open source License TBD
Video
Visual Crowd Counting
The VCC uses computer vision and deep learning techniques to accurately estimate the number of people in a crowd captured by a given camera.
Visual Crowd Counting
The VCC implements proven, state-of-the-art methods for crowd counting trained using transfer learning to allow high performance even on small datasets. The VCC component is accompanied by a host of auxiliary functionalities designed with ease of use and integration within the system framework in mind.
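A common design for crowd-counting models of this kind is to regress a per-pixel density map whose integral is the head count. The toy NumPy illustration below shows that principle with synthetic ground truth; it is not the component's actual network.

```python
import numpy as np

def gaussian_blob(shape, center, sigma=2.0):
    """Unit-mass Gaussian placed at one annotated head position."""
    ys, xs = np.mgrid[0:shape[0], 0:shape[1]]
    g = np.exp(-((ys - center[0]) ** 2 + (xs - center[1]) ** 2) / (2 * sigma ** 2))
    return g / g.sum()

def density_map(shape, head_positions, sigma=2.0):
    """Ground-truth density map: one unit-mass blob per head."""
    d = np.zeros(shape)
    for c in head_positions:
        d += gaussian_blob(shape, c, sigma)
    return d

def count_from_density(d):
    """The estimated crowd count is simply the sum over the density map."""
    return float(d.sum())
```

Because each blob integrates to one, summing the map recovers the number of annotated heads, which is exactly what the trained regressor is asked to reproduce on unseen frames.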
SubSystem : Audio, visual and multimodal AI
Type of Component : Software / Methodology
Owner : AU
Expected TRL : 5
Licensing : Open source License TBD
Video
Audio-Visual Crowd Counting
The AVCC (Audio-Visual Crowd Counting) component is a solution for counting the number of people in a crowd from audio-visual data.
Audio-Visual Crowd Counting
The AVCC implements novel, state-of-the-art methods for crowd counting trained using transfer learning to allow high performance even on small datasets. The AVCC component is accompanied by a host of auxiliary functionalities designed with ease of use and integration within the system framework in mind.
SubSystem : Audio, visual and multimodal AI
Type of Component : Software / Methodology
Owner : AU
Expected TRL : 5
Licensing : Open source License TBD
Video
GPURegex
GPURegex is a real-time, high-speed pattern matching engine that exploits GPU parallelism to accelerate string and regular expression matching.
GPURegex
FORTH offers a real-time high-speed pattern matching engine that leverages the parallelism properties of GPGPUs to accelerate the process of string and/or regular expression matching. It is offered as a C API and allows developers to build applications that require pattern matching capabilities while simplifying the offloading and acceleration of the workload by exploiting the available GPU(s).
SubSystem : Optimised E2F2C Processing and Deployment
Type of Component : Software
Owner : FORTH
Expected TRL : 5
Licensing : Proprietary
Video
SmartViz
SmartViz is a data visualization solution that aims to help domain experts understand complex data sets.
SmartViz
SmartViz is a cutting-edge data visualization solution that empowers domain experts to make sense of complex data sets. With its intuitive, user-friendly interface, SmartViz makes it easy for data practitioners to collect, analyze, and visualize data in real time, whether they are working with big data or live data sources. Its advanced temporal representations, AI-assisted insights, and customizable dashboards make it well suited for organisations looking to streamline their data analysis processes and make data-driven decisions quickly and effectively, from real-time monitoring to exploratory data analysis.
SubSystem : System outputs
Type of Component : Software
Owner : ZELUS
Expected TRL : 6
Licensing : Apache 2.0
Data Management Platform (DMP)
The Data Management Platform is a holistic solution for real-time analytics, with DatAna, DFB, StreamHandler, and HDD components for data processing and optimization.
Data Management Platform (DMP)
The Data Management Platform (DMP) is a holistic solution for real-time analytics that brings together the DatAna, DFB, StreamHandler, and HDD components for data processing and optimisation across the MARVEL framework.
SubSystem : Data management and distribution
Type of Component : Software / Service
Owner : ATOS / ITML / IFAG / CNR
Expected TRL : 6
Licensing : Open source License TBD
Video
RBAD
RBAD, a Python-based, logic-driven anomaly detector, uses predefined rules and data from CATFlow to alert on specific anomalies like jaywalking, off-schedule buses, rush-hour heavy vehicles, and pavement bikers.
RBAD
The RBAD is a lightweight, logic-based anomaly detector implemented in Python. It employs a predefined ruleset to detect very specific anomalies, based on the input messages coming from the CATFlow component. The input messages contain information about the objects detected in the video frame, as well as their location and time of detection. Based on this information, combined with the predefined rules, RBAD creates a specific anomaly alert. RBAD has been set up to detect the following situations: pedestrians jaywalking, buses not arriving on schedule, the presence of heavy-weight vehicles during rush hours, and bikers using the pavement.
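Conceptually, such a ruleset boils down to predicates over incoming detection messages. A minimal Python sketch follows; the message fields and rule thresholds are illustrative, not GRN's actual ruleset.

```python
RULES = [
    # (alert name, predicate over a CATFlow-style detection message)
    ("jaywalking", lambda m: m["class"] == "pedestrian" and m["zone"] == "road"),
    ("heavy_vehicle_rush_hour", lambda m: m["class"] == "truck" and 7 <= m["hour"] < 9),
    ("biker_on_pavement", lambda m: m["class"] == "bicycle" and m["zone"] == "pavement"),
]

def check_rules(message, rules=RULES):
    """Return the list of anomaly alerts raised by one detection message."""
    return [name for name, predicate in rules if predicate(message)]
```

Because the rules are plain predicates, adding a new anomaly type is a one-line change and requires no model retraining.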
SubSystem : Audio, visual and multimodal AI
Type of Component : Software / Methodology
Owner : GRN
Expected TRL : 5
Licensing : MIT License
Video
YOLO-SED
YOLO-SED component combines YOLO visual and SED audio analyses for anomaly detection in audiovisual data, enhancing accuracy and sending alerts via an MQTT broker.
YOLO-SED
The YOLO-SED component is aimed at analyzing audiovisual data on the edge and detecting anomalies using the YOLO object detector and the SED audio analysis module. The outputs of these modules are fused together, and the anomaly prediction is sent to an MQTT broker. The SED subsystem detects sound event activity in the given audio segment to enhance visual detections of ambiguous classes.
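The fusion step can be pictured as a simple late-fusion rule: an ambiguous visual detection gains confidence when a matching sound event is active. In the sketch below, the class pairs, boost value, and topic name are invented for illustration.

```python
import json

# Which sound event supports which visual class (illustrative mapping only).
AUDIO_SUPPORT = {"ambulance": "siren", "motorcycle": "engine_rev"}

def fuse(visual, active_audio_events, boost=0.2):
    """Late fusion: raise the visual score when a supporting SED label is active."""
    score = visual["score"]
    if AUDIO_SUPPORT.get(visual["class"]) in active_audio_events:
        score = min(1.0, score + boost)
    return {"class": visual["class"], "score": round(score, 2)}

# The fused prediction would then be serialised and published to an MQTT topic,
# e.g. client.publish("marvel/anomalies", payload) with a connected client.
payload = json.dumps(fuse({"class": "ambulance", "score": 0.55}, {"siren"}))
```

Publishing only the compact fused prediction, rather than raw audio or video, keeps the traffic leaving the edge device small.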
SubSystem : Audio, visual and multimodal AI
Type of Component : Software / Methodology
Owner : GRN
Expected TRL : 5
Licensing : MIT License
Video
AVRegistry
Store and access metadata of MARVEL Audio-Visual sources using a RESTful API compliant with Smart Data Models standard.
AVRegistry
The AV Registry stores metadata of MARVEL Audio-Visual sources in compliance with the Smart Data Models standard and exposes this information through a RESTful API to all MARVEL components that need to consume AV data.
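Conceptually, a registry entry and a lookup look like the sketch below. The entity fields and URL are invented placeholders, not the official Smart Data Models schema, and a real deployment would sit behind the RESTful API rather than an in-memory dict.

```python
# A toy in-memory registry keyed by entity id (illustrative fields only).
REGISTRY = {
    "urn:ngsi-ld:Camera:cam-01": {
        "type": "Camera",
        "location": {"type": "geo:json",
                     "value": {"type": "Point", "coordinates": [14.51, 35.90]}},
        "streamURL": {"type": "URL", "value": "rtsp://example.invalid/cam-01"},
    },
}

def get_sources(av_type):
    """What a GET /entities?type=Camera request would conceptually return."""
    return [dict(entity, id=eid) for eid, entity in REGISTRY.items()
            if entity["type"] == av_type]
```

Consumers query by type to discover available AV sources, then use the returned metadata (location, stream URL) to attach to the actual streams.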
SubSystem : Sensing and perception
Type of Component : Hardware / Software
Owner : ITML
Expected TRL : 6
Licensing : TBD
Video
Automated Audio Captioning
The AAC component analyzes continuous audio streams and periodically generates textual descriptions that summarize the audio content.
Automated Audio Captioning
AAC component implements a state-of-the-art method for automated audio captioning. Automated audio captioning is used in MARVEL to analyze continuous audio streams and to describe periodically the audio content with a textual description. These textual descriptions give a brief summary of actions in the scene.
SubSystem : Audio, visual and multimodal AI
Type of Component : Service
Owner : TAU
Expected TRL : 3
Licensing : Open source License TBD
Video
Sound Event Detection
SED component implements a state-of-the-art method for sound event detection.
Sound Event Detection
SED component implements a state-of-the-art method for sound event detection. Sound event detection is used in MARVEL to analyze continuous audio streams and to detect active sound events in the stream. Detected sound events give a detailed view of actions in the scene.
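The typical post-processing step in sound event detection turns per-frame class probabilities into time-stamped events. A minimal sketch, where the threshold and frame hop are illustrative values:

```python
def extract_events(frame_probs, threshold=0.5, hop=0.02):
    """Convert per-frame activity probabilities for one sound class into
    (onset, offset) pairs, in seconds given the frame hop."""
    events, onset = [], None
    for i, p in enumerate(frame_probs):
        if p >= threshold and onset is None:
            onset = i                                  # event starts
        elif p < threshold and onset is not None:
            events.append((onset * hop, i * hop))      # event ends
            onset = None
    if onset is not None:                              # still active at stream end
        events.append((onset * hop, len(frame_probs) * hop))
    return events
```

The resulting onset/offset pairs are what downstream components consume as "detected sound events", rather than the raw frame-level scores.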
SubSystem : Audio, visual and multimodal AI
Type of Component : Software / Methodology
Owner : TAU
Expected TRL : 5
Licensing : Open source License TBD
Video
Sound Event Localisation and Detection
SELD component is a state-of-the-art method for sound event localization and detection.
Sound Event Localisation and Detection
SELD component implements a state-of-the-art method for sound event localization and detection. Sound event localization and detection is used in MARVEL to analyze continuous audio streams, detect active sound events, and identify their location with respect to the audio capturing device. Detected and localized sound events give a detailed view of actions in the scene.
SubSystem : Audio, visual and multimodal AI
Type of Component : Software / Methodology
Owner : TAU
Expected TRL : 5
Licensing : Open source License TBD
Video
Federated learning
Federated learning is a distributed machine learning approach that allows multiple clients to collaboratively train a shared model while keeping their data private.
Federated learning
Federated learning offers an effective technique for model training that provides privacy protection and mitigates network bottlenecks at the same time. As a result, this distributed approach has been widely accepted as an effective technique for addressing the problem of large, complex, and time-consuming training procedures, and is also particularly suited for edge computing.
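The core aggregation step, federated averaging, can be written in a few lines of NumPy. This is a schematic of one round; real deployments add client sampling, secure aggregation, and compression on top.

```python
import numpy as np

def fed_avg(client_weights, client_sizes):
    """One round of federated averaging: clients train locally and send only
    model weights; the server averages them weighted by local dataset size,
    so raw data never leaves the clients."""
    total = sum(client_sizes)
    return sum((n / total) * w for w, n in zip(client_weights, client_sizes))
```

Only the weight vectors cross the network, which is why federated learning both protects privacy and avoids shipping raw audio-visual streams to the cloud.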
SubSystem : Optimised E2F2C Processing and Deployment
Type of Component : Service
Owner : UNS
Expected TRL : 5
Licensing : Apache 2.0
Video
MARVdash
MARVdash is a platform that provides a web-based environment for data science in Kubernetes-based environments.
MARVdash
MARVdash provides a web-based environment for data science in Kubernetes-based environments, used to deploy and manage MARVEL services on the underlying infrastructure.
SubSystem : Optimised E2F2C Processing and Deployment
Type of Component : Service
Owner : FORTH
Expected TRL : 5
Licensing : Apache 2.0
Video
HPC Infrastructure
High-performance computing resources can greatly enhance the speed and efficiency of AI-based algorithms, especially when dealing with large amounts of data.
HPC Infrastructure
The HPC and cloud infrastructure in the MARVEL project is provided by PSNC, which offers access to world-class solutions in terms of high-performance computing. It includes various computing, storage and network resources that allow taking full advantage of the AI-based algorithms for AV analytics within the MARVEL framework.
SubSystem : E2F2C Infrastructure
Type of Component : Service
Owner : PSNC
Expected TRL : 9
Licensing : Proprietary
Video
Management and orchestration of HPC resources
PSNC's role as a technology partner in the MARVEL project includes providing technical support and tools for managing and orchestrating computing resources, including high-performance computing resources, as well as various storage services.
Management and orchestration of HPC resources
PSNC, as a technology partner, provides technical support and tools for managing and orchestrating computing resources, as well as storage services of various classes connected with the HPC system.
SubSystem : E2F2C Infrastructure
Type of Component : Service
Owner : PSNC
Expected TRL : 9
Licensing : Proprietary
MARVEL Data Corpus-as-a-Service
MARVEL Data Corpus has the potential to create significant impact on both SMEs and the international scientific and research community. By providing access to these data assets, SMEs and startups can test and build innovative applications, potentially creating new business opportunities.
MARVEL Data Corpus-as-a-Service
The MARVEL Data Corpus will give SMEs and start-ups the possibility to test and build their innovative applications on top of these data assets, thus creating new business by exploring extreme-scale multimodal analytics. In addition, by adopting an SLA-enabled Big Data analytics framework, it is expected to maximise the impact that the MARVEL corpus will have on the international scientific and research community.
SubSystem : System outputs
Type of Component : Service
Owner : STS
Expected TRL : 5
Licensing : Public
Video
Audio Tagging
AT component in MARVEL implements a state-of-the-art method for audio tagging, which is used to analyze continuous audio streams and recognize active sound classes in the stream.
Audio Tagging
AT component implements a state-of-the-art method for audio tagging. Audio tagging is used in MARVEL to analyze continuous audio streams and to recognize active sound classes in the stream. Recognized active sound classes give a rough view of the scene.
SubSystem : Audio, visual and multimodal AI
Type of Component : Service
Owner : TAU
Expected TRL : 5
Licensing : Open source License TBD
Video
Hierarchical Data Distribution (HDD)
HDD (Hierarchical Data Distribution) is a software component that optimizes the topic partitioning process in Apache Kafka, a popular distributed streaming platform.
Hierarchical Data Distribution (HDD)
HDD considers the problem of Apache Kafka data topic partitioning optimisation. Even though Apache Kafka provides some out-of-the-box optimisations, it does not strictly define how each topic shall be efficiently distributed into partitions. HDD models the Apache Kafka topic partitioning process for a given topic, and, given the set of brokers, constraints and application requirements, solves the problem by using innovative heuristics.
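To make the problem concrete, here is a toy greedy heuristic, not HDD's actual algorithm, that assigns a topic's partitions to brokers so that the expected load stays balanced:

```python
import heapq

def assign_partitions(partition_loads, n_brokers):
    """Place each partition, heaviest first, on the currently least-loaded
    broker. partition_loads maps partition id -> expected throughput."""
    heap = [(0.0, b) for b in range(n_brokers)]  # (current load, broker id)
    heapq.heapify(heap)
    assignment = {}
    for pid, load in sorted(partition_loads.items(), key=lambda kv: -kv[1]):
        current, broker = heapq.heappop(heap)
        assignment[pid] = broker
        heapq.heappush(heap, (current + load, broker))
    return assignment
```

HDD goes further by also taking broker constraints and application requirements into account, but the sketch shows why a deliberate partition-to-broker mapping can outperform Kafka's default round-robin placement under skewed loads.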
SubSystem : Data management and distribution
Type of Component : Methodology
Owner : CNR
Expected TRL : 6
Licensing : MIT
Video
DynHP
DynHP is a software solution that optimizes the use of computing resources in edge devices that have limited capabilities.
DynHP
DynHP makes it possible to execute the inference and training steps of audio-video real-time analytics on devices with limited compute capabilities at the edge of the network, which otherwise could not be used due to insufficient memory, energy, or processing resources.
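For intuition, one common way to fit a model onto a constrained device is to prune low-magnitude weights. The NumPy sketch below illustrates that general idea and is not necessarily DynHP's exact method:

```python
import numpy as np

def magnitude_prune(weights, sparsity=0.5):
    """Zero out (at least) the given fraction of weights, smallest magnitudes
    first. Illustrative compression step, not DynHP's specific algorithm."""
    k = int(weights.size * sparsity)
    if k == 0:
        return weights
    threshold = np.sort(np.abs(weights).ravel())[k - 1]
    return np.where(np.abs(weights) <= threshold, 0.0, weights)
```

The zeroed weights can then be stored and computed in sparse form, cutting the memory and energy footprint, which is exactly the kind of resource headroom an edge device needs.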
SubSystem : Optimised E2F2C Processing and Deployment
Type of Component : Methodology
Owner : CNR
Expected TRL : 4
Licensing : MIT
Video
IPR Management
There are 32 assets owned by various partners. Although it is still early, and some partners have not yet decided under which type of license they want to market their results, there is a mix of Open Source and proprietary licenses, so a 100% Open Source approach will not be possible. In the future, the asset owners will consider releasing parts of the technologies developed under MARVEL as Open Source. The Open Source licenses declared so far, such as MIT and Apache 2.0, do not present incompatibilities among them.
Experience the Future of Smart Cities Now! Join MARVEL!
Contact us today to start your smart city journey and request a demo!
Funding
This project has received funding from the European Union’s Horizon 2020 Research and Innovation program under grant agreement No 957337. The website reflects only the view of the author(s) and the Commission is not responsible for any use that may be made of the information it contains.