Results
The main results of the ongoing project are summarized here through the abstracts of the project deliverables.
Deliverable D1.1: State of the Art (due at the end of month 6)
Summary. This deliverable provides an overview of the current state of the art in web accessibility, delving into the realm of assistive technologies. It also introduces the field of conversational artificial intelligence and explores how this innovative technology is currently being leveraged to deliver digital services to users. Furthermore, the document gives an overview of the advancements and applications of Large Language Models (LLMs), examining their role in the digital landscape and the ways in which they are used today. The document also discusses the potential and limitations of LLMs with respect to their adoption in the PROTECT project.
Deliverable D1.2: Design space and Models for Conversational Web Browsing (due at the end of month 6)
Summary. This document provides a comprehensive definition of the context within which the subsequent project activities will be conducted. Specifically, it details the outcomes of previous research conducted at PoliMI in 2022. These findings have been instrumental in guiding the consortium in identifying the key design dimensions to be addressed in the development of a new paradigm for the Conversational Web. The selected design dimensions provide the basis for the ongoing revision and enhancement of this paradigm, which will be assessed through an extensive validation study focusing on blind and visually impaired users.
Deliverable D1.3: Architectural models for the integration of Web Technologies and Conversational AI (due at the end of month 6)
Summary. This deliverable introduces architectural requirements and models for the provision of a platform enabling the Conversational Web paradigm. The document illustrates the architectural choices adopted for an initial platform prototype. It then highlights how Large Language Models (LLMs) can be adopted to improve the technical performance of the Web page interpretation algorithm and the quality of the user experience. The latter is achieved in particular thanks to the fluidity that LLMs can confer on the dialogic interaction with the ConWeb conversational agent, even in the absence of any specific training of the Natural Language Understanding (NLU) component.
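The untrained-NLU role described for the LLM can be illustrated with a minimal sketch: a single zero-shot prompt asks the model to map a user's spoken request onto one of a small set of browsing intents, with no task-specific training data. All names here (the intent set, the prompt shape, the reply format) are illustrative assumptions, not the actual ConWeb interfaces.

```python
# Hypothetical sketch of zero-shot NLU via an LLM prompt.
# Intent names and the reply format are illustrative assumptions,
# not the real ConWeb platform API.

INTENTS = ["read_section", "follow_link", "fill_field", "summarize_page"]

def build_nlu_prompt(utterance: str, page_elements: list[str]) -> str:
    """Compose a zero-shot classification prompt: the LLM is asked to pick
    one intent and a target page element, with no trained NLU component."""
    elements = "\n".join(f"- {e}" for e in page_elements)
    return (
        "You assist a blind user who is browsing a web page by voice.\n"
        f"Available intents: {', '.join(INTENTS)}\n"
        f"Page elements:\n{elements}\n"
        f'User said: "{utterance}"\n'
        "Answer only with: intent=<intent>; target=<element or none>"
    )

def parse_nlu_reply(reply: str) -> dict:
    """Parse the constrained reply format into an intent/target pair."""
    parts = dict(p.split("=", 1) for p in reply.split("; "))
    return {"intent": parts["intent"], "target": parts["target"]}
```

For example, if the model answered `intent=follow_link; target=Contacts`, `parse_nlu_reply` would yield `{"intent": "follow_link", "target": "Contacts"}`, which the agent could then act upon.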
Deliverable D4.2: Request for ethical approval (due at the end of month 6)
Summary. Ethical aspects and data management are carefully addressed in the PROTECT project. Since several experimental studies will be performed, we defined from the very beginning a precise plan for managing the ethical aspects of such studies and the collected data. The Ethical Committee of the University of Bari has recently been created but is not yet operating; therefore, we could not submit a request for approval of the project's user studies to it. However, the studies are conducted in collaboration with colleagues at Politecnico di Milano, who are formally requesting approval for these studies from their Ethical Committee.
Deliverable D2.1: Specification of the conversational patterns for Web browsing (due at the end of month 12)
Summary. This deliverable outlines the new patterns for conversational Web browsing that derive from the integration of Large Language Models (LLMs) within the ConWeb architecture described in Deliverable D1.3. Specifically, integrating LLMs into the segment of the architecture responsible for Natural Language Understanding (NLU) has enabled new conversational patterns and corresponding intents that were previously unachievable with earlier implementations. These new patterns, intents, and their implementations through designated bots are described in the following sections of this deliverable, together with the resulting outcomes, insights, and potential evolutions.
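The mapping between recognized intents and designated bots can be sketched as a simple dispatch table: each conversational pattern contributes one or more intents, and each intent is routed to the bot that implements it. The class name, intent names, and handlers below are illustrative assumptions, not the actual ConWeb implementation.

```python
# Hypothetical sketch of routing recognized intents to designated bots.
# All names are illustrative assumptions, not the real ConWeb code.

from typing import Callable

class BotRegistry:
    """Routes a recognized intent to the bot implementing its pattern."""

    def __init__(self) -> None:
        self._bots: dict[str, Callable[[str], str]] = {}

    def register(self, intent: str, bot: Callable[[str], str]) -> None:
        # Each conversational pattern registers the intents it supports.
        self._bots[intent] = bot

    def dispatch(self, intent: str, target: str) -> str:
        # Unrecognized intents fall back to a clarification request,
        # keeping the dialogue going instead of failing silently.
        bot = self._bots.get(intent)
        if bot is None:
            return f"Sorry, I cannot handle '{intent}' yet. Could you rephrase?"
        return bot(target)

registry = BotRegistry()
registry.register("summarize_page", lambda t: f"Summary of {t}: ...")
registry.register("follow_link", lambda t: f"Opening link '{t}'.")
```

A registry of this shape makes it easy to add the new LLM-enabled patterns incrementally: each new intent only requires registering one more handler.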
Deliverable D2.2: Design Transparency and Explainability in Conversational AI (due at the end of month 12)
Summary. This deliverable represents the culmination of efforts in Task T2.2 of the PROTECT project. This task investigated how to increase user trust in a conversational agent (CA) through innovative approaches to transparency and explainability. Building upon insights gathered in Tasks T1.1 and T1.2, which addressed user requirements, challenges, and design space validation, Task T2.2 identified key factors undermining user trust, particularly concerning the perception of content manipulation by CAs. This deliverable explores current methodologies in Explainable AI; the goal of PROTECT is to shift from traditional global or local explanation paradigms to user-oriented explanations specifically tailored for end users rather than AI experts. These explanations are designed to improve system interpretability, foster trust, and empower users by allowing clearer understanding of, and control over, their interactions with the CA. This deliverable also examines the interplay between trust, trustworthiness, explainability, and transparency, with the aim of integrating them into the design of CAs.