AI bias and project MARVEL
Artificial intelligence (AI) is capable of processing large amounts of data and recognizing patterns that cannot be readily detected by humans. It can be used to automate decision-making processes, which can increase efficiency and reduce human error. Notwithstanding the obvious benefits of artificial intelligence, there are undeniable threats posed by its development and use. One of the greatest challenges to overcome is avoiding AI bias.
The notion of AI bias
AI bias refers to systematic and unfair decisions made by an artificial intelligence system that discriminate against certain individuals or groups based on their race, gender, age, religion, or other personal characteristics. AI bias can occur when the data used to train the system are not representative of the real world, leaving the system unable to make accurate predictions or decisions for all people equally. AI bias can have serious consequences, such as perpetuating social inequalities, reinforcing discriminatory practices, and violating individual rights. It is therefore important to identify and correct biases in AI systems to ensure fairness and uphold ethical principles.
Bias can arise from many factors, including but not limited to algorithm design, unintended or unanticipated use, and decisions about how data are coded, collected, selected, or used to train the algorithm. Bias can also enter algorithmic systems through preexisting cultural, social, or institutional expectations, through technical limitations of their design, through use in unanticipated contexts, or through audiences not considered in the original design of the software. Ultimately, AI systems can only be as unbiased as the data on which they are trained: if the data are biased, the resulting system will be biased as well.
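To make the point about training data concrete, here is a minimal, hypothetical Python sketch (not part of MARVEL's tooling) that compares each demographic group's share in a training set against its share in the target population. The group labels, population shares, and flagging threshold are illustrative assumptions.

```python
import numpy as np

def representation_gaps(train_groups, population_shares):
    """Return the difference between each group's share in the training
    data and its share in the population (negative = under-represented)."""
    values, counts = np.unique(train_groups, return_counts=True)
    train_shares = dict(zip(values, counts / counts.sum()))
    return {g: train_shares.get(g, 0.0) - p
            for g, p in population_shares.items()}

# Toy data: group "B" makes up 30% of the population but only 10% of the sample.
rng = np.random.default_rng(0)
train_groups = rng.choice(["A", "B"], size=1000, p=[0.9, 0.1])
gaps = representation_gaps(train_groups, {"A": 0.7, "B": 0.3})

for group, gap in gaps.items():
    flag = "UNDER-REPRESENTED" if gap < -0.05 else "ok"
    print(f"group {group}: gap {gap:+.2f} ({flag})")
```

In practice, such a check would run before training, alongside richer audits of label quality and feature coverage; it only catches one simple form of data bias.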
Legal and ethical management and AI bias
Privanova acts as legal and ethics manager in several projects and therefore makes significant efforts to prevent AI bias in projects such as MARVEL. One of its main goals is to raise awareness of the importance of dealing with bias in AI systems. Growing awareness gives project partners, researchers, and developers the opportunity to create more inclusive and equitable AI systems.
The rights and freedoms granted by EU law protect, among other things, human dignity. For this reason, EU law is described as human-centric: human beings enjoy a unique and inalienable moral status with primacy in the civil, political, economic, and social spheres. Fundamental rights and freedoms fall under the first component of the European Commission’s ‘Ethics Guidelines for Trustworthy AI’. Trustworthy AI, often equated with lawful AI, is not only about meeting legal requirements; it also rests on compliance with ethical principles. Law and ethics, and consequently lawful AI and ethical AI, are conceptually closely related. Lawful AI is essentially about developing, deploying, and using AI in accordance with mandatory rules and laws. Given that AI should improve individual and collective well-being, it is fair to argue that the ethical principles applied to trustworthy AI are rooted in fundamental rights. Fundamental rights are ethical imperatives, and all AI practitioners should therefore strive to follow them.
The Guidelines for Trustworthy AI are based on seven key requirements that AI systems should meet. These requirements apply to different stakeholders, such as developers, deployers, end users, and society at large. Thorough consideration of the Trustworthy AI principles can lead to manageable strategies, tactics, and ultimately concrete actions that prevent AI bias in specific contexts. With these principles in mind, the partners in project MARVEL have found their own way to prevent AI bias in the research and development activities conducted within the project.
MARVEL and prevention of AI bias
One of the goals of the MARVEL project is to develop a lawful, ethical, and robust AI system. To achieve this, MARVEL relies on the principles of diversity, non-discrimination, and fairness to build a trustworthy AI system. Particular attention is paid to eliminating or mitigating potential biases in the developed AI models. The MARVEL framework includes a variety of AI components, all of which were analyzed to identify potential AI biases. The sensitivity of each component was evaluated, and precautionary measures were planned.
The analysis indicated that the likelihood of AI bias within the MARVEL AI system should be considered low. Nevertheless, the MARVEL plan includes a risk assessment and the following precautionary measures:
- careful selection/augmentation of training datasets – no biased training data will be used;
- ensuring that data come from a diverse and representative group of individuals;
- data acquisition will cover a fair time span;
- no selection or rejection of specific types of input data will be performed;
- continuous monitoring of the results to identify potential issues related to bias, discrimination, or poor performance of the AI system (a minimal sketch of such a check follows this list);
- ensuring that the system’s components work reliably and efficiently across different cities and countries;
- equal access to services developed within the MARVEL architecture;
- the AI models/algorithms developed and the corresponding datasets will be made publicly available;
- a broad range of stakeholders will be involved in the design and development of the MARVEL AI system;
- the impact of the AI system on potential end users and/or subjects will be evaluated.
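As referenced in the monitoring item above, here is a minimal, hypothetical Python sketch of what such continuous monitoring could look like: it tracks the demographic parity difference (the gap in positive-prediction rates between groups) over batches of predictions and flags drift past an assumed tolerance. The metric choice, threshold, and simulated data are illustrative assumptions, not MARVEL's actual pipeline.

```python
import numpy as np

def demographic_parity_difference(predictions, groups):
    """Max gap in positive-prediction rate between any two groups."""
    rates = [predictions[groups == g].mean() for g in np.unique(groups)]
    return max(rates) - min(rates)

ALERT_THRESHOLD = 0.10  # assumed tolerance for the rate gap

rng = np.random.default_rng(42)
for batch in range(3):
    groups = rng.choice(["A", "B"], size=500)
    # Simulated model outputs; later batches drift toward favouring group "A".
    p_positive = np.where(groups == "A", 0.5 + 0.1 * batch, 0.5)
    predictions = rng.random(500) < p_positive
    gap = demographic_parity_difference(predictions, groups)
    status = "ALERT" if gap > ALERT_THRESHOLD else "ok"
    print(f"batch {batch}: parity gap {gap:.2f} ({status})")
```

Demographic parity is only one of several fairness metrics; a real deployment would choose metrics appropriate to each component and its context.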
By defining and carrying out concrete activities to evaluate the MARVEL AI components, the project partners have made significant efforts to minimize ethical risks. Identifying potential risks at an early stage of project implementation and developing appropriate mitigation measures contribute fundamentally to one of the project’s goals – the development of a trustworthy AI system for the citizens of a smart city.
Blog signed by: PN team
Funding
This project has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 957337. The website reflects only the view of the author(s), and the Commission is not responsible for any use that may be made of the information it contains.