Journal of Law, Cognitive Science 
and Artificial Intelligence




Topic: “Artificial Intelligence and fairness”


Dear colleagues,

i-lex – the Journal of Law, Cognitive Science and Artificial Intelligence invites scholars from all areas of law, computing and philosophy to contribute to the first issue of the journal's 14th volume, to be published online in the first half of 2021. The issue is devoted to the problem of fairness in AI-based systems and applications.

AI is rapidly changing every aspect of our economy and society, enabling automated decision-making in domains that require complex choices based on multiple factors and non-predefined criteria. AI-powered applications have been deployed to perform many tasks, such as investment assessment, recruitment decisions and creditworthiness evaluation, and prototypes have also been tested in judicial matters such as bail, parole and recidivism.

In recent years, a wide debate has taken place on the prospects and risks of algorithmic assessments and decisions concerning individuals. Some scholars have observed that in many domains automated predictions and decisions are not only cheaper, but also more precise and impartial than human ones. AI systems can avoid the typical fallacies of human psychology (overconfidence, loss aversion, anchoring, confirmation bias, the representativeness heuristic, etc.) and the widespread human inability to process statistical data, as well as typical human prejudice (concerning, e.g., ethnicity, gender or social background). Others have underscored the possibility that algorithmic decisions may be mistaken or discriminatory, leading to unfairness. Only in rare cases will algorithms engage in explicit unlawful discrimination, so-called disparate treatment, basing their outcome on prohibited features (predictors) such as race, ethnicity or gender. More often a system's outcome will be discriminatory owing to its disparate impact, i.e., because it disproportionately affects certain groups without an acceptable rationale.

This issue intends to offer an overview of the technical, legal and philosophical issues related to AI-based systems and their potential for discrimination, as well as practical methods and normative solutions to ensure fairness in AI-based outcomes.

We encourage theoretical analysis and socio-legal inquiries, as well as the presentation of computational models and applications to support legal practice.

Potential topics for the call include (but are not limited to):

  • Concepts of fairness for AI applications
  • Unfairness and discrimination through AI systems
  • Designing, implementing and deploying fair AI systems
  • Cognitive and statistically acquired biases
  • Fair data and algorithms
  • Transparency and explainability to address unfairness
  • Accountability for decisions made by AI
  • Fairness, privacy and data protection
  • Public policies concerning AI and fairness

Articles may be written in either English or Italian. We are looking for research articles, research reports, and book reviews. Articles should preferably be between 8 and 30 pages in length. All submissions will undergo a blind peer review process.

Deadline for submission: 1 June 2021.

Please draft your contributions using the template downloadable at:

Please submit your contributions to:

The PDF of the call for papers is available at this link:
