Thanks to its ability to analyse vast amounts of data, businesses can use AI-powered software to offer certain products to certain consumers. One potential outcome of AI is therefore unfair discrimination between different groups of people, for instance on the basis of economic criteria or a person's health condition. Companies could, for example, decide to first offer an innovative product only to the most affluent customers, or withhold insurance offers from those who are ill.
- AI-based products and services must be user-friendly and legally compliant by default and by design. In particular, they must respect EU consumer, safety and data protection rules. Discrimination, lack of transparency and breaches of privacy must be avoided.
- Consumers should have the right to object to an automated decision-making (ADM) process and to contest the decision it generates. Users should have the right to transparency concerning the parameters on which offers are based, and to an explanation of why a machine has come up with a particular result.
- The EU should adopt appropriate liability rules for situations where consumers are harmed by unsafe or defective products, digital content products (such as online games) and services (such as a messaging app).
- As a general principle, companies must introduce effective mechanisms that allow audits of how AI/ADM systems use people's data. AI/ADM auditing should be carried out by independent third parties or dedicated public bodies.
- For certain sectors, ethical guidance on the development and use of AI can be important. However, ethics can never, and should never, replace laws that protect people, are binding on businesses and are enforceable. We need to make sure that existing rights are updated and that new protections are established where these new developments create gaps.