
08 April 2019

ACCA welcomes the publication of the Ethics Guidelines for Trustworthy AI by the EC's High-Level Expert Group on Artificial Intelligence


ACCA supports the aims of these important guidelines to establish a clear and comprehensive framework for achieving trustworthy, ethical and robust AI. Confidence in the technology’s development and its applications is vital for its success and society’s buy-in.

For ACCA, ethics is an area where people must take the lead when it comes to machine intelligence. The ethical side is largely human-driven today, and ACCA shares the view of the High-Level Expert Group on Artificial Intelligence (HLEG on AI) that AI systems need to be human-centric and committed to the service of humanity and the common good, with the goal of improving human welfare and freedom.

The Group’s Guidelines advocate a joined-up approach to AI ethics based on the fundamental rights enshrined in the EU Treaties, the EU Charter and international human rights law. ACCA welcomes this approach, believing that respect for fundamental rights, together with the profession’s existing Code of Ethics, provides the right foundation for identifying the ethical principles and values that need to be adapted for AI.

Narayanan Vaidyanathan, head of business insights at ACCA, says: ‘According to 94 per cent of respondents to an ACCA survey on Ethics and Trust in the Digital Age, technology may have an impact on the details one needs to understand in order to be ethical, but it does not change the importance of being ethical.

‘The fundamental principles for accountants, established by the IESBA, still apply and remain relevant in the digital age. AI is not designed to discover its own ethics; it is our job to draw the ethical lines from a regulatory perspective and to decide how to use information in the right way. The Ethics Guidelines for Trustworthy AI are therefore a step in the right direction.

‘When considering the potential of AI and machine learning (ML), professional accountants need to think not only of the potential benefits and long-term sustainable advantages, but also of the risks posed by AI systems, which may have negative impacts and unintended consequences. Managing those risks depends in no small way on ensuring that ethical considerations are given sufficient emphasis when exploring AI and ML adoption. ACCA welcomes the recommendation to adopt adequate measures to mitigate these risks where appropriate,’ Narayanan Vaidyanathan adds.

ACCA also shares the HLEG on AI’s seven key requirements for trustworthy AI, namely human agency and oversight; technical robustness and safety; privacy and data governance; transparency; diversity, non-discrimination and fairness; environmental and societal well-being; and accountability.

