Artificial intelligence (AI) has emerged as a transformative force in corporate governance, reshaping the way data is processed and offering deeper data-driven insights. Effectively and ethically incorporating AI
into boardroom decision-making could be a game-changer for businesses.
With the latest release of Malaysia’s National Guidelines on AI Governance and Ethics, companies are now
better equipped to incorporate AI into their operations. This article explores how AI can be harnessed to
improve oversight by directors of Malaysian companies, while highlighting key developments in the field.
In less than two years since the launch of the AI chatbot ChatGPT, AI technology has
revolutionised the global industrial landscape. The uptake of AI by companies has surged, driven by increasing
recognition of its potential to create business value through automating routine tasks, forecasting market trends, and detecting fraud.
In a 2024 survey conducted by McKinsey Global Institute, 65 per cent of respondents reported
that their organisations regularly use generative AI – nearly double the share reported the previous year.
The global survey, which provides a snapshot of AI uptake across a wide range of regions and industries, also
found that AI usage has increased across all management levels. Eight per cent of mid-level managers and 12 per
cent of senior managers said they now use AI regularly for work, up from 7 per cent and 10 per cent respectively
in 2023.
The most significant increase was at board level, with the percentage of C-suite executives who use AI regularly
for work doubling from 8 to 15 per cent over the past year.
“The board of directors’ multidimensional role encompasses strategic planning, legal compliance, fiduciary
duties, financial oversight, governance oversight, and risk management – all areas where AI's influence is
poised to be significant,” said Bopana Ganapathy, vice president for data science and analytics at the American
advertising services firm Epsilon, in a recent opinion piece.
“With its ability to process vast amounts of data quickly and efficiently, AI can assist directors in making more
informed decisions,” Ganapathy added.
Despite the rapid pace of AI uptake in organisations, governance of its usage remains limited, with just 18 per cent
of respondents reporting that their workplace has an enterprise-wide council or board with the authority to make
decisions on responsible AI governance.
As such, global regulators have stepped up development of new standards and policies to govern the use of AI and
address pertinent risks, such as leakage of sensitive information, algorithmic bias, misinformation,
and a general lack of transparency in AI models.
In May this year, the Council of the European Union gave final approval to an Artificial Intelligence Act that aims to foster safe and trustworthy AI
systems across the EU. In 2023, the Organisation for Economic Co-operation and Development (OECD) reported over
930 policy initiatives developed across 71 jurisdictions based on its AI principles.
Malaysia’s National AI Standards
Among Southeast Asian nations, AI governance remains nascent, with Malaysia being one of the first few in the
region to have developed a national AI governance framework.
Recognising AI’s potential to spur economic growth, the federal government has accelerated Malaysia’s efforts to embrace the technology and establish the country as one of the region’s leading digital economies, noted Ganesh Kumar Bangah, executive chairman of Australia-listed digital marketing solutions provider Xamble Group.
Bangah is also an advisor to the National Tech Association of Malaysia (PIKOM), one of the collaborating
organisations for the country’s first national guideline document – the Malaysian
National Artificial Intelligence Roadmap 2021-2025 (AI-RMAP).
Launched in August 2021 by the Ministry of Science, Technology and Innovation (MOSTI), the AI-RMAP sets out
frameworks for the integration of AI into different sectors of the economy through policy initiatives, and
serves as a blueprint for the nation’s AI trajectory over its five-year span.
Bangah believes the AI-RMAP is a timely development and noted that companies have responded proactively,
indicating growing recognition of the importance of responsible AI usage and of guidelines that enhance
trust and credibility among all stakeholders.
To achieve the objectives of the AI-RMAP, MOSTI has also developed the National Guidelines on AI Governance
and Ethics (AIGE), launched on 20 September this year. Focusing on the ethical development and deployment of AI, the AIGE
establishes a national code of ethics and governance that aligns with global standards for sustainable
development and corporate responsibility.
Ong Chin Seong, PIKOM chairman and spokesperson, told Bursa Sustain that he looks forward to engaging with MOSTI
to enhance the AIGE further. He welcomed the introduction of a nationwide AI governance framework but added that
industry self-regulation must operate alongside government regulation.
Ong also noted that regulation generally lags the development of new technologies, so companies that are overly
focused on compliance may hold back from tapping into the latest advancements.
“Burdensome regulations can make it expensive and time-consuming for companies to develop and implement AI. This
could stifle innovation, particularly for small businesses,” he warned.
Presently, the implementation of the AIGE remains voluntary for stakeholders.
Earlier this year, PIKOM published its own AI Ethics policy paper, also developed based on AI-RMAP policies.
The paper aims to provide additional guidance on continuous education and multi-stakeholder collaboration.
According to Ong, this set of ethics-centric policies takes a risk-based approach and should be reviewed
regularly to align with industry needs and technology maturity in Malaysia.
Pictured: Ganesh Kumar Bangah, executive chairman of Xamble Group; Ong Chin Seong, chairman of the National Tech Association of Malaysia (PIKOM); and Dr Matthew Wong, co-founder of CarbonGPT.
As Malaysia works towards setting up a national AI office, as announced by Prime Minister Anwar Ibrahim in
October, boardrooms that choose to harness the power of AI in decision-making should be prepared to step up
governance to ensure ethical, sustainable, and responsible use of the technology.
The latest set of guidelines sets out how key principles for responsible AI can be translated into strengthened governance, and includes guidance on data governance, data sharing, evaluation of AI use and performance, and management of risks.
Upskilling Boards for Responsible AI Adoption
Directors should foster a culture of learning and adaptation towards AI, so that they can stay abreast of the
evolving technological and regulatory landscapes, said Bangah.
His view was echoed by Dr Matthew Wong, co-founder of sustainability reporting AI platform CarbonGPT. Wong
expressed concern that more than half of the Malaysian directors he has spoken to have never used ChatGPT,
whether due to a knowledge gap or to resistance stemming from uncertainties over cost and implementation.
“Public awareness is still rudimentary,” Wong told Bursa Sustain.
To overcome the initial hurdle to AI adoption, board members could consider investing in training so employees can
learn about the technology’s capabilities, limitations and risks.
Some of the key skills employees should develop include data analysis, an understanding of machine learning,
basic programming, and project management. Acquiring the knowledge to develop AI use case strategies, as well as
interpret and present results derived from these tools, is equally important.
According to Wong, supportive communities for AI adoption are becoming available for senior managers and board
members who wish to access resources and content for upskilling. For example, the World Economic Forum launched
a series of upskilling platforms and materials through its AI Governance Alliance to ensure an equitable AI transition within the workforce.
Bangah highlighted three key challenges facing directors when it comes to adopting AI responsibly:
explainability, risk and accountability.
- Explainability refers to the ability of boards to justify the use of AI in decision-making to stakeholders. The seemingly opaque nature of many AI algorithms makes understanding the decision-making processes of AI complex.
- Risk refers to the potential for AI systems to make errors, exhibit biases, or be manipulated in favour of certain users. Boards need to be supported by the most recent understanding of AI’s limitations to quell concerns over its reliability and fairness.
- Accountability refers to the assignment of responsibility and liability for the outcomes of, and the actions advised by, AI systems. Proper accountability is difficult when neither the explainability nor the risks of AI are fully understood.
When it comes to the adoption of AI technology, directors ultimately have a fiduciary
duty to act in good faith and in the best interests of the company, as well as to exercise reasonable
care, skill and diligence in their roles. As an artificial entity, AI cannot fulfil these fiduciary duties and
so cannot be held liable in the same way human directors can, said Ganapathy.
In 2023, global consultancy Deloitte found that most boards did not have primary oversight of AI within the company; among those that did, the responsibility was most often entrusted to the audit committee. Forty-four per cent of
companies had yet to discuss AI-related topics.
As such, boards must establish robust AI governance frameworks that clearly articulate the company’s approach to
using and governing AI, emphasised Ganapathy. These frameworks should also demonstrate compliance with ethical
and regulatory guidelines on AI to reassure customers, regulators and partners that the board and the company
are responsible AI users.
To guard against risks, Bangah and Wong suggested using explainable AI (XAI) techniques coupled with regular
internal audits of AI applications and data privacy risks. XAI is a set of techniques that describes an AI model in terms of its prediction accuracy against training data, and that improves traceability by limiting decision pathways to a narrower set of rules and features.
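To make the traceability idea concrete, here is a minimal sketch in Python. It assumes the scikit-learn library and its bundled iris sample data purely for illustration; neither is named by Bangah or Wong, and real boardroom applications would involve far larger models and datasets.

```python
# Minimal sketch of one XAI technique: constraining a model to a shallow
# decision tree so every prediction can be traced to a short, readable rule.
# Uses scikit-learn's bundled iris dataset purely for illustration.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_iris()

# max_depth=3 deliberately narrows the decision pathways: fewer rules and
# features are used, trading some accuracy for traceability.
model = DecisionTreeClassifier(max_depth=3, random_state=0)
model.fit(data.data, data.target)

# Report prediction accuracy against the training data...
print(f"Training accuracy: {model.score(data.data, data.target):.2f}")

# ...and print the full set of decision rules, so a reviewer or auditor can
# inspect exactly how each outcome is reached.
print(export_text(model, feature_names=list(data.feature_names)))
```

The printed rule tree is the point: a director does not need to understand the training mathematics to see which features drove a given decision.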
For boards seeking to put such safeguards into practice, the UTS Human Technology Institute recommends the following steps:
- Identify individuals and establish positions best placed to make critical decisions about procurement, maintenance, and risk assessments of AI systems.
- Establish governance structures at board and management levels.
- Promote a culture that emphasises both opportunities and risks related to AI systems, in line with the organisation’s risk appetite.
- Communicate the organisation’s values that describe how choices are made towards AI adoption.
- Develop specific guidance and procedures for teams directly involved in buying, designing, developing, and using AI systems.
- Assess the organisation’s supporting infrastructure, including a stocktake of AI and data inventories, checking that robust data governance practices are in place, and testing cybersecurity policies.
- Take a human-centric approach by consciously making inclusivity, accessibility and diversity part of AI systems from the earliest stages of system development.
- Implement live or periodic monitoring and reporting systems, including automated performance assessments as well as internal and external audits (a minimal sketch of one such automated check appears after this list).
Source: UTS Human Technology Institute (HTI)
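On the final recommendation, the sketch below illustrates what a very simple automated performance assessment might look like. It is hypothetical: the class, tolerance threshold, and figures are invented for illustration and are not drawn from the HTI guidance or from any tool mentioned in this article.

```python
# Hypothetical sketch of an automated performance assessment for a deployed
# model: compare live accuracy against a sign-off baseline and flag drift
# for escalation through the board's monitoring and reporting systems.
from dataclasses import dataclass

@dataclass
class PerformanceCheck:
    baseline_accuracy: float       # accuracy recorded at deployment sign-off
    alert_tolerance: float = 0.05  # allowed drop before escalation

    def assess(self, predictions: list[int], actuals: list[int]) -> dict:
        correct = sum(p == a for p, a in zip(predictions, actuals))
        live_accuracy = correct / len(actuals)
        drift = self.baseline_accuracy - live_accuracy
        return {
            "live_accuracy": round(live_accuracy, 3),
            "drift": round(drift, 3),
            # Escalate when performance drops beyond the agreed tolerance.
            "escalate": drift > self.alert_tolerance,
        }

# Example periodic run: a model signed off at 92 per cent accuracy now
# scores 75 per cent on recent cases, triggering an escalation flag.
check = PerformanceCheck(baseline_accuracy=0.92)
report = check.assess(predictions=[1, 0, 1, 1, 0, 1, 0, 0],
                      actuals=[1, 0, 0, 1, 0, 1, 1, 0])
print(report)  # {'live_accuracy': 0.75, 'drift': 0.17, 'escalate': True}
```

In practice such checks would feed dashboards or audit reports; the value for directors is that escalation criteria are explicit and reviewable rather than ad hoc.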