The ESG Implications of AI Technology
by Amelia Zimmerman, FiscalNote
Generative AI raises a number of environmental, social, and governance (ESG) concerns. Here are the key opportunities and pitfalls of AI in the workplace.
In 2023, the release of ChatGPT created new concerns and resurfaced existing ones, even as excitement over the potential of artificial intelligence (AI) grew. Philosophers, ethicists, and tech experts weighed in, and the world seemed divided between those who saw AI as an opportunity and those who saw it as a threat.
But as AI-based tools become increasingly important fixtures in the corporate world, it is not just philosophers, programmers, and lawmakers who need to carefully consider what an AI-driven future means. Corporations large and small should be particularly concerned about the environmental, social, and governance-related (ESG) risks of this new and powerful technology.
Here, we explore the ethical concerns associated with AI use in the corporate setting and best practices for companies looking to leverage the technology without getting into hot water.
The Ethical Concerns of Generative AI
Social Implications
Many companies are turning to AI-based tools to simplify hiring processes; some are exploring the technology’s ability to make firing decisions, too. Generative AI is set to disrupt more than 12 million jobs between now and 2030 and is already changing how humans are employed, according to Ed Watal, CEO of Intellibus, a software company.
“The use of generative AI tools to power the recruiting processes is increasingly becoming the norm,” he explains. “Companies are using AI algorithms for all parts of the hiring process, including resume parsing, candidate screening, and even final decision-making recommendations.” Since many algorithms lack transparency, “there is no easy way to determine the presence or extent of racial, social, gender, or economic bias.”
AI systems can inherit biases from historical data, which may perpetuate existing inequalities and discrimination in practices such as hiring and promotion. While modern ESG strategies may focus heavily on improving outcomes for groups defined by race, gender, age, or other characteristics, leaning too heavily on AI for HR decision-making may undermine those very objectives.
On the flip side, properly trained AI models could have an enormously positive effect on existing discriminatory practices and systems. They could reduce bias in hiring by focusing purely on skills and qualifications, identifying talent from a wider range of sources, and using detailed analytics to improve workplace design and create more inclusive environments.
Ashu Dubey, co-founder and CEO of Gleen, a generative AI company, is optimistic about the potential. “Generative AI could be used to help train employees and answer questions about interviewing and hiring practices, improving fair hiring and compliance with other labor laws,” he explains. “In addition, a generative AI chatbot could be deployed by enterprises to help answer questions and better communicate corporate benefits.”
Governance Challenges
Governments are paying increasing attention to AI technology, resulting in a new wave of policies and regulations that will create new compliance risks for companies. Already, corporations must comply with data protection and privacy regulations such as the General Data Protection Regulation (GDPR) in Europe and the California Consumer Privacy Act (CCPA) in the United States.
The data collected for AI applications or shared with third-party AI service providers may be misused or mishandled, potentially leading to privacy violations and legal repercussions. Additionally, data breaches can lead to unauthorized access, disclosure of sensitive information, and reputational damage.
From a governance perspective, we’re seeing a range of corporate responses to AI. “Organizations like Apple, JP Morgan, Verizon, and Amazon have all banned the use of tools like ChatGPT at work,” says Watal. “Other organizations have used limits on the quantum of data that can be uploaded.”
Dubey says that companies are particularly concerned about employees entering confidential data into tools such as ChatGPT. “A number of corporations have banned employees from accessing ChatGPT,” he explains.
At the same time, OpenAI, the company behind ChatGPT, “has introduced ChatGPT Enterprise, which allows corporations to offer a ‘private’ version of ChatGPT to their employees [preventing confidential information from being leaked to the public],” says Dubey. “And a number of corporations have embarked on building their own generative AI chatbot, including McKinsey and Walmart.”
Environmental Implications
One often overlooked consequence of a world that runs on AI is the significant computing power required. Most data centers today are powered by fossil fuels and consume vast amounts of water for cooling. Watal is concerned about the environmental impact of generative AI. “Per BCG’s analysis,” he explains, “total data center energy consumption may grow closer to 7.5 percent [of global totals] by 2030. Generative AI is supposed to contribute to about 1 percent of that.”
Best Practices for AI in 2024 and Beyond
A number of best-practice guidelines can help companies navigate the early days of generative AI, maximizing the opportunities and minimizing the risks of this new technology.
Bias and Fairness
“An AI is only as good as its knowledge and instructions,” says Dubey. Ensure that the knowledge and instructions your AI relies on are ethical and comply with all applicable regulations. Make sure AI algorithms are trained on diverse, representative datasets and regularly audited for bias, and implement mitigation strategies such as debiasing and fairness testing.
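To make "regularly audited for bias" more concrete, here is a minimal sketch of what such a check could look like in practice: it compares selection rates across groups in a log of AI screening recommendations. The column names, sample data, and 0.8 review threshold are illustrative assumptions, not a prescribed standard or any vendor's actual method.

```python
# Illustrative sketch of a periodic fairness audit for an AI screening tool.
# Assumes a log of model recommendations with a protected-attribute column;
# the column names, sample data, and 0.8 threshold are hypothetical examples.
import pandas as pd

def disparate_impact_report(df: pd.DataFrame,
                            group_col: str = "gender",
                            outcome_col: str = "recommended",
                            threshold: float = 0.8) -> pd.DataFrame:
    """Compare each group's selection rate against the highest-rate group."""
    rates = df.groupby(group_col)[outcome_col].mean().rename("selection_rate")
    report = rates.to_frame()
    report["impact_ratio"] = report["selection_rate"] / report["selection_rate"].max()
    report["flag_for_review"] = report["impact_ratio"] < threshold
    return report

if __name__ == "__main__":
    # Hypothetical audit log of screening recommendations
    decisions = pd.DataFrame({
        "gender": ["F", "F", "F", "M", "M", "M", "M", "F"],
        "recommended": [1, 0, 0, 1, 1, 1, 0, 1],
    })
    print(disparate_impact_report(decisions))
```

A flag from a check like this would simply prompt human review of the model and its training data; it is a screening signal, not a conclusion of bias on its own.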
Data Privacy and Security
Companies must adhere to best-practice data minimization and consent principles, and ensure that individuals are aware of how their data is used in AI systems. Watal recommends “defining a clear central repository and versioning for data and models, ensuring there’s a way to correlate what data was used to train which model version.”
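As one illustration of Watal's point about correlating training data with model versions, the hypothetical sketch below records a content hash of the training dataset alongside each registered model version in a simple central registry. The registry format, file paths, and version labels are assumptions for the example, not a specific product or standard.

```python
# Minimal sketch of linking training data to model versions: store a content
# hash of the dataset next to each model version in a central registry file.
# File names and the JSON registry format are illustrative assumptions.
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

def file_sha256(path: Path) -> str:
    """Content hash that uniquely identifies the exact training data used."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def register_model(registry: Path, model_version: str, training_data: Path) -> None:
    """Append an entry correlating a model version with its training dataset."""
    entry = {
        "model_version": model_version,
        "training_data": str(training_data),
        "data_sha256": file_sha256(training_data),
        "registered_at": datetime.now(timezone.utc).isoformat(),
    }
    records = json.loads(registry.read_text()) if registry.exists() else []
    records.append(entry)
    registry.write_text(json.dumps(records, indent=2))

# Example (hypothetical paths and version label):
# register_model(Path("model_registry.json"), "candidate-screener-v1.2",
#                Path("hiring_data_2024q1.csv"))
```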
Companies need clear policies on data retention and disposal to avoid holding onto unnecessary data for extended periods, which could increase the risk of breaches. Encryption measures protect data from unauthorized access and are essential for security. Audit data practices regularly to ensure that AI systems adhere to regulations and company policies. And finally, stay up to date with the latest cybersecurity best practices to defend against emerging threats.
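A retention policy only works if it is enforced. As a simple, hypothetical example of the kind of scheduled check that could support disposal reviews, the sketch below flags stored files older than an assumed retention window; the 365-day window and directory path are placeholders, not recommended settings.

```python
# Illustrative retention check: flag stored files older than a retention window
# so they can be reviewed and disposed of per policy. The window and directory
# are hypothetical placeholders for the example.
from datetime import datetime, timedelta, timezone
from pathlib import Path

RETENTION_DAYS = 365  # assumed policy window for this sketch

def files_past_retention(data_dir: Path, retention_days: int = RETENTION_DAYS) -> list[Path]:
    """Return files whose last modification time exceeds the retention window."""
    cutoff = datetime.now(timezone.utc) - timedelta(days=retention_days)
    stale = []
    for path in data_dir.rglob("*"):
        if path.is_file():
            modified = datetime.fromtimestamp(path.stat().st_mtime, tz=timezone.utc)
            if modified < cutoff:
                stale.append(path)
    return stale

# Example: for p in files_past_retention(Path("stored_candidate_data")): print(p)
```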
Human Oversight
While AI can help employees make decisions, complete tasks faster, and improve completeness and accuracy, it should not replace human judgment entirely. Ensure your policies enforce comprehensive human oversight of AI systems and provide employees with a mechanism for appealing AI-generated decisions.
Better Policies Are Key to Managing AI Risk and Opportunity
Reactions to AI often fall at one of two extremes: embracing it without limitation, or shutting it down entirely. Companies should strike a balance between these two responses, embracing the opportunity while understanding the risks, particularly those around environmental, social, and governance concerns, and putting initial policies in place to minimize them.
It looks as though AI technology, in some form, is here to stay — but what you do now will determine how you fare down the road.
FiscalNote ESG can help you achieve your organization’s ESG goals through best-in-class intelligence and expertise. With global ESG advisory, strategy, research, analysis, and policy monitoring, you can stay on top of ESG-related politics, policy, and industry activity. Learn more today about how FiscalNote’s suite of ESG solutions can help your team.