Artificial Intelligence (AI) is rapidly expanding and revolutionizing industries around the world. Incorporating AI is becoming essential for organizations looking to stay competitive. By enabling more strategic, data-driven approaches, it can help organizations reach their target audiences with greater precision, efficiency, and personalization.
As AI evolves, communication professionals should consider its role in maintaining trust and transparency with their stakeholders. It’s essential to define how AI can enhance communication while preserving the human touch needed to build and maintain meaningful connections.
The Impact of AI in the Workplace
Before crafting an approach to responsible AI integration, it’s important to understand why AI is both a game-changer and a challenge for communication professionals. Research by McKinsey & Company suggests that 72% of businesses have adopted AI for at least one business function; with adoption that widespread, designing a plan that keeps your company secure is essential.
New tools, such as chatbots, generative AI (Gen AI) for content creation, and predictive analytics, are constantly emerging and transforming how we communicate. For example, when OpenAI’s ChatGPT launched in November 2022, it reached one million users within the first five days. Within two months, it had over 100 million registered users, sparking public curiosity and workplace discussions about the future. Looking further ahead, the AI market is expected to grow at an annual rate of 36.6% from 2024 to 2030, as reported by Grand View Research.
While AI has expanded beyond content creation, touching everything from coding to generating human voices, its greatest impact on communication lies in its ability to scale efforts, deliver more personalized content, and gather data more efficiently than ever before. It’s redefining what’s possible in communication and offering opportunities for professionals to be more innovative in the workplace.
🔎 Check out: 9 Steps to Creating More Engaging Internal Content
Yet with these advancements come significant risks and challenges. AI is raising important questions about transparency, data security, and potential bias. While AI tools can enhance communication, it’s imperative that they are used in ways that preserve the human approach that builds trust with stakeholders.
The Balance Between Innovation and Ethics
As communication professionals incorporate AI into their strategies, they should also be mindful of the ethical responsibilities that come with it. It’s important to keep leadership informed of the considerations and consequences of adopting AI capabilities into company practices. According to research by the IBM Institute for Business Value, 79% of surveyed executives emphasize the importance of ethics in their enterprise-wide AI approach, but fewer than 25% of them have operationalized common principles of AI ethics.
While AI can help automate tasks and improve efficiency, it cannot replace human connection, and that’s what communication is all about. Organizations should work with AI so that it supports, but does not replace, authentic human interactions.
📚 Read More: Is it OK to Use AI Avatars in Employee Communication?
Expert Insights
To help determine how organizations can do this, we asked seven communication professionals to share their insights on how to responsibly integrate AI into communication strategies, while ensuring ethical standards, transparency, and human oversight remain the priority.
Here’s what the experts shared with us:
Bonnie Caver, SCMP
Founder and CEO of Reputation Lighthouse
A significant yet often overlooked focus in early-stage Responsible AI transitions is an organization’s sustainability and competitiveness, which includes responsibilities to shareholders, stakeholders, and the communities where business is done. Responsible AI implementation is a much bigger conversation than tools and efficiency.
As organizations look to their future in an AI-enabled environment, those that prioritize design thinking around how they will thrive and differentiate through the responsible use of AI will be the ones to come out far ahead of their competitors. AI cannot be looked upon as just another technology implementation that may be led by a department or division of a company. AI requires a holistic approach. Organizations must assess readiness, strategically plan, create a process for tool selection and adoption, prepare and train internal AND external stakeholders, implement responsibly, measure, and continue to innovate. To bring stakeholders along this incredible transformation journey requires organizations that communicate, learn, relate, align, and listen.
Responsible AI implementation navigates reputational risks, challenges to ESG initiatives and goals, security, privacy, governance, protection of intellectual property, and brand attacks – just for starters. For these reasons, Responsible AI implementation needs strong change and communication leadership to achieve success and avoid costly missteps.
Though the task ahead may be beyond comprehension at times, the reality is that it is uncharted territory for everyone. We can call upon our experiences from previous digital transformations and collaborate with peers through industry member organizations (like the International Association of Business Communicators and the Global Alliance for Public Relations and Communication Management Professionals). As a starting point, in early 2024, the Global Alliance released the Six Global AI Guiding Principles for Responsible Communication, which are meant to provide ethical leadership for global communication professionals using artificial intelligence.
Adrian Cropley, OAM, FRSA, IABC Fellow, GCSCE, SCMP
Founding Director, Centre for Strategic Communication Excellence
Integrating AI into communication strategies is certainly a must in this new era, but as communication professionals, we must do it responsibly to keep things ethical and build trust. The key? A balance between intelligent tech and good old human oversight.
First, transparency is a must. People need to know when AI is used to assist in communication. Disclosure does no harm, even when audiences already assume AI has helped with idea generation or rewriting text. Being upfront about where and how AI is used helps ease concerns and build trust.
Next, we can’t forget about empathy and human connection. AI can automate tasks but should never replace personal, meaningful interactions. It’s essential to design AI systems that know when to step aside and let a real person take over, especially in sensitive situations. Be clear about where AI fits and for what tasks within your communication strategy.
Developing and following clear guidelines is critical to responsible use. AI systems and tools must respect privacy, follow data protection laws, and deal with issues like bias. Setting up a responsible AI guide—like the one from the Centre for Strategic Communication Excellence—can really help organizations stay on track and raise the value of the communication professional as an advisor.
If done right, AI can enhance strategic communication. Organizations can make the most of AI while maintaining trust and authenticity by staying transparent and ethical and keeping a human touch.
Professor Emeritus Anne Gregory PhD, Hon. Fell. CIPR, FRSA, FHEA
Emeritus Chair of Corporate Communication
Huddersfield Business School
University of Huddersfield
The key to this is questions, questions… in three key areas. The first is about inputs: what is going into comms AI systems? Look at the quality of the data being fed into LLMs – is it representative? Is it complete? Does it draw from a diverse range of sources? How can we ensure it is not biased? Does it infringe people’s privacy or copyright? Then there are questions about AI algorithms. How do they manipulate data? Are they transparent and explainable, so we know what they are doing? How were they programmed? (Programmers have biases too.)

Then there are questions about outputs. Are the results legitimate? We know AI hallucinates. What decisions do the results suggest? What impacts, positive and negative, will these decisions have on people? How do we mitigate negative or unintended consequences?

Finally, there are questions about the ecosystem we are creating. What is the impact of our communication on society? Are we just noise, or are we making society more informed and wholesome? What about the impact of AI-assisted communication on the culture of our organisations? Does it build trust? What kinds of relationships are being built?
All these questions ensure that what we do is ethical and puts human well-being and control front and centre.
🧐 Consider reading: Best AI Prompts for Internal Communications [40+ ideas]
Mike Klein, MBA, SCMP, CIIC
Principal, Changing the Terms
A lot depends on whether and how organizations choose to integrate AI (not just Generative AI but all AI) into their overall strategies – whether it’s an enabler, a differentiator, a competitive threat, or something that devalues the organization’s offerings.
That overall view of AI can then be mirrored in the communication strategy and toolbox. If it’s an enabler, use it as an enabler and model its use in the organization. If it’s a differentiator, then be ambitious and aggressive in seeking opportunities and tools, become highly proficient, and demonstrate your confidence. If it’s a competitive threat, double down on the non-AI origins of your activities and the way you initiate and interact.
Obviously, you need to be careful about Generative AI’s inbuilt weaknesses around data security and content accuracy, but aligning your comms AI strategy with your business AI strategy gives you an edge from a narrative perspective, as well as a tool for your own work.
Oliver Stelling
Strategic Communications Advisor & Author
The rise of AI shapes people’s future in this hyperconnected world. It has also become a key theme at the World Economic Forum and United Nations General Assembly meetings. Yet many PR professionals still don’t trust using AI. That makes it necessary for HR teams to identify communicators who will investigate AI and how it can augment the development and implementation of communication strategies, while addressing and overcoming its challenges and risks.
Products from software developers such as Blackbird.ai, Propel, Signal AI, and Vuelio are specifically designed to collect and interpret perception and sentiment data in public relations, public affairs, and stakeholder communications. All of that boosts information gathering and content writing.
But saving money by replacing humans is not future-ready, because gaining real influence depends on people-to-people connectivity. To optimize perceptions and outcomes in corporate and government communications, communicators must continue to concentrate on genuine social interaction with local and multicultural audiences alike.
Convincing board members to take this approach requires communicators to pursue greater executive authority and focus on wider research. Together, these support the responsible and trustworthy integration of AI tools and techniques into communication strategies.
Mary Hills, MA, FCSCE, ABC, IABC Fellow, Six Sigma
Corporate & Graduate Academic Instructional Design and Development
A strategic, ecosystem approach to working with both generative and traditional AI provides organizations with guardrails to work with AI confidently. The Cisco AI Readiness Index and Assessment, Gartner’s Peer Insights Generative AI Apps Reviews and Ratings, and “development sandboxes” all provide excellent guardrails.
Additional Sources
- Circle of Fellows 107 – Raising AI: Google’s Olympics ad went viral for all the wrong reasons, fueling growing concerns about organizations’ responsible use of generative AI in their communication.
- Applications of Generative AI and Traditional AI
Tiffany Markman
Keynote speaker, award-winning writer, founder of TMWT
As a speaker and writer, I’m often asked how I maintain originality when using AI. But this isn’t a creative challenge; it’s an ethical one. In my world, responsible AI use requires a blend of transparency, training and what I call ‘confident tweaking’.
Transparency
To prevent misunderstandings and build trust, my clients must know when I’ve used an AI tool or Large Language Model, so I’ve developed my own AI policy.
I disclose when and where an LLM is likely to be deployed, which specific LLM/s I use, and where the LLM stops and the human starts (that is, when I step in and take over and why I always do).
Training
Everyone who works for, with and around me is trained to use AI responsibly, because it doesn’t come naturally. Our training clarifies:
- Data exposure – What confidential information, proprietary ideas, trade secrets or sensitive data should never be entered into an LLM?
- Privacy – How can we safeguard customer details and business strategies? How can we use anonymized or dummy data?
- Ownership – Who owns which pieces of generated content? Me (the service provider), the client or the LLM provider?
Confident tweaking
Human communicators must be able to deftly massage LLM-generated output to maintain the balance between automation, authenticity and nuance.
Part of becoming AI-literate is learning to pre-empt and remedy limitations and biases. For example, LLM copy might nail the grammar but fall flat on cultural sensitivity – and require confident human tweaking.
🌟 Read more expert insights: Trust but Verify: Navigating the Age of AI
Mitigating Risk
It’s clear that AI is transforming the communication profession, and the expert panelists share unique perspectives on how to use it responsibly. Let’s explore five approaches your organization can take to mitigate risk and maintain trust with your stakeholders.
1. Be Transparent
Why does it matter? Transparency is essential for maintaining trust, and stakeholders need to know when AI is being used. When your organization is up front about the way it’s using AI, it reduces concern and builds credibility.
What can you do? Communicate openly and clearly. For example, if you are using AI-generated content, make sure the recipient is made aware of that; a lightweight disclosure step like the sketch below can make this routine. It’s also helpful to develop a policy around how AI is being used and to share it with stakeholders to demonstrate ethical practices.
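As a minimal sketch of what that disclosure step might look like in a publishing workflow (the function name and notice wording are illustrative assumptions, not a standard):

```python
# Minimal sketch: appending an AI-use disclosure to outgoing content.
# The function name and the notice wording are illustrative only.

def add_ai_disclosure(content: str, tools_used: list[str]) -> str:
    """Append a plain-language notice listing the AI tools involved."""
    if not tools_used:
        return content  # fully human-written content needs no notice
    notice = (
        "Note: portions of this message were drafted with AI assistance "
        f"({', '.join(tools_used)}) and reviewed by a human editor."
    )
    return f"{content}\n\n{notice}"

draft = "Here is our Q3 update for all employees..."
print(add_ai_disclosure(draft, ["a generative AI writing assistant"]))
```

Baking the notice into the workflow, rather than leaving it to each author’s memory, keeps disclosure consistent across every piece of content.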
2. Maintain Human Oversight
Why does it matter? AI is transforming the way we work and enhancing our processes, but it cannot replace human judgment or empathy. Trust is built on human interaction and meaningful relationships, and AI cannot substitute for either.
What can you do? To keep it simple, don’t replace human roles with AI systems. Use AI for routine tasks and idea generation, and set clear boundaries for where it is useful. Matters of ethics and relationships require human consideration.
3. Keep it Ethical
Why does it matter? There is a lot of risk that comes with AI use, so it’s important to build a culture where AI is used to enhance human capability without crossing ethical lines.
What can you do? Develop clear guidelines and policies and provide employees with the necessary training on how to use AI. They should understand how it’s acceptable to use those tools in the workplace, and how usage aligns with the organization’s ethical standards.
4. Prioritize Security
Why does it matter? This is one of the most common concerns with using AI and incorporating it into company culture. AI tools require data (lots of it) to function, which creates risks for organizations, especially concerning data breaches and misuse of confidential information.
What can you do? Ensure compliance with local laws and data protection regulations like GDPR or CCPA, and establish clear guidelines for data use in AI tools and systems. Be cautious about what data you’re feeding to AI and keep it as anonymous as possible; a simple redaction pass, like the sketch below, is one place to start. Regular audits of AI tools and processes can help mitigate risks and identify potential vulnerabilities.
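As a minimal sketch of that anonymization step (the regex patterns and function name are illustrative assumptions and nowhere near exhaustive; real PII detection calls for dedicated tooling):

```python
import re

# Minimal sketch: redacting obvious personal identifiers before text is
# sent to an external AI tool. These patterns are illustrative only and
# will miss many kinds of PII (names, addresses, customer IDs, ...).

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE_RE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

def redact_pii(text: str) -> str:
    """Replace email addresses and phone-like numbers with placeholders."""
    text = EMAIL_RE.sub("[EMAIL]", text)
    text = PHONE_RE.sub("[PHONE]", text)
    return text

prompt = "Summarize this complaint from jane.doe@example.com, tel. +1 555 123 4567."
print(redact_pii(prompt))
# -> Summarize this complaint from [EMAIL], tel. [PHONE].
```

A pass like this is a floor, not a ceiling: it reduces accidental exposure in everyday prompts, while the audits mentioned above catch what pattern matching misses.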
5. Beware of Bias in AI
Why does it matter? One of the biggest concerns with AI is that it can inadvertently perpetuate bias, leading to unfavorable outcomes that conflict with company values. This is particularly risky with AI-created content and can have real consequences for your organization.
What can you do? Again, this is where human oversight comes into play. Don’t trust AI to get it right. Review all content and check for signs of bias. Ensure that the teams overseeing AI policies are diverse, so that different perspectives are considered.
These five strategies help balance the opportunities AI creates against the concerns and risks that come with it. By implementing them and using AI as a support, you can maintain a human approach to your communication strategies, better positioning your organization to build trust and credibility.
Conclusion
Enhanced efficiency, sharper insights, and greater personalization are all benefits of integrating these tools into your communication strategy, but it’s important to be mindful of the risks. To ensure responsible use, organizations should be transparent, maintain human oversight, uphold ethical standards, safeguard security, and be vigilant about bias and credibility.
AI is a great tool to support communication, but it cannot replace human connection or relationships. Aligning AI with ethical frameworks and creating meaningful, human-centered interactions allows organizations to harness these advancements while preserving trust and authenticity in their communication. A balanced approach enables companies to leverage the benefits of AI while mitigating risks, ensuring sustainable and responsible use.
🤝 Continue Reading: Expert Panel: Building a Culture of Belonging through Diversity, Equity, and Inclusion in the Workplace