Grace Yee, Senior Director of Ethical Innovation (AI Ethics and Accessibility) at Adobe – Interview Series

Grace Yee is the Senior Director of Ethical Innovation (AI Ethics and Accessibility) at Adobe, driving global, organization-wide work around ethics and developing processes, tools, trainings, and other resources to help ensure that Adobe’s industry-leading AI innovations continually evolve in line with Adobe’s core values and ethical principles. Grace advances Adobe’s commitment to building and using technology responsibly, centering ethics and inclusivity in all of the company’s work developing AI. As part of this work, Grace oversees Adobe’s AI Ethics Committee and Review Board, which makes recommendations to help guide Adobe’s development teams and reviews new AI features and products to ensure they live up to Adobe’s principles of accountability, responsibility, and transparency. These principles help ensure Adobe brings its AI-powered features to market while mitigating harmful and biased outcomes. Grace also works with the policy team to drive advocacy, helping to shape public policy, laws, and regulations around AI for the benefit of society.

As part of Adobe’s commitment to accessibility, Grace helps ensure that Adobe’s products are inclusive of and accessible to all users, so that anyone can create, interact and engage with digital experiences. Under her leadership, Adobe works with government groups, trade associations and user communities to promote and advance accessibility policies and standards, driving impactful industry solutions.

Can you tell us about Adobe’s journey over the past five years in shaping AI Ethics? What key milestones have defined this evolution, especially in the face of rapid advancements like generative AI?

Five years ago, we formalized our AI Ethics process by establishing our AI Ethics principles of accountability, responsibility, and transparency, which serve as the foundation for our AI Ethics governance process. We assembled a diverse, cross-functional team of Adobe employees from around the world to develop actionable principles that can stand the test of time.

From there, we developed a robust review process to identify and mitigate potential risks and biases early in the AI development cycle. This multi-part assessment has helped us identify and address features and products that could perpetuate harmful bias and stereotypes.

As generative AI emerged, we adapted our AI Ethics assessment to address new ethical challenges. This iterative process has allowed us to stay ahead of potential issues, ensuring our AI technologies are developed and deployed responsibly. Our commitment to continuous learning and collaboration with various teams across the company has been crucial in maintaining the relevance and effectiveness of our AI Ethics program, ultimately enhancing the experience we deliver to our customers and promoting inclusivity.

How do Adobe’s AI Ethics principles—accountability, responsibility, and transparency—translate into daily operations? Can you share any examples of how these principles have guided Adobe’s AI projects?

We adhere to Adobe’s AI Ethics commitments in our AI-powered features by implementing robust engineering practices that ensure responsible innovation, while continuously gathering feedback from our employees and customers to enable necessary adjustments.

New AI features undergo a thorough ethics assessment to identify and mitigate potential biases and risks. When we introduced Adobe Firefly, our family of generative AI models, it underwent evaluation to mitigate against generating content that could perpetuate harmful stereotypes. This evaluation is an iterative process that evolves based on close collaboration with product teams, incorporating feedback and learnings to stay relevant and effective. We also conduct risk discovery exercises with product teams to understand potential impacts and design appropriate testing and feedback mechanisms.

How does Adobe address concerns related to bias in AI, especially in tools used by a global, diverse user base? Could you give an example of how bias was identified and mitigated in a specific AI feature?

We are continuously evolving our AI Ethics assessment and review processes in close collaboration with our product and engineering teams. The AI Ethics assessment we had a few years ago is different from the one we have now, and I anticipate additional shifts in the future. This iterative approach allows us to incorporate new learnings and address emerging ethical concerns as technologies like Firefly evolve.

For example, when we added multilingual support to Firefly, my team noticed that it wasn’t delivering the intended output and some words were being blocked unintentionally. To mitigate this, we worked closely with our internationalization team and native speakers to expand our models and cover country-specific terms and connotations.

Our commitment to evolving our assessment approach as technology advances is what helps Adobe balance innovation with ethical responsibility. By fostering an inclusive and responsive process, we ensure our AI technologies meet the highest standards of transparency and integrity, empowering creators to use our tools with confidence.

With your involvement in shaping public policy, how does Adobe navigate the intersection between rapidly changing AI regulations and innovation? What role does Adobe play in shaping these regulations?

We actively engage with policymakers and industry groups to help shape policy that balances innovation with ethical considerations. Our discussions with policymakers focus on our approach to AI and the importance of developing technology to enhance human experiences. Regulators seek practical solutions to address current challenges and by presenting frameworks like our AI Ethics principles—developed collaboratively and applied consistently in our AI-powered features—we foster more productive discussions. It is crucial to bring concrete examples to the table that demonstrate how our principles work in action and to show real-world impact, as opposed to talking through abstract concepts.

What ethical considerations does Adobe prioritize when sourcing training data, and how does it ensure that the datasets used are both ethical and sufficiently robust for the AI’s needs?

At Adobe, we prioritize several key ethical considerations when sourcing training data for our AI models. As part of our effort to design Firefly to be commercially safe, we trained it on a dataset of licensed content, such as Adobe Stock, and public domain content where copyright has expired. We also focused on the diversity of the datasets to avoid reinforcing harmful biases and stereotypes in our model’s outputs. To achieve this, we collaborate with diverse teams and experts to review and curate the data. By adhering to these practices, we strive to create AI technologies that are not only powerful and effective but also ethical and inclusive for all users.

In your opinion, how important is transparency in communicating to users how Adobe’s AI systems like Firefly are trained and what kind of data is used?

Transparency is crucial when it comes to communicating to users how Adobe’s generative AI features like Firefly are trained, including the types of data used. It builds trust and confidence in our technologies by ensuring users understand the processes behind our generative AI development. By being open about our data sources, training methodologies, and the ethical safeguards we have in place, we empower users to make informed decisions about how they interact with our products. This transparency not only aligns with our core AI Ethics principles but also fosters a collaborative relationship with our users.

As AI continues to scale, especially generative AI, what do you think will be the most significant ethical challenges that companies like Adobe will face in the near future?

I believe the most significant ethical challenges for companies like Adobe are mitigating harmful biases, ensuring inclusivity, and maintaining user trust. The potential for AI to inadvertently perpetuate stereotypes or generate harmful and misleading content is a concern that requires ongoing vigilance and robust safeguards. For example, with recent advances in generative AI, it’s easier than ever for “bad actors” to create deceptive content, spread misinformation, and manipulate public opinion, undermining trust and transparency.

To address this, Adobe founded the Content Authenticity Initiative (CAI) in 2019 to build a more trustworthy and transparent digital ecosystem for consumers. Through the CAI, we implement our solution for building trust online, called Content Credentials. Content Credentials include “ingredients,” or important information such as the creator’s name, the date an image was created, what tools were used to create it, and any edits that were made along the way. This empowers users to create a digital chain of trust and authenticity.
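To make the “ingredients” idea concrete, here is a minimal illustrative sketch in Python of the kind of provenance record described above. The field names and `build_credentials` helper are hypothetical, chosen for readability; they do not reflect the actual C2PA/Content Credentials manifest schema used by the CAI.

```python
# Illustrative only: field names and structure are hypothetical and are NOT
# the real C2PA/Content Credentials manifest format.

def build_credentials(creator, created_on, tool, edits):
    """Bundle provenance 'ingredients' (creator, date, tool, edit history)
    into a single record, mirroring the information Content Credentials
    are described as carrying."""
    return {
        "creator": creator,
        "created": created_on,
        "tool": tool,
        "edits": list(edits),  # ordered history of modifications
    }

record = build_credentials(
    creator="Jane Doe",
    created_on="2024-03-01",
    tool="Adobe Photoshop",
    edits=["crop", "color-correct"],
)
print(record)
```

The key design point is that the record travels with the asset and accumulates an ordered edit history, so a viewer can inspect who made the content, when, with what tool, and how it was changed along the way.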

As generative AI continues to scale, it will be even more important to promote widespread adoption of Content Credentials to restore trust in digital content.

What advice would you give to other organizations that are just starting to think about ethical frameworks for AI development?

My advice would be to begin by establishing clear, simple, and practical principles that can guide your efforts. Often, I see companies or organizations focused on what looks good in theory, but their principles aren’t practical. The reason our principles have stood the test of time is that we designed them to be actionable. When we assess our AI-powered features, our product and engineering teams know what we are looking for and what standards we expect of them.

I’d also recommend organizations come into this process knowing it is going to be iterative. I might not know what Adobe is going to invent in five or ten years, but I do know that we will evolve our assessment to meet those innovations and the feedback we receive.

Thank you for the great interview; readers who wish to learn more should visit Adobe.
