Companies Should Not Solely Rely on Younger Employees for AI Training: MIT, Harvard, Wharton Study
When training older employees on AI, conventional wisdom says that the digital-native younger generation is well suited to the task. However, a recent academic study casts doubt on this assumption. Learn more about the study.
- When training older employees on AI, the common assumption is that the digital-native younger generation can lead the way.
- However, a recent study by researchers from MIT, Harvard, Wharton, and a few other institutions casts doubt on this assumption.
As more companies incorporate artificial intelligence (AI) tools and systems, conventional wisdom says that the tech-savvy, digital-native younger generation will take the lead in training older employees to use these powerful tools effectively. However, a new academic study casts doubt on this assumption when it comes to generative AI.
The research, conducted by academics from MIT, Harvard Business School, Wharton, and a few other institutions in collaboration with Boston Consulting Group, found that junior employees working with generative AI systems made risk-mitigation recommendations that ran counter to expert advice. The findings indicate that organizations cannot rely on this type of mentoring alone to ensure AI's responsible usage.
The authors wrote, “Our interviews revealed two findings that run counter to the existing literature. First, the tactics that the juniors recommended to mitigate their seniors’ concerns ran counter to those recommended by experts in GenAI technology at the time, and so revealed that the junior professionals might not be the best source of expertise in the effective use of this emerging technology for more senior members.”
Junior Employees Struggle With AI Risk Mitigation
Last year, the researchers surveyed 78 junior consultants who had recently been given access to GPT-4 to solve a business problem. The consultants, who lacked AI expertise, shared the tactics they would recommend to managers to alleviate concerns and risks. However, the study found that these tactics were grounded in a poor understanding of AI's capabilities, focused on changing human behavior rather than AI system design, and centered on project-level interventions instead of company- or industry-wide solutions.
Noting AI’s superhuman capabilities, exponential rate of change, and reliance on vast amounts of data, the researchers wrote, “To explain how and when junior professionals may fail to be a source of expertise in the use of an emerging technology for more senior members, we must take into account not only status threat, but also risks to valued outcomes.”
The study comes at a time when organizations are grappling with the opportunities and challenges generative AI systems present. By highlighting the limitations of relying solely on younger employees to guide AI implementation from the bottom up, the study underscores the need for expert input, top-down AI governance, and upskilling across the organization.