AI at work: An enterprise-wide focus
Leveraging AI in the workplace remains a leading strategic priority, fueled by both top-down leadership initiatives and growing employee expectations. At the leadership level, AI has become central to corporate planning, with Gartner finding that 68% of companies intend to develop a strategy for integrating human employees and AI agents. Many executives are setting bolder ambitions: one in three companies plans to achieve fully automated operations in functions such as logistics, product design, and contract management [1]. As AI adoption moves beyond the experimental phase, organizations are shifting focus from usage metrics to performance outcomes. ROI measurement is becoming standard practice, with 72% of companies reporting formal ROI tracking and half of them going a step further by continuously monitoring, optimizing, and embedding these metrics into corporate strategy and planning [2].
But the surge in AI adoption at work isn’t driven by leadership alone; it’s also powered by employees themselves. Indeed, according to McKinsey & Company research, AI is gaining attention from almost everyone, even those most skeptical of the technology. The study shows that 71% of the most AI-skeptical employees are familiar with Gen AI tools, and half of them feel comfortable using these tools at work [3]. The research also reveals another surprising result: employees are using AI more than their employers estimate. While leaders estimate that only 4% of their employees use Gen AI for at least 30% of their daily work, self-reported data from employees puts the figure at 13%, more than triple that estimate.
AI is actively leveraged across industries and functions, driving notable productivity boosts in certain areas. Content generation stands out as the leading generative AI use case in marketing and sales, while fraud detection leads in operations [2], and real-life companies are already reaping the benefits. For instance, a leading international insurance group implemented FPT’s iSuite, a comprehensive AI solution package for insurance, which included deploying an AI assistant on its digital platform for insurance agents. The AI assistant supports agents with daily operations such as task management, lead engagement scripting, and follow-up recommendations for lead conversion. As a result, agent productivity increased by 20%, contributing to an 18% rise in revenue and a 30% reduction in operational costs. Furthermore, iSuite’s AI-driven fraud detection feature analyzes claims patterns and submitted documents to flag potentially fraudulent activity. This application has helped the insurer increase its fraud detection rate by 50% and reduce fraud-related losses by 18%.
“Workslop” and pitfalls to avoid
“AI-generated workslop”, a term coined by researchers writing in the Harvard Business Review, refers to content that “masquerades as good work but lacks the substance to meaningfully advance a given task” [4]. According to the research, as many as 40% of US full-time employees have encountered such low-quality AI-generated work, forcing them to spend additional time cleaning up, revising, and interpreting the content. This means workslop doesn’t just blunt productivity gains; it is actually “destroying productivity”, as the research puts it. The study estimates that for a company with 10,000 employees, the lost productivity could amount to $9 million annually, with employees spending an average of 1 hour and 56 minutes cleaning up each piece of workslop they receive [5]. And the consequences are not merely theoretical. Deloitte recently agreed to partially refund AUD 290,000 to the Australian government after errors were found in an AI-assisted report, including a fabricated quotation and references to non-existent research sources [6].
“Workslop” isn’t the only concern companies and employees should keep in mind when using AI in the workplace. Another critical issue is AI bias, which refers to outcomes generated by the technology that are systematically unfair to certain groups of people. Such bias can stem from multiple factors, including imbalanced training data, flawed algorithmic design, or the unconscious biases of human developers. A real-world example occurred in 2018, when Amazon discontinued a project that used AI to review job applications over fears of discrimination against female applicants. The model was trained on resumes submitted to Amazon over the previous 10 years, which came mostly from male applicants. As a result, the algorithm learned to prefer applications from men and gave lower ratings to resumes that included the word “women” [7].
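For organizations that want to catch this kind of skew before a model is trained, even a lightweight audit of historical data can help. The sketch below is a minimal, hypothetical Python example; the field names, sample figures, and the 0.8 “four-fifths” threshold are illustrative assumptions, not details of the Amazon system. It simply compares positive-outcome rates across groups in the training data and flags a large disparity.

```python
# Minimal, hypothetical audit of historical hiring data for group imbalance.
# Field names, sample counts, and the 0.8 threshold are illustrative assumptions.
from collections import Counter

def selection_rates(records, group_key="gender", label_key="hired"):
    """Share of positive labels per group in the historical training data."""
    totals, positives = Counter(), Counter()
    for r in records:
        totals[r[group_key]] += 1
        positives[r[group_key]] += int(r[label_key])
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact_ratio(rates):
    """Lowest selection rate divided by the highest; values under 0.8
    (the 'four-fifths rule' used in US hiring guidance) are a warning sign."""
    return min(rates.values()) / max(rates.values())

if __name__ == "__main__":
    # Toy data heavily skewed toward one group, echoing the resume example above.
    data = (
        [{"gender": "male", "hired": 1}] * 60
        + [{"gender": "male", "hired": 0}] * 340
        + [{"gender": "female", "hired": 1}] * 5
        + [{"gender": "female", "hired": 0}] * 95
    )
    rates = selection_rates(data)
    print("Selection rates:", rates)                   # male: 0.15, female: 0.05
    print("Ratio:", round(disparate_impact_ratio(rates), 2))  # 0.33, well below 0.8
```

A check like this does not prove or disprove bias on its own, but it surfaces the kind of imbalance that, left unexamined, a model will happily learn and reproduce.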
Productivity means more than speed and quantity
AI unquestionably improves productivity: it streamlines workflows, automates tasks, and generates content at unprecedented speed, though not with absolute accuracy. Most productivity metrics today focus on the speed and quantity of work that AI completes, but those alone are insufficient. Organizations must evaluate AI performance beyond usage rates by developing clear metrics for the quality and accuracy of AI-generated outputs. A well-established policy on AI usage is essential, with clear guidelines on which platforms are permitted, what data can be shared, and how AI-generated content should be reviewed. Without such governance, AI adoption risks becoming uncontrolled: with as many as 63% of surveyed software developers using unauthorized tools, security and ethical risks are increasingly apparent [8].
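What such output-quality metrics might look like in practice will vary by organization. The following is a minimal sketch, assuming a simple review workflow in which humans log whether each AI-generated draft was accepted, how many factual errors were caught, and how long rework took; the record fields, tool names, and sample values are illustrative assumptions rather than a standard.

```python
# Minimal sketch of tracking AI output quality per tool, not just usage.
# Record fields, tool names, and sample values are illustrative assumptions.
from dataclasses import dataclass
from statistics import mean

@dataclass
class OutputReview:
    tool: str            # which approved AI platform produced the draft
    accepted: bool       # did the human reviewer accept it without major rework?
    factual_errors: int  # errors caught during review
    minutes_to_fix: int  # human time spent correcting the output

def quality_scorecard(reviews):
    """Aggregate individual review records into quality metrics per tool."""
    by_tool = {}
    for r in reviews:
        by_tool.setdefault(r.tool, []).append(r)
    return {
        tool: {
            "acceptance_rate": mean(r.accepted for r in rs),
            "avg_errors": mean(r.factual_errors for r in rs),
            "avg_rework_minutes": mean(r.minutes_to_fix for r in rs),
        }
        for tool, rs in by_tool.items()
    }

if __name__ == "__main__":
    sample = [
        OutputReview("assistant_a", True, 0, 5),
        OutputReview("assistant_a", False, 3, 45),
        OutputReview("assistant_b", True, 1, 10),
    ]
    for tool, metrics in quality_scorecard(sample).items():
        print(tool, metrics)
```

Even a handful of numbers like these lets an organization compare approved tools on rework cost and error rates rather than on how often they are used.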
Training serves as another critical enabler. Yet despite widespread attention to AI, employee training remains limited, with only 29% of U.S. workers reporting that they receive adequate AI training from their employers [3]. Increasing the quantity of training is important, but improving its quality is even more vital. Effective AI training should go beyond prompt creation to help employees understand AI’s limitations, recognize potential risks, and learn how to mitigate errors and biases in practice.
Finally, companies should clearly define the role of AI in their operations. AI, however advanced and autonomous it becomes, is still a tool. The responsibility to use it accurately and ethically lies with humans. That means companies should establish frameworks for governing AI usage, defining when human judgment must intervene and how responsibility is distributed. Maintaining human oversight is essential to ensure AI-generated outputs align with corporate vision, ethical standards, and regulatory requirements.
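As one illustration of what “defining when human judgment must intervene” can look like in practice, the sketch below routes AI outputs to a human reviewer based on task risk and model confidence. The task categories and the 0.9 threshold are purely illustrative assumptions; real rules would come from a company’s own governance framework.

```python
# Minimal, hypothetical human-in-the-loop gate for AI-generated outputs.
# Risk categories and the confidence threshold are illustrative assumptions.
HIGH_RISK_TASKS = {"legal_contract", "financial_report", "customer_claim"}

def requires_human_review(task_type: str, model_confidence: float,
                          confidence_threshold: float = 0.9) -> bool:
    """Route an AI-generated output to a human reviewer when the task is
    high-risk or the model's self-reported confidence falls below the threshold."""
    return task_type in HIGH_RISK_TASKS or model_confidence < confidence_threshold

if __name__ == "__main__":
    print(requires_human_review("marketing_copy", 0.95))   # False: may be auto-approved
    print(requires_human_review("customer_claim", 0.97))   # True: always reviewed
    print(requires_human_review("marketing_copy", 0.62))   # True: low confidence
```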