The U.S. Department of Labor has put forth best practices on artificial intelligence in the workplace.
The principles focus primarily on worker upskilling and protection, as well as cybersecurity and safety, and come at a time when many companies have already begun implementing AI systems or partnering with third-party vendors to power a slew of different use cases.
The guidelines, which come as a result of President Biden’s AI-related executive order, published in October 2023, are not legally binding; rather, they are meant to provide a helpful framework to industries and companies.
Unlike some other AI-centric legislation and guidelines, the DOL’s principles include actionable items for companies working in the fashion, apparel, supply chain and logistics industries. In many cases, other best practices and legislative proposals have more directly targeted highly sensitive industries like technology, healthcare, insurance, defense and financial services.
Nonetheless, other industries that also covet highly sensitive—and highly valuable—consumer data that can be fed into AI models have pushed forward with using the technology, readying themselves for the applicable portions of laws like the EU AI Act and abiding by the provisions set out for them by the Biden administration’s EO.
Because the DOL’s principles are more industry-agnostic than much of the highly specific protective legislation aimed at developers of AI models and the industries where those models have found their primary use cases, fashion, apparel and retail companies may find a stronger sense of guidance in the suggestions put forth.
Though the recommendations paint a broad-strokes picture of technology’s impact on the workforce, developers are still directly referenced in one of the DOL’s principles—which stipulates that those developing AI models should do so in a way that “protects workers’ civil rights [and] mitigate[s] risks to workers’ safety.” All systems that could impact workers, it goes on to note, should be required to have consistent human oversight, as well as independent auditing to verify that their effects on the workforce align with what the technology was meant to do.
Beyond those developer-specific provisions, the guidelines are written to apply widely.
The DOL calls “centering worker empowerment” its “North Star” when it comes to the new best practices. It suggests that, “Workers and their representatives, especially those from underserved communities, should be informed of and have genuine input in the design, development, testing, training, use and oversight of AI systems for use in the workplace.”
For many companies, worker empowerment with technology has long been on the agenda; company executives continue to insist that the technology they implement allows employees to focus on tasks that require more critical thinking and strategic planning by automating low-value, repetitive tasks. That attitude ties in with another of the DOL’s principles, which recommends that AI should be used only to enable workers, which, in turn, could improve their job quality.
In that same vein, the DOL has recommended that employers should be required to upskill workers if their job changes because of AI, which could include working to “provide workers with appropriate training opportunities to learn how to use AI systems,” “prioritize retraining and reallocating workers displaced by AI to other jobs within the organizations whenever feasible” and more. If employers adopt this best practice, it could begin to assuage some workers’ concerns over job loss caused by automation, AI and other emerging technologies.
“We should think of AI as a potentially powerful technology for worker well-being, and we should harness our collective human talents to design and use AI with workers as its beneficiaries, not as obstacles to innovation. AI’s promise of a better world cannot be fulfilled without making it a better world for workers,” Julie Su, acting secretary of labor, wrote in a statement.
While many of the DOL’s recommendations center on helping workers adjust, the agency also places an urgent onus on protecting workers’ rights, data and privacy.
The principles advocate for stronger transparency around how AI will be used in the workplace—both for current employees and for job seekers. The department specifically calls for advance notice of “worker-impacting AI,” which it defines as “AI that has the potential to significantly impact workers.” Applicable use cases may include using AI for employee monitoring, leveraging AI for hiring processes and more.
Along with letting employees know that they may be monitored, the DOL recommends that employers communicate in an accessible way the type of data they will be collecting via AI systems, or allowing an AI system to ingest, about their employees—as well as where the data will be stored and how it will be used.
The provisions also note that employers have a strong responsibility to protect worker data by putting solid security measures in place and not sharing the data outside the immediate organization without consent. Perhaps most importantly, it calls on employers not to collect data for data’s sake. The best practices state, “Employers should avoid collection, retention and other handling of worker data that is not necessary for a legitimate and defined business purpose.”
Keeping data private and only collecting what’s absolutely necessary could help employers avoid legal issues; the DOL calls on companies to ensure that their use of AI doesn’t interfere with employees’ legal rights to organize or violate anti-discrimination and anti-retaliation legislation.
Ensuring that technology doesn’t veer into dangerous territory often falls to compliance and governance teams. The DOL makes clear that all organizations using AI should put into place clear governance systems that place checks and balances on AI systems, so that they always have an appropriate level of human oversight.
Su said companies’ willingness to follow these principles—and to keep a mindful eye on AI’s broader impact—could determine the path forward for AI and other emerging technologies.
“The Department of Labor will remain vigilant in protecting workers from the potential harms of AI, while at the same time, recognizing that this is a moment of tremendous opportunity. Whether AI in the workplace creates harm for workers and deepens inequality or supports workers and unleashes expansive opportunity depends (in large part) on the decisions we make,” she wrote. “The stakes are high. But with these best practices and principles, guided by President Biden’s and Vice President Harris’s leadership, we can seize this moment and promote innovation and prosperity for all.”