Elevating workplace safety through AI

The workplace shifts from enhancing physical security to navigating AI safety challenges

AI’s capabilities empower businesses to focus on strategic tasks. RomanSamborskyi/Shutterstock

After a pandemic year, workplace safety has become a primary concern for businesses. Forbes reported that employee well-being now supersedes attracting top talent as a priority for leaders. Dissatisfaction with safety measures is evident: just 65 percent of workers report being satisfied with their physical safety, a record low. In response, companies are turning to AI to enhance workplace safety.

AI improves security with biometric solutions and multi-factor authentication, identifies anomalies in data, streamlines security processes, and automates tasks like access control. While AI alone won’t ensure security, its capabilities empower businesses to focus on strategic tasks, making it a crucial part of improving workplace safety in the post-pandemic landscape.

Artificial intelligence (AI) systems pervade daily life, prompting the need for a robust safety science in the field, as highlighted at a workshop held by the Center for Advancing Safety of Machine Intelligence (CASMI) at Northwestern University. Attendees from diverse disciplines discussed how to define, measure, and anticipate AI safety. The goal was to embrace a multidisciplinary approach to understanding potential harms to individuals, groups, and society. Keynote speaker Ben Shneiderman underscored the possibility of creating safer algorithmic systems while acknowledging that total safety is unattainable.


The workshop recognised that AI safety must be integral at every stage of development, as harm can emerge at the individual, group, societal, and environmental levels. Researchers Sean McGregor and Kristian Lum presented insights, with McGregor advocating mandatory reporting of AI incidents to foster proactive learning. Drawing lessons from fields such as aeronautics, experts emphasised grounding AI research in established safety methodologies.

Efforts are underway to cultivate a safety culture in machine learning, as exemplified by the AI Risk Management Framework and Shneiderman’s human-centred AI approach, aiming to empower users while considering social connectedness. The workshop was seen as a pivotal moment to drive substantial safety advancements in the AI landscape.

The Property Report editors wrote this article. For more information, email: [email protected].
