LATEST NEWS

New process to help tech firms develop safer AI

Safety experts at the University of York have developed the first standardised procedure to help assure the safety of robots, delivery drones, self-driving cars and other products that use machine learning.

A team of UK computer scientists has developed a ground-breaking process to help make machine learning (ML) for autonomous technologies safe.

The methodology helps engineers build a safety case that explicitly and systematically establishes confidence in the ML long before the technology ends up in the hands of everyday users.

As robots, delivery drones, smart factories and driverless cars become an increasing part of our lives, current safety regulations for autonomous technologies present a grey area. Global guidelines for autonomous systems are not as stringent as those for other high-risk technologies, and current standards often lack detail, meaning new technologies that use AI and ML to improve our lives are potentially unsafe when they go to market.

Developed by the Assuring Autonomy International Programme (AAIP) at the University of York, this new guidance is called the Assurance of Machine Learning for use in Autonomous Systems (AMLAS). The AAIP worked with industry experts to develop the process, which systematically integrates safety assurance into the development of ML components.

Dr Richard Hawkins, Senior Research Fellow and one of the authors of AMLAS, said: “The current approach to assuring safety in autonomous technologies is haphazard, with very little guidance or set standards in place. Sectors everywhere struggle to develop new guidelines fast enough to ensure that robotics and autonomous systems are safe for people to use.

“If the rush to market is the most important consideration when developing a new product, it will only be a matter of time before an unsafe piece of technology causes a serious accident.”

Machine Learning in Healthcare

The AMLAS methodology has already been used in several applications, including transport and healthcare. In one of its healthcare projects, AAIP is working with NHS Digital, the British Standards Institution, and Human Factors Everywhere to use AMLAS to help create resources that support manufacturers in meeting the regulatory requirements for their ML healthcare tools.

Dr Ibrahim Habli, Reader at the University of York and another of the authors, said: “Although there are many standards related to digital health technology, there is no published standard addressing specific safety assurance considerations. There is little published literature supporting the adequate assurance of AI-enabled healthcare products.

“AMLAS bridges a gap between existing healthcare regulations, which predate AI and ML, and the proliferation of these new technologies in the domain.”

Independent

An independent, neutral broker, AAIP connects businesses, academic research, regulators, and the insurance and legal professions to write new guidelines for safe AI, robotics, and autonomous systems.

“AMLAS can help any business or individual with a new autonomous product to systematically integrate safety assurance into the development of the ML components. We show how you can provide a persuasive argument about your ML model to feed into your system safety case. Our research helps us understand the risks and limits to which autonomous technologies can be shown to perform safely,” said Dr Hawkins.

“At York, we have a vast body of research into the best practices and processes for gathering evidence that can appraise the safety of these new complex technologies. We train people in the safe design, assessment and use of robotics and autonomous systems.

“Without a compelling argument about your ML model to feed into a system safety case, it is hard to assure the safety of your system. Developers can use our provided patterns, follow the process and instantiate their safety argument.”
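To give a flavour of what instantiating a safety argument pattern means in practice, here is a minimal illustrative sketch in Python. It is not the AMLAS notation or tooling; the claim texts, evidence names and the `Claim` structure are all hypothetical, chosen only to show the general idea of a hierarchical argument in which a top-level safety claim is supported by sub-claims, each backed by evidence.

```python
from dataclasses import dataclass, field

@dataclass
class Claim:
    """One node in a hierarchical safety argument (illustrative only)."""
    text: str
    evidence: list = field(default_factory=list)
    subclaims: list = field(default_factory=list)

    def is_supported(self) -> bool:
        # A claim is supported if it has direct evidence, or if it has
        # sub-claims and every one of them is itself supported.
        return bool(self.evidence) or (
            bool(self.subclaims)
            and all(c.is_supported() for c in self.subclaims)
        )

# Instantiate a hypothetical pattern for an ML component.
top = Claim("The ML model is acceptably safe in its operating context")
top.subclaims = [
    Claim("Training data is representative of the operating domain",
          evidence=["data-coverage report"]),
    Claim("Model meets its safety-related performance requirements",
          evidence=["held-out test results"]),
    Claim("Residual failure modes are mitigated at system level",
          evidence=["hazard analysis log"]),
]
print(top.is_supported())  # → True
```

If any sub-claim loses its evidence, the top-level claim is no longer supported, mirroring how a gap anywhere in the argument undermines the overall safety case.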

The AAIP is a safety assurance group at the University of York and works in partnership with Lloyd’s Register Foundation, the charitable arm of Lloyd’s Register, which is dedicated to engineering a safer world.

To access the Assurance of Machine Learning for use in Autonomous Systems (AMLAS) guidance, visit www.assuringautonomy.com.


SIVAN
