The Guardian

Advances in application of Artificial Intelligence

How Artificial Intelligence (AI) is revolutionizing healthcare delivery CREDIT: NetObjex

• Breakthrough gives early warning of ozone problems, as WHO releases its first global report on the concept and six guiding principles for its design and use

Scientists have recorded major breakthroughs in the application of Artificial Intelligence (AI) in health, weather forecasting and other areas of science.

Scientists at the University of Houston’s (UH’s) Air Quality Forecasting and Modeling Lab have developed a new artificial intelligence system that could lead to improved ways to control high ozone problems and even contribute to solutions for climate change issues.


The breakthrough, published online in the Nature journal Scientific Reports, showed that ozone levels in the earth’s troposphere (the lowest layer of the atmosphere) can now be forecast accurately up to two weeks in advance, a remarkable improvement over current systems, which can accurately predict ozone levels only three days ahead.

Professor of atmospheric chemistry and AI deep learning at UH’s College of Natural Sciences and Mathematics, Yunsoo Choi, said: “This was very challenging. Nobody had done this previously. I believe we are the first to try to forecast surface ozone levels two weeks in advance.”

Ozone, a colourless gas, is helpful in the right place and amount. As part of the earth’s stratosphere (“the ozone layer”), it protects by filtering out ultraviolet (UV) radiation from the sun. But when there are high concentrations of ozone near the earth’s surface, it is toxic to the lungs and heart.

A researcher in Choi’s lab, doctoral student and first author of the research paper, Alqamah Sayeed, said: “Ozone is a secondary pollutant, and it can affect humans in a bad way. Exposure can lead to throat irritation, trouble breathing, asthma, and even respiratory damage. Some people are especially susceptible, including the very young, the elderly and the chronically ill.”

Ozone levels have become a frequent part of daily weather reports. But unlike weather forecasts, which can be reasonably accurate up to 14 days ahead, ozone levels have been predicted only two or three days in advance, until this breakthrough.

The vast improvement in forecasting is only one part of the story of this new research. The other is how the team made it happen. Conventional forecasting uses a numerical model, meaning it is based on equations for the movement of gases and fluids in the atmosphere. The limitations were obvious to Choi and his team: the numerical process is slow, making results expensive to obtain, and its accuracy is limited. “Accuracy with the numerical model starts to drop after the first three days,” Choi said.

The research team used a unique loss function in developing the machine-learning algorithm. A loss function helps in the optimisation of an AI model by mapping decisions to their associated costs. In this project, the researchers used the index of agreement, known as IOA, as the loss function for the AI model instead of conventional loss functions. The IOA is a mathematical comparison of the gaps between what is expected and how things actually turn out.

In other words, team members added historical ozone data to the trials as they gradually refined the programme’s reactions. The combination of the numerical model and the IOA as the loss function eventually enabled the AI algorithm to accurately predict outcomes of real-life ozone conditions by recognising what happened before in similar situations. It is much like how human memory is built.
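To make the idea concrete, the standard index of agreement compares squared prediction errors against the spread of predictions and observations around the observed mean, yielding a score between 0 (no agreement) and 1 (perfect agreement); using 1 − IOA as a loss gives a quantity a model can minimise. The sketch below is illustrative only: the function name and example ozone values are assumptions, not taken from the paper.

```python
import numpy as np

def ioa_loss(observed, predicted):
    """Return 1 - IOA, so a perfect forecast scores 0.

    IOA = 1 - sum((O - P)^2)
              / sum((|P - mean(O)| + |O - mean(O)|)^2)
    """
    observed = np.asarray(observed, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    mean_obs = observed.mean()
    # Squared gap between expectation and outcome
    numerator = np.sum((observed - predicted) ** 2)
    # Potential spread around the observed mean
    denominator = np.sum(
        (np.abs(predicted - mean_obs) + np.abs(observed - mean_obs)) ** 2
    )
    ioa = 1.0 - numerator / denominator
    return 1.0 - ioa  # minimise this during training

# Hypothetical surface-ozone readings (ppb) and two candidate forecasts
obs = [30.0, 45.0, 60.0, 52.0, 38.0]
good = [31.0, 44.0, 61.0, 50.0, 39.0]
bad = [60.0, 30.0, 40.0, 70.0, 20.0]

print(ioa_loss(obs, good))  # close to 0
print(ioa_loss(obs, bad))   # much larger
```

In a deep-learning framework this same expression would be written with the framework’s tensor operations so gradients can flow through it; the NumPy version above just shows the arithmetic of the metric.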


Meanwhile, according to new World Health Organisation (WHO) guidance published last week, Artificial Intelligence holds great promise for improving the delivery of healthcare and medicine worldwide, but only if ethics and human rights are put at the heart of its design, deployment, and use.

The report, Ethics and governance of artificial intelligence for health, is the result of two years of consultations held by a panel of international experts appointed by WHO.

WHO Director-General, Dr. Tedros Adhanom Ghebreyesus, said: “Like all new technology, artificial intelligence holds enormous potential for improving the health of millions of people around the world, but like all technology it can also be misused and cause harm.”

“This important new report provides a valuable guide for countries on how to maximize the benefits of AI, while minimizing its risks and avoiding its pitfalls.”

Artificial intelligence can be used, and in some wealthy countries is already being used, to improve the speed and accuracy of diagnosis and screening for diseases; to assist with clinical care; to strengthen health research and drug development; and to support diverse public health interventions, such as disease surveillance, outbreak response, and health systems management.

AI could also empower patients to take greater control of their own health care and better understand their evolving needs. It could enable resource-poor countries and rural communities, where patients often have restricted access to health-care workers or medical professionals, to bridge gaps in access to health services.

However, WHO’s new report cautions against overestimating the benefits of AI for health, especially when this occurs at the expense of core investments and strategies required to achieve universal health coverage.

It also points out that opportunities are linked to challenges and risks, including unethical collection and use of health data; biases encoded in algorithms, and risks of AI to patient safety, cyber security, and the environment.


For example, while private and public sector investment in the development and deployment of AI is critical, the unregulated use of AI could subordinate the rights and interests of patients and communities to the powerful commercial interests of technology companies or the interests of governments in surveillance and social control.

The report also emphasises that systems trained primarily on data collected from individuals in high-income countries may not perform well for individuals in low- and middle-income settings.

AI systems should therefore be carefully designed to reflect the diversity of socio-economic and health-care settings. They should be accompanied by training in digital skills, community engagement and awareness-raising, especially for millions of healthcare workers who will require digital literacy or retraining if their roles and functions are automated, and who must contend with machines that could challenge the decision-making and autonomy of providers and patients.

Ultimately, guided by existing laws and human rights obligations, and new laws and policies that enshrine ethical principles, governments, providers, and designers must work together to address ethics and human rights concerns at every stage of an AI technology’s design, development, and deployment.

To limit the risks and maximise the opportunities intrinsic to the use of AI for health, WHO provides the following principles as the basis for AI regulation and governance:

•Protecting human autonomy: In the context of health care, this means that humans should remain in control of health-care systems and medical decisions; privacy and confidentiality should be protected, and patients must give valid informed consent through appropriate legal frameworks for data protection.

•Promoting human well-being and safety and the public interest. The designers of AI technologies should satisfy regulatory requirements for safety, accuracy and efficacy for well-defined use cases or indications. Measures of quality control in practice and quality improvement in the use of AI must be available.

•Ensuring transparency, explainability and intelligibility. Transparency requires that sufficient information be published or documented before the design or deployment of an AI technology. Such information must be easily accessible and facilitate meaningful public consultation and debate on how the technology is designed and how it should or should not be used.


•Fostering responsibility and accountability. Although AI technologies perform specific tasks, it is the responsibility of stakeholders to ensure that appropriately trained people use them under appropriate conditions. Effective mechanisms should be available for questioning and for redress for individuals and groups that are adversely affected by decisions based on algorithms.

•Ensuring inclusiveness and equity. Inclusiveness requires that AI for health be designed to encourage the widest possible equitable use and access, irrespective of age, sex, gender, income, race, ethnicity, sexual orientation, ability or other characteristics protected under human rights codes.

•Promoting AI that is responsive and sustainable. Designers, developers and users should continuously and transparently assess AI applications during actual use to determine whether AI responds adequately and appropriately to expectations and requirements. AI systems should also be designed to minimize their environmental consequences and increase energy efficiency. Governments and companies should address anticipated disruptions in the workplace, including training for health-care workers to adapt to the use of AI systems, and potential job losses due to use of automated systems.

These principles will guide future WHO work to support efforts to ensure that the full potential of AI for healthcare and public health is used for the benefit of all.
