H2O.ai Blog
Open-Weight AI Models: A Path to Responsible Innovation
The recent Request for Comments (RFC) issued by the National Telecommunications and Information Administration (NTIA) on open-weight AI models has sparked an important conversation about the future of AI. As we consider the potential benefits and risks associated with making AI model weights more accessible and transparent, it is clear ...
Testing Large Language Model (LLM) Vulnerabilities Using Adversarial Attacks
Adversarial analysis seeks to explain a machine learning model by understanding locally what changes need to be made to the input to change a model’s outcome. Depending on the context, adversarial results could be used as attacks, in which a change is made to trick a model into reaching a different outcome. Or they could be used as an exp...
A Brief Overview of AI Governance for Responsible Machine Learning Systems
Our paper “A Brief Overview of AI Governance for Responsible Machine Learning Systems” was recently accepted to the Trustworthy and Socially Responsible Machine Learning (TSRML) workshop at NeurIPS 2022 (New Orleans). In this paper, we discuss the framework and value of AI Governance for organizations of all sizes, across all industries a...
Using AI to Unearth the Unconscious Bias in Job Descriptions
“Diversity is the collective strength of any successful organization...”

Unconscious Bias in Job Descriptions

Unconscious bias affects us all in one way or another. It is defined as the prejudice or unsupported judgment in favor of or against one thing, person, or group as compared to another, in a way that is usually con...
H2O Driverless AI 1.9.1: Continuing to Push the Boundaries for Responsible AI
At H2O.ai, we have been busy. Not only do we have our most significant new software launch coming up (details here), but we are also thrilled to announce the latest release of our flagship enterprise platform, H2O Driverless AI 1.9.1. With that said, let’s jump into what is new: Faster Python scoring pipelines with embedded MOJOs for r...
The Importance of Explainable AI
This blog post was written by Nick Patience, Co-Founder & Research Director, AI Applications & Platforms at 451 Research, a part of S&P Global Market Intelligence From its inception in the mid-twentieth century, AI technology has come a long way. What was once purely the topic of science fiction and academic discussion is now...
Building an AI Aware Organization
Responsible AI is paramount when we think about models that impact humans, either directly or indirectly. Models that make decisions about people — whether about creditworthiness, insurance claims, HR functions, or even self-driving cars — have a huge impact on human lives. We recently hosted James Orton, Parul Pandey, and Sudala...
The Challenges and Benefits of AutoML
Machine Learning and Artificial Intelligence have revolutionized how organizations use their data. AutoML, or Automatic Machine Learning, automates and improves the end-to-end data science process. This spans cleaning the data, engineering features, tuning the model, explaining the model, and deploying it into p...
3 Ways to Ensure Responsible AI Tools are Effective
Since we began our journey making tools for explainable AI (XAI) in late 2016, we’ve learned many lessons, often the hard way. Through headlines, we’ve seen others grapple with the difficulties of deploying AI systems too. Whether it’s a healthcare resource allocation system that likely discriminated against millions of black peop...
5 Key Considerations for Machine Learning in Fair Lending
This month, we hosted a virtual panel with industry leaders and explainable AI experts from Discover, BLDS, and H2O.ai to discuss the considerations in using machine learning to expand access to credit fairly and transparently and the challenges of governance and regulatory compliance. The event was moderated by Sri Ambati, Founder and CE...
From GLM to GBM – Part 2
How an Economics Nobel Prize could revolutionize insurance and lending

Part 2: The Business Value of a Better Model

Introduction

In Part 1, we argued that machine learning (ML) can drive better revenue while helping manage regulatory requirements. We made the first part of the argument by showing how gradient boosting machines (GBM), a type of ML, can mat...
From GLM to GBM - Part 1
How an Economics Nobel Prize could revolutionize insurance and lending

Part 1: A New Solution to an Old Problem

Introduction

Insurance and credit lending are highly regulated industries that have relied heavily on mathematical modeling for decades. In order to provide explainable results for their models, data scientists and statisticians i...
Brief Perspective on Key Terms and Ideas in Responsible AI
INTRODUCTION

As fields like explainable AI and ethical AI have continued to develop in academia and industry, we have seen a litany of new methodologies that can be applied to improve our ability to trust and understand our machine learning and deep learning models. As a result, several buzzwords have emerged. In this short po...
Summary of a Responsible Machine Learning Workflow
A paper resulting from a collaboration between H2O.ai and BLDS, LLC was recently published in a special “Machine Learning with Python” issue of the journal Information (https://www.mdpi.com/2078-2489/11/3/137). In “A Responsible Machine Learning Workflow with Focus on Interpretable Models, Post-hoc Explanation, and Discrimination Testing...