By Satish Naidu

AI You Can Trust


At DataRobot, we strongly believe AI is a force for good. From deep learning algorithms that create art and writing, to applications in health and medicine, to astonishing head-to-head matchups between AIs and humans in games, AI has made enormous strides toward imitating human behavior.

At the same time, we recognize that AI is a nascent technology. There are numerous examples of AI systems that operate in ways that are undesirable or unexpected. This raises important questions about how to safeguard against bias and discrimination, as well as how to build reliable controls.

Rather than nebulously debating “good” or “bad” AI, we take a different approach to categorizing AI: determining which AI is worthy of our trust and which is not.

Here are some characteristics and examples of AI that is not trustworthy.

  • “Black box” algorithms are not suitable for high-stakes decision-making. AI should be explainable, and numerous techniques and alternative modeling approaches exist to support better interpretability. 

  • Models with target leakage are also an issue: these models have access to information that is not available at the time of prediction, which leads to a gross overestimate of their accuracy. (A minimal leakage screen is sketched after this list.) 

  • Models trained on data that lacks diversity will be under-equipped to perform accurately for underrepresented groups. Other models may then be built on this inaccurate data, or may end up scoring data with integrity issues of its own. 

  • Finally, untrustworthy AI is biased. In the worst examples, no one seems to know until the day the headline breaks and a vulnerable group has been disproportionately impacted. This should not be tolerated; beyond accountability, AI should be held to higher standards of testing before it is put into production. (A simple bias check is sketched below.)

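To make the target leakage point concrete, here is a minimal sketch of one way to screen for it. This is an illustration, not DataRobot's implementation: it assumes a numeric feature table with a binary target, and the 0.95 AUC threshold and the "claim_paid_date" example are hypothetical.

```python
# Minimal target-leakage screen: fit a trivial one-feature model per column
# and flag any feature that, by itself, predicts the target suspiciously well.
# Assumes numeric features and a binary target; all names are illustrative.
import pandas as pd
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

def flag_leaky_features(df: pd.DataFrame, target: str, auc_threshold: float = 0.95):
    """Return (feature, AUC) pairs whose single-column AUC exceeds the threshold."""
    y = df[target]
    suspects = []
    for col in df.columns.drop(target):
        X = df[[col]].fillna(0)  # numeric features assumed for simplicity
        auc = cross_val_score(
            DecisionTreeClassifier(max_depth=3), X, y, cv=5, scoring="roc_auc"
        ).mean()
        if auc >= auc_threshold:
            suspects.append((col, round(auc, 3)))
    return suspects

# Example: a hypothetical "claim_paid_date" column that is only populated
# after the outcome is known would score near-perfect AUC and be flagged here.
```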

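And here is one simple form the pre-deployment bias check mentioned above can take: comparing the model's positive-prediction rate across groups. Again a sketch under stated assumptions, not DataRobot's method; the column names and the 0.8 cutoff (the common "four-fifths" rule of thumb) are illustrative.

```python
# Minimal bias check: the disparate impact ratio, i.e. each group's
# positive-prediction rate scaled by the best-off group's rate.
import pandas as pd

def disparate_impact(preds: pd.Series, groups: pd.Series) -> pd.Series:
    """Positive-prediction rate per group, relative to the highest-rate group."""
    rates = preds.groupby(groups).mean()  # assumes 0/1 predictions, aligned index
    return rates / rates.max()

# Usage (hypothetical columns): ratios below ~0.8 suggest a group receives
# favorable predictions far less often and warrant investigation before launch.
# impact = disparate_impact(scored["prediction"], scored["age_band"])
# print(impact[impact < 0.8])
```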
At DataRobot, we know how important these problems are, which is why every day, we ask ourselves what is needed to build AI that earns your trust. DataRobot was founded to help organizations make better data-driven decisions with AI. After years of close partnership with private and public sector customers, we are at the forefront of understanding and developing what is required to make AI trustworthy and responsible. 

Our models are built to be explainable and transparent, supporting end-to-end auditability, with multiple tools available to evaluate model performance and derive insights. Robust target leakage detection and other guardrails are baked into the platform. Similarly, data quality assessments provide a quick but comprehensive evaluation of each dataset uploaded for modeling, and various visualization tools and metrics for exploratory data analysis let users dig deeper into their data's integrity, validity, and diversity. Best-in-class MLOps capabilities, including data drift, service health, and accuracy tracking, as well as the recently announced Humble AI feature that protects users at the level of a single prediction, all help ensure that the projected value of a well-crafted model carries over into the real-time predictions it makes. (A minimal sketch of one common drift check follows.)
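To illustrate the kind of check that data drift monitoring involves, here is a minimal sketch using the Population Stability Index (PSI), a common drift statistic. This is an illustration only, not the platform's implementation; the 0.2 alert threshold is a widespread rule of thumb, and the column names are assumptions.

```python
# Minimal data drift check via PSI: compare a feature's distribution at
# training time against recent scoring data. All names are illustrative.
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """PSI between a baseline sample and a new sample of one numeric feature."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    # New values outside the training range fall out of the bins; fine for a sketch.
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Clip to avoid division by zero / log(0) in sparse bins.
    e_pct = np.clip(e_pct, 1e-6, None)
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

# Usage (hypothetical columns): a value above ~0.2 would typically flag drift.
# if psi(train["income"].values, last_week["income"].values) > 0.2: alert()
```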

Today, the DataRobot platform is trusted by many of the largest and most complex organizations in the world, including customers in regulated industries like healthcare, banking, and insurance with the most stringent data privacy and protection regimes. We work continuously to earn that trust and to innovate. Our specialized Trusted AI Team of data scientists, academics, and engineers focuses on some of the most complicated and relevant questions in AI today, developing statistical techniques and tools to assess bias, model confidence, and robustness across multiple dimensions and phases of the machine learning lifecycle. At DataRobot, we know that the mandate of trusted and ethical AI is integral to realizing its full potential.
