Machine Learning Times

Explainable Machine Learning, Model Transparency, and the Right to Explanation


Check out this topical video from Predictive Analytics World founder Eric Siegel:

A computer can keep you in jail, or deny you a job, a loan, insurance coverage, or housing – and yet you cannot face your accuser. The predictive models generated by machine learning to drive these weighty decisions are generally kept locked up as a secret, unavailable for audit, inspection, or interrogation. The video above covers explainable machine learning and two widely advocated machine learning standards: transparency and the right to explanation. Eric discusses why these standards generally are not met and surveys the policy hurdles and technical challenges that are holding us back.

About the Author

Eric Siegel, Ph.D., is a leading consultant and former Columbia University professor who makes machine learning understandable and captivating. He is the founder of the Predictive Analytics World and Deep Learning World conference series, which have served more than 17,000 attendees since 2009, the instructor of the acclaimed online course “Machine Learning Leadership and Practice – End-to-End Mastery”, a popular speaker who’s been commissioned for more than 110 keynote addresses, and executive editor of The Machine Learning Times. He authored the bestselling Predictive Analytics: The Power to Predict Who Will Click, Buy, Lie, or Die, which has been used in courses at more than 35 universities, and he won teaching awards as a professor at Columbia, where he sang educational songs to his students. Eric also publishes op-eds on analytics and social justice. Follow him at @predictanalytic.
