The Shapley value and explainable machine learning
Machine learning via deep neural nets is famously a black-box approach to prediction, but efforts are being made to open the black box and explain, using the Shapley value, why a given prediction was made.
Here's a story from Datanami:
December 9, 2019
Real Progress Being Made in Explaining AI, by Alex Woodie
"Google made headlines several weeks ago with the launch of Google Cloud Explainable AI. Explainable AI is a collection of frameworks and tools that explain to the user how each data factor contributed to the output of a machine learning model.
"“These summaries help enterprises understand why the model made the decisions it did,” wrote Tracy Frey, Google’s director of strategy for Cloud AI, in a November 21 blog post. “You can use this information to further improve your models or share useful insights with the model’s consumers.”
"Google’s Explainable AI exposes some of the internal technology that Google created to give its developers more insight into how its large scale search engine and question-answering systems provide the answers they do. These frameworks and tools leverage complicated mathematical equations, according to a Google white paper on its Explainable AI.
"One of the key mathematical elements used is Shapley Values, which is a concept created by Nobel Prize-winning mathematician Lloyd Shapley in the field of cooperative game theory in 1953. Shapley Values are helpful in creating “counterfactuals,” or foils, where the algorithm continually assesses what result it would have given if a value for a certain data point was different.
...
“The main question is to do these things called counterfactuals, where the neural network asks itself, for example, ‘Suppose I hadn’t been able to look at the shirt colour of the person walking into the store, would that have changed my estimate of how quickly they were walking?'” Moore told the BBC last month following the launch of Explainable AI at an event in London. “By doing many counterfactuals, it gradually builds up a picture of what it is and isn’t paying attention to when it’s making a prediction.”
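To make Moore's counterfactual idea concrete, here is a minimal sketch of the probing loop he describes. This is my own illustration, not Google's implementation; the model object, input row, and feature index are hypothetical placeholders for whatever fitted model and data you have at hand.

```python
# Minimal sketch of counterfactual probing (an illustration, not Google's code):
# swap alternative values into one feature, hold the rest fixed, and measure
# how much the model's prediction moves on average.
import numpy as np

def counterfactual_effect(model, x, feature_index, alternatives):
    """Average absolute change in model.predict when feature `feature_index`
    of the single input row `x` is replaced by each value in `alternatives`,
    with all other features held fixed."""
    baseline = model.predict(x.reshape(1, -1))[0]
    shifts = []
    for value in alternatives:
        x_cf = x.copy()                 # build the counterfactual input
        x_cf[feature_index] = value
        shifts.append(abs(model.predict(x_cf.reshape(1, -1))[0] - baseline))
    return float(np.mean(shifts))

# Example usage (hypothetical names): a feature whose counterfactuals barely
# move the prediction is one the model is effectively not paying attention to.
# effect = counterfactual_effect(fitted_model, one_row, feature_index=3,
#                                alternatives=training_data[:, 3])
```

Averaging such marginal effects over many different coalitions of the remaining features is exactly what the Shapley weighting above formalizes.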