Noah Goodman and others at MIT have recently developed an A.I.-based program for determining who would send e-mail to whom at a fictitious company. It is not the application itself that is of interest, though, but the way the computations are carried out. The program relies on probabilistic rules that are updated over time, combining some of the original ideas of expert systems (rules and implications) with the probabilistic approach that has been so successful in recent years.
The research team admits that it does not yet have a final answer on how this should be accomplished: the approach is currently too computationally intensive.
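To make the idea of "probabilistic rules that are updated over time" concrete, here is a minimal sketch (my own illustration, not the MIT system): a single rule whose firing probability is maintained as a Beta-Bernoulli posterior and nudged by each new observation. The class name and the e-mail framing are hypothetical.

```python
class ProbabilisticRule:
    """A rule such as "Alice e-mails Bob" whose probability is learned over time."""

    def __init__(self, alpha: float = 1.0, beta: float = 1.0):
        # Beta(alpha, beta) prior over the rule's probability of firing
        self.alpha = alpha
        self.beta = beta

    def update(self, fired: bool) -> None:
        # Conjugate Beta-Bernoulli update: each observation shifts the posterior
        if fired:
            self.alpha += 1
        else:
            self.beta += 1

    @property
    def probability(self) -> float:
        # Posterior mean estimate of the rule's firing probability
        return self.alpha / (self.alpha + self.beta)


rule = ProbabilisticRule()
for fired in [True, True, False, True]:
    rule.update(fired)
print(round(rule.probability, 3))  # → 0.667
```

The appeal of this scheme is that each update is a constant-time operation; the computational burden the researchers mention arises when many such rules interact and must be reasoned about jointly.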
Here are some random thoughts on this issue.
- This sounds like an ideal topic for O.R. researchers to get involved in. We also have expertise in logic and probabilistic reasoning.
- Humans are terrible at dealing with probabilities, but we are pretty good at forming categories and recognizing patterns. Perhaps the A.I. system could be improved by using “flawed human reasoning.” For example, the system could be overconfident, just as humans are. It could rely too heavily on recent information. It could use very simple rules for updating probabilities. It could rely heavily on other programs’ expertise, just as we rely on what other people think.
- Perhaps the best way of developing a “unified approach” is to run several approaches concurrently, treat each one as “an expert,” and then use a unifying method that takes the expert opinions and arrives at a group opinion. I think this is already done, but perhaps in slightly different ways than I am suggesting.
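The “very simple rules for updating probabilities” and the recency bias mentioned above can be sketched together: exponential smoothing is a one-line update that deliberately over-weights recent observations, in contrast to an exact frequency count. This is my own toy illustration (the function names and the smoothing rate are assumptions, not anything from the MIT work).

```python
def recency_weighted(observations: list[bool], rate: float = 0.5) -> float:
    """Exponentially smoothed probability estimate: recent observations dominate."""
    p = 0.5  # uninformative starting guess
    for obs in observations:
        # Simple update rule: blend the old estimate with the newest observation
        p = (1 - rate) * p + rate * (1.0 if obs else 0.0)
    return p


def frequency(observations: list[bool]) -> float:
    """Exact empirical frequency, weighting all observations equally."""
    return sum(observations) / len(observations)


obs = [True] * 8 + [False] * 2  # mostly True, but the False cases arrive last
print(round(frequency(obs), 2))         # → 0.8
print(round(recency_weighted(obs), 2))  # → 0.25 (recent failures dominate)
```

The gap between the two estimates is the point: the smoothed rule is “wrong” in exactly the human way, cheap to compute, and quick to adapt when the environment changes.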
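The expert-pooling idea in the last bullet has a standard minimal form, the linear opinion pool: take each expert's probability for the same event and average them, optionally with weights reflecting trust in each expert. A short sketch (the numbers are hypothetical expert estimates, chosen only for illustration):

```python
def linear_opinion_pool(expert_probs: list[float],
                        weights: list[float] | None = None) -> float:
    """Combine several experts' probability estimates into one group opinion."""
    if weights is None:
        # Default: trust all experts equally
        weights = [1.0 / len(expert_probs)] * len(expert_probs)
    return sum(w * p for w, p in zip(weights, expert_probs))


# Three "experts" (e.g., three different approaches run concurrently)
experts = [0.9, 0.6, 0.75]
print(round(linear_opinion_pool(experts), 2))  # → 0.75
```

As the bullet suggests, this is already done in various forms (ensemble methods, forecast combination); the open question is how to set the weights when the experts are whole reasoning approaches rather than simple predictors.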