Unified Artificial Intelligence

Noah Goodman and others at MIT have recently developed an A.I.-based program for determining who would send e-mail to whom at a fictitious company.  However, it is not the application itself that is of interest; it is the way they carried out the computations.  The program relies on probabilistic rules that get updated over time.  It combines some of the original ideas of expert systems (rules and implications) with the probabilistic approach that has been successful in recent years.
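
To make this concrete, here is a minimal sketch (purely hypothetical; this is not the MIT team's code, and the rule and observations are invented) of one probabilistic rule whose probability is revised as evidence arrives, using a simple Beta-Bernoulli update:

```python
# Hypothetical sketch: one rule of the form "a person in role A e-mails
# a person in role B," with a probability revised as evidence arrives
# via a Beta-Bernoulli (conjugate) update.

class ProbabilisticRule:
    def __init__(self, prior_successes=1.0, prior_failures=1.0):
        # Beta(1, 1) prior: no initial preference either way.
        self.successes = prior_successes
        self.failures = prior_failures

    def observe(self, rule_fired: bool) -> None:
        # Each observed e-mail is one Bernoulli trial for the rule.
        if rule_fired:
            self.successes += 1
        else:
            self.failures += 1

    @property
    def probability(self) -> float:
        # Posterior mean of the Beta distribution.
        return self.successes / (self.successes + self.failures)


rule = ProbabilisticRule()
for fired in [True, True, False, True]:   # toy stream of observations
    rule.observe(fired)
print(f"P(rule holds) is now about {rule.probability:.2f}")   # ~0.67
```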

The research team admits that it does not yet have a final solution for how this should be accomplished; the current approach is too computationally intensive.

Here are some random thoughts on this issue.

  1. This sounds like an ideal topic for O.R. researchers to get involved in.  We also have expertise in logic and probabilistic reasoning.
  2. Humans are terrible at dealing with probabilities, but we are pretty good at forming categories and recognizing patterns.  Perhaps the A.I. system could be improved by using “flawed human reasoning.”  For example, the system could be overconfident, just as humans are.  It could rely too heavily on recent information.  It could use very simple rules for updating probabilities.  It could rely heavily on other programs’ expertise, just as we rely on what other people think.  (A small sketch of two such heuristics appears after this list.)
  3. Perhaps the best way of developing a “unified approach” is to run several approaches concurrently, refer to each approach as “an expert,” and then use a unifying method that takes the expert opinions and arrives at a group opinion.  I think this is already done, but perhaps in slightly different ways than I am suggesting.  (A sketch of one simple pooling scheme also appears below.)
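
Here is a small sketch of two of the heuristics from point 2.  It is illustrative only; the decay and sharpness parameters are assumptions, not values from any real system:

```python
# Hypothetical sketch of two "flawed human reasoning" heuristics:
# a recency-biased probability estimate and an overconfidence
# transform. The decay and sharpness values are made up.

def recency_weighted_probability(outcomes, decay=0.8):
    """Estimate P(event) from 0/1 outcomes, weighting recent
    observations more heavily (exponential forgetting)."""
    weight, num, den = 1.0, 0.0, 0.0
    for outcome in reversed(outcomes):   # newest outcome first
        num += weight * outcome
        den += weight
        weight *= decay
    return num / den

def overconfident(p, sharpness=2.0):
    """Push a probability away from 0.5, mimicking overconfidence
    (sharpness = 1 leaves p unchanged; larger values exaggerate)."""
    odds = (p / (1.0 - p)) ** sharpness
    return odds / (1.0 + odds)

history = [1, 0, 0, 1, 1, 1]              # oldest to newest
p = recency_weighted_probability(history)
print(f"recency-biased estimate: {p:.2f}")                 # ~0.75
print(f"after overconfidence:    {overconfident(p):.2f}")  # ~0.90
```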
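
And here is a sketch of point 3: each “expert” reports a probability, and a linear opinion pool (a weighted average, one standard way of combining expert judgments) produces the group opinion.  The experts and weights are invented for illustration:

```python
# Hypothetical sketch: several "expert" models each report a
# probability, and a linear opinion pool combines them into a
# group opinion. Weights would come from each expert's track record.

def linear_opinion_pool(expert_probs, weights=None):
    """Combine expert probabilities by a weighted average."""
    if weights is None:
        weights = [1.0] * len(expert_probs)
    total = sum(weights)
    return sum(w * p for w, p in zip(weights, expert_probs)) / total

# e.g. a rule-based expert, a statistical expert, a pattern matcher
opinions = [0.9, 0.6, 0.7]
weights = [0.5, 0.3, 0.2]     # trust earned on past predictions
print(f"group opinion: {linear_opinion_pool(opinions, weights):.2f}")  # 0.77
```

A log opinion pool (a weighted geometric combination) is a common alternative; the linear pool above is simply the easiest one to show.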

2 Responses to “Unified Artificial Intelligence”

  1. Jake Says:

    I like the idea you have of “flawed human reasoning.” I think that is an interesting point.

  2. Andrew Stephen Drazdik Jr Says:

    Probability in socio-economics, where national origin shapes stakeholders’ views as causal agents in the marketplace, may call on experts such as Douglas Walton in A Pragmatic Theory of Fallacy (Univ. of Alabama Press, 1995, pp. 140-143): “The conclusion only becomes known to be true as part of an inquiry” (argumentum ad judicium). As I have also questioned Saaty on an A.I. matrix based on random variables with the analytic hierarchy process: can elements of any strategy be normalized when the actors themselves account for corporate random variables, such as a hedge fund manager with a margin for error and people as assets?
